Each phase was a small step, building the holistic perspective and technical knowledge essential for developing AI solutions. My initial foray into full-stack and backend development (PHP, Python, Node.js, Java, C#) provided a robust understanding of software fundamentals. However, it was my evolution into roles demanding a holistic architectural view, guided by international experience and a focus on sustainable, clean code, high cohesion, and low coupling, that truly shaped how I approach system design.
The subsequent transition into DevOps automation (Terraform, AWS CDK, CDK8s) was driven by a desire to see code perform optimally in production. This phase equipped me with skills in building reliable, scalable, and deployable systems grounded in distributed systems principles that transfer directly to architecting and deploying robust AI models and pipelines. Managing legacy systems and tackling technical debt further honed my ability to optimize and innovate within existing infrastructures. This journey, from crafting individual components to orchestrating entire production environments, has prepared me well for AI Engineering.
My decade of experience isn't just a collection of roles; it's a curated skill set closely aligned with the demands of AI. I am actively seeking an AI Engineering role where I can apply that experience in building scalable, reliable software systems, along with my passion for automation, to develop and deploy impactful AI solutions. If your organization values a pragmatic, experienced engineer who can bridge the gap between foundational software architecture and cutting-edge AI, I would be glad to connect and explore how I can contribute to your vision.
Senior Software Engineer at TelevisaUnivision
Aug 2023 - Present
This role involved enhancing backend services for a large-scale streaming platform, emphasizing robust integrations, optimized task handling, and DevOps practices, all of which are foundational for AI engineering, particularly in MLOps and data-intensive AI applications.
Data Pipeline & Processing for AI/ML:
- Designed and integrated a video processing pipeline using Nest.js and an ETL process with MongoDB, increasing processing capacity from 2K to 5K assets per day through Quickplay API integration. This demonstrates experience in building scalable data pipelines crucial for ingesting and preparing data for AI models.
- Developed a Fastify-based scheduler backend enabling the content team to schedule live sports events using GCP Tasks, reliably supporting over 5 million active users. This showcases skills in building real-time systems capable of handling large data streams and user loads, relevant for live AI applications.
- Enhanced a live event capture pipeline for reliable recording and storage in GCP Storage, achieving a 99% capture rate. This is akin to ensuring robust data collection for training AI models.
- Migrated legacy SQL data pipelines to a flexible NoSQL (MongoDB) architecture, significantly enhancing data processing flexibility and team productivity for data-driven features.
- Designed and implemented a MongoDB database for event metadata, applying performance optimizations that reduced data fetching latency by 40%, critical for efficient data access in AI systems.
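To give a flavor of the optimization behind that latency reduction, here is a minimal sketch of the indexing-and-projection pattern in TypeScript with the official MongoDB driver. The collection, field names, and connection string are illustrative, not the production schema.

```typescript
import { MongoClient } from "mongodb";

// Illustrative connection string and database/collection names.
const client = new MongoClient("mongodb://localhost:27017");

async function main() {
  await client.connect();
  const events = client.db("catalog").collection("events");

  // Compound index matching the most common access pattern:
  // "upcoming events for a given league, sorted by start time".
  await events.createIndex({ league: 1, startTime: 1 });

  // Query shaped to use the index; the projection keeps payloads small,
  // which is the kind of change that cuts fetch latency.
  const upcoming = await events
    .find(
      { league: "liga-mx", startTime: { $gte: new Date() } },
      { projection: { _id: 0, title: 1, startTime: 1, streamUrl: 1 } }
    )
    .sort({ startTime: 1 })
    .limit(50)
    .toArray();

  console.log(upcoming);
  await client.close();
}

main().catch(console.error);
```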
MLOps & System Scalability:
- Restructured GCP task queues, introducing retry logic and validation layers to minimize message loss and task duplication (decreasing task error rates by 30%), skills transferable to building resilient data processing and inference pipelines in AI.
- Built a notification system using GitHub Actions and the Slack API, improving team visibility into production changes and deployment health, aligning with MLOps principles for monitoring and CI/CD.
- Led an initiative to improve codebase quality by prioritizing unit testing and adopting TDD techniques (Jest, Mocha, Chai), resulting in a measurable reduction in production bugs, fostering a culture of quality essential for reliable AI systems.
- Developed high-performance GraphQL APIs with NestJS and Apollo, reducing client data over-fetching by 40% and improving query response times by 30%, relevant for efficient data interaction with AI models.
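As an illustration of the GraphQL work in the last bullet, below is a minimal code-first NestJS/Apollo resolver sketch. The Event type, EventsService, and field names are hypothetical; the point is that clients select only the fields they need, which is how over-fetching gets reduced.

```typescript
import { NestFactory } from "@nestjs/core";
import { Module, Injectable } from "@nestjs/common";
import { GraphQLModule, Resolver, Query, Args, ObjectType, Field, ID } from "@nestjs/graphql";
import { ApolloDriver, ApolloDriverConfig } from "@nestjs/apollo";

@ObjectType()
class Event {
  @Field(() => ID) id!: string;
  @Field() title!: string;
  @Field() startTime!: Date;
}

// Hypothetical data source; in the real system this would wrap MongoDB.
@Injectable()
class EventsService {
  async findByLeague(league: string): Promise<Event[]> {
    return []; // placeholder
  }
}

@Resolver(() => Event)
class EventsResolver {
  constructor(private readonly eventsService: EventsService) {}

  // Clients request only the fields they need, e.g.
  // { events(league: "mls") { id title } } -- no over-fetching.
  @Query(() => [Event])
  events(@Args("league") league: string): Promise<Event[]> {
    return this.eventsService.findByLeague(league);
  }
}

@Module({
  imports: [
    GraphQLModule.forRoot<ApolloDriverConfig>({ driver: ApolloDriver, autoSchemaFile: true }),
  ],
  providers: [EventsService, EventsResolver],
})
class AppModule {}

NestFactory.create(AppModule).then((app) => app.listen(3000));
```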
Collaboration & Best Practices:
- Implemented Architectural Decision Records (ADRs) to document feature specifications and technical decisions, streamlining onboarding and accelerating feature development, showcasing strong communication and documentation skills vital in collaborative AI projects.
Back-end Lead at Sonatafy Technology
Sep 2022 - Aug 2023
As Back-end Lead, I modernized legacy systems and implemented new integrations for complex financial operations, focusing on automation, scalability, and robust API design.
Data Engineering & AI System Integration:
- Designed and implemented a RESTful backend architecture from scratch using Nest.js for managing financial records, storing data in Oracle DB, and integrating with Microsoft Business Central API. This reduced report generation time by 50% and enhanced data accuracy for 25 weekly reports, demonstrating skills in building data-centric applications.
- Implemented an event-driven design for the REST API using a Redis-based queue, enabling real-time monitoring of financial statements. This reduced processing errors by 30% and improved system uptime to 99.9%, skills applicable to real-time AI model monitoring and data processing (a minimal sketch of the queue pattern follows this list).
- Designed a MongoDB data structure to archive historical processing data and extract pipeline performance metrics, automating bi-weekly email reports. This improved data processing speeds by 25% and ensured accurate reporting for over 10,000 daily transactions, relevant for MLOps and data pipeline analytics.
- Replaced a legacy Salesforce data loader with a Python-based solution using Salesforce REST APIs. This upgrade reduced data load errors by 40%, improved processing time by 30%, and streamlined integration, showcasing Python proficiency for data tasks.
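The event-driven design in the second bullet above was built on a Redis-backed queue; the sketch below shows the general shape using BullMQ as a stand-in (the actual queue library, job names, and payloads are assumptions): the REST API enqueues statement events, a worker processes them with retries, and failures surface to monitoring.

```typescript
import { Queue, Worker } from "bullmq";

const connection = { host: "localhost", port: 6379 }; // placeholder Redis endpoint

// Producer side: the REST API enqueues an event whenever a financial
// statement changes, instead of processing it inside the request cycle.
const statementsQueue = new Queue("statements", { connection });

export async function enqueueStatement(statementId: string): Promise<void> {
  await statementsQueue.add(
    "reconcile",
    { statementId },
    { attempts: 3, backoff: { type: "exponential", delay: 5_000 } } // retries cut processing errors
  );
}

// Consumer side: a worker drains the queue and reports failures, which is
// what makes real-time monitoring of the pipeline possible.
const worker = new Worker(
  "statements",
  async (job) => {
    // ... fetch the statement, validate it, post results downstream ...
    console.log("reconciled statement", job.data.statementId);
  },
  { connection }
);

worker.on("failed", (job, err) => {
  console.error(`statement job ${job?.id} failed:`, err.message);
});
```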
Architectural Design & Performance for AI:
- Proposed and led the refactoring of legacy batch processing, improving data handling and execution times through optimized queries and memory-efficient designs, particularly in Oracle DB, crucial for efficient AI model training and batch inference.
- Created APIs that enabled a frictionless mobile experience, allowing seamless communication with backend services, translatable to building APIs for AI model consumption.
Leadership & Documentation:
- Created technical diagrams and architecture documents using Microsoft Visio, detailing integration flows and system components to support maintainability and cross-team collaboration, demonstrating essential documentation skills for complex AI projects.
DevOps Engineer at Auditmation (formerly Neverfail)
Sep 2021 - Sep 2022
This role focused on designing and implementing scalable, efficient infrastructure solutions, emphasizing IaC, GitOps, and cloud services, which are core to MLOps and managing AI infrastructure.
MLOps & Infrastructure Automation:
- Created and maintained Terraform modules to automate deliverable artifact deployment, enabling a 200% increase in deployment frequency by integrating with AWS services like ECS and CodeBuild. This is directly applicable to automating AI model deployment pipelines.
- Designed and implemented a GitOps-driven CI/CD pipeline using ArgoCD and Kubernetes, automating the deployment and management of containerized applications, a key practice in MLOps for managing AI application lifecycles.
- Optimized Docker artifact security by employing Linux Alpine-based images and filtering vulnerable packages, achieving a 60% reduction in vulnerabilities and aligning with OWASP Top 10 guidelines. This is vital for securing AI models and their dependencies.
- Implemented a robust CI/CD process for monorepo artifacts, leveraging Docker and JFrog for artifact management, which reduced production errors by 30%, ensuring higher-quality AI model deployments.
Scalable & Secure Cloud Solutions for AI:
- Configured AWS SQS and AWS Lambda to create an event-driven, asynchronous processing architecture, enhancing scalability and system responsiveness. This pattern is often used in AI for scalable data processing and model inference (a minimal handler sketch follows this list).
- Implemented AWS Single Sign-On (SSO) across multiple AWS accounts, centralizing user authentication and simplifying access control for cloud resources, critical for securing access to AI development and production environments.
- Developed a nightly monitoring job integrating with JIRA REST API to detect and notify security leaks, resulting in an 80% reduction in overall vulnerabilities, important for maintaining the security of AI systems and data.
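To make the SQS/Lambda pattern in the first bullet concrete, here is a minimal TypeScript handler sketch; the payload shape and processing logic are placeholders.

```typescript
import type { SQSHandler, SQSRecord } from "aws-lambda";

// Hypothetical domain logic for one queued message.
async function processRecord(record: SQSRecord): Promise<void> {
  const payload = JSON.parse(record.body);
  // ... transform / enrich / persist the payload here ...
  console.log("processed", payload.id);
}

// Lambda is triggered by the SQS event source mapping; failed batches
// are retried by SQS and eventually land in a dead-letter queue.
export const handler: SQSHandler = async (event) => {
  for (const record of event.Records) {
    await processRecord(record);
  }
};
```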
Collaboration & Best Practices for AI Teams:
- Implemented comprehensive Architectural Decision Records (ADRs) and flow diagrams to document internal processes, enhancing stakeholder understanding and streamlining onboarding, vital for clear communication in AI projects.
Back-end Lead & DevOps at DevBase
Oct 2019 - Sep 2021
In this dual role, I focused on automating infrastructure, designing robust systems, and integrating various AWS services, laying a strong groundwork for managing AI/ML workloads and data pipelines.
Automated & Scalable Infrastructure for AI (IaC & Serverless):
- Automated infrastructure provisioning with Terraform on AWS, standardizing multi-environment deployments and boosting team productivity. This is foundational for creating reproducible and scalable AI environments.
- Delivered serverless solutions utilizing AWS Lambda that reduced internal process costs by 35%, showcasing experience relevant for cost-effective AI model serving and data processing tasks.
- Developed reusable Lambda layers to centralize common dependencies, accelerating deployment speed by 3X and improving efficiency in developing serverless AI components (the pattern is sketched after this list).
- Created Terraform modules for AWS ECS Fargate deployments, streamlining the delivery of components (REST API, SQS consumer, and backfiller app), applicable for deploying containerized AI applications.
- Architected database deployment modules using AWS RDS (Postgres for production and serverless Postgres for lower environments), reducing costs by 55%, essential for cost-efficiently managing data for AI.
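The reusable-layer work above was implemented with Terraform modules at the time; purely to illustrate the pattern, and in the AWS CDK/TypeScript idiom mentioned earlier in this profile, a minimal sketch might look like the following. Construct names, paths, and runtimes are placeholders.

```typescript
import { App, Stack } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

const app = new App();
const stack = new Stack(app, "SharedDepsStack");

// One layer holds the dependencies every function needs (SDK clients,
// validation, logging), so individual bundles stay small and deploys fast.
const commonDeps = new lambda.LayerVersion(stack, "CommonDeps", {
  code: lambda.Code.fromAsset("layers/common"), // placeholder path
  compatibleRuntimes: [lambda.Runtime.NODEJS_18_X],
});

// Each function reuses the layer instead of re-bundling shared packages.
new lambda.Function(stack, "ReportingFn", {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: "index.handler",
  code: lambda.Code.fromAsset("src/reporting"), // placeholder path
  layers: [commonDeps],
});
```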
Data Processing & CI/CD for AI Pipelines:
- Engineered an AWS SQS processor integrated into an ETL workflow, transforming and loading over 500K daily transactions into an AWS RDS Postgres database, demonstrating robust data ingestion and processing for AI (a stripped-down sketch follows this list).
- Implemented batch processing using AWS SQS combined with database aggregators to consolidate bills and invoices, reducing report generation times by 60%, transferable to efficient large-scale data processing for AI.
- Developed a CI/CD migration pipeline using an AWS ECS Task (the “Migrator”), cutting database inconsistency errors by 70%, critical for maintaining data integrity in AI systems.
- Configured an ELK stack (Elasticsearch, Logstash, Kibana) to streamline error debugging and centralize logs, important for monitoring and troubleshooting AI applications.
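A stripped-down sketch of the SQS-to-Postgres ETL loop from the first bullet: long-poll the queue, normalize each message, insert it, and delete the message only after a successful write so nothing is lost on failure. Queue URL, table, and column names are illustrative.

```typescript
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";
import { Pool } from "pg";

const sqs = new SQSClient({ region: "us-east-1" });
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const queueUrl = process.env.QUEUE_URL!; // illustrative

async function pollOnce(): Promise<void> {
  const { Messages = [] } = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl: queueUrl, MaxNumberOfMessages: 10, WaitTimeSeconds: 20 })
  );

  for (const message of Messages) {
    const tx = JSON.parse(message.Body ?? "{}");

    // Transform: normalize fields before loading (illustrative columns).
    await pool.query(
      "INSERT INTO transactions (external_id, amount_cents, occurred_at) VALUES ($1, $2, $3)",
      [tx.id, Math.round(tx.amount * 100), new Date(tx.timestamp)]
    );

    // Delete only after a successful insert so failures are retried.
    await sqs.send(
      new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: message.ReceiptHandle })
    );
  }
}

// Long-poll forever; SQS handles visibility timeouts and redelivery.
(async () => {
  for (;;) await pollOnce();
})();
```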
Secure & Resilient AI Systems:
- Designed and deployed a robust authentication system with AWS Cognito for role-based access and granular permissions (the token-verification side is sketched after this list).
- Pioneered a canary deployment process via AWS API Gateway to direct traffic between releases, significantly lowering deployment error rates, a best practice for safe AI model updates.
- Developed an AWS SQS-powered ETL processor that standardized and loaded more than 500K daily medical records into AWS RDS Postgres, ensuring consistent and secure processing of sensitive data.
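For the Cognito-based authentication in the first bullet, the token-verification side of role-based access can be sketched roughly as follows, using AWS's aws-jwt-verify package; pool, client IDs, and group names are placeholders.

```typescript
import { CognitoJwtVerifier } from "aws-jwt-verify";

// Placeholder pool and app-client identifiers.
const verifier = CognitoJwtVerifier.create({
  userPoolId: "us-east-1_example",
  tokenUse: "access",
  clientId: "example-client-id",
});

// Verify the bearer token and enforce a group-based (role-based) check.
export async function authorize(token: string, requiredGroup: string): Promise<void> {
  const payload = await verifier.verify(token); // throws if invalid or expired
  const groups = (payload["cognito:groups"] as string[] | undefined) ?? [];
  if (!groups.includes(requiredGroup)) {
    throw new Error(`missing required group: ${requiredGroup}`);
  }
}
```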
Documentation & Collaboration:
- Authored Architectural Decision Records (ADR) to document feature changes and established ADR guidelines for classifying data formats, enhancing team alignment and knowledge transfer crucial for AI projects.
Senior Back-end Developer & DevOps Transformation at Supermassive
Oct 2017 - Oct 2019
This role involved building microservices, ETL processes for cryptocurrency data, and a significant transition into DevOps, focusing on automation, cloud optimization, and even an early application of AI services.
Data-Intensive Systems & Early AI Application:
- Developed a microservice system and ETL process to collect cryptocurrency data from different sources, find patterns, and notify 10k subscribers of investment opportunities, demonstrating skills in building systems for data collection, processing, and deriving insights, akin to AI predictive analytics.
- Built a web scraper to collect and analyze data from cryptocurrency exchanges and marketplaces, providing valuable insights to traders, showcasing data acquisition skills for AI model training.
- Implemented a feedback tool leveraging AWS Comprehend to analyze user sentiment from platform comments, providing direct experience with an AWS AI service for Natural Language Processing.
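The Comprehend integration in the last bullet boils down to a call like the one below (a minimal sketch; how comments were routed and thresholded was application-specific).

```typescript
import { ComprehendClient, DetectSentimentCommand } from "@aws-sdk/client-comprehend";

const comprehend = new ComprehendClient({ region: "us-east-1" });

// Classify a single user comment; the caller can aggregate results
// or flag strongly negative feedback for follow-up.
export async function classifyComment(text: string) {
  const { Sentiment, SentimentScore } = await comprehend.send(
    new DetectSentimentCommand({ Text: text, LanguageCode: "en" })
  );
  return { sentiment: Sentiment, scores: SentimentScore };
}
```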
MLOps Foundations (Automation, IaC, CI/CD):
- Mastered TypeScript and API data validation, avoiding what would otherwise have been a 35% error rate in production environments and highlighting a focus on data quality crucial for AI (the validation pattern is sketched after this list).
- Created a CI/CD deployment pipeline using a serverless framework on AWS, reducing delivery times by 50%.
- Transitioned the microservice system to an Infrastructure as Code (IaC) approach, cutting deployment time by 35%, and implemented cost-optimization strategies that reduced AWS expenses by 40%.
- Set up comprehensive monitoring using New Relic to identify system bottlenecks, informing proactive provisioning of improved cloud resources.
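The API validation mentioned in the first bullet follows the pattern sketched below; zod is used here purely as an illustrative validator and the schema is hypothetical. The idea is that malformed payloads are rejected at the boundary instead of failing deep inside the pipeline.

```typescript
import { z } from "zod";

// Hypothetical shape of an incoming market-data payload.
const TradeSignal = z.object({
  symbol: z.string().min(1),
  price: z.number().positive(),
  exchange: z.enum(["binance", "kraken", "coinbase"]),
  observedAt: z.coerce.date(),
});

type TradeSignal = z.infer<typeof TradeSignal>;

export function parseSignal(payload: unknown): TradeSignal {
  const result = TradeSignal.safeParse(payload);
  if (!result.success) {
    // Reject bad data at the boundary with a precise reason.
    throw new Error(`invalid trade signal: ${result.error.issues[0]?.message}`);
  }
  return result.data;
}
```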
Python & System Support:
- Provided comprehensive support for an automated Python-based trading system, reinforcing Python skills in a data-driven, automated context.
Cloud Infrastructure Skills (Highly relevant AWS services for AI/MLOps):
- Core contributions include automating infrastructure with Terraform on AWS; designing authentication with AWS Cognito; delivering serverless solutions with AWS Lambda (reducing costs by 35%); developing reusable Lambda layers (accelerating deployments 3X); integrating CloudWatch Events for automation; configuring an ELK stack; pioneering canary deployments with API Gateway; and creating Terraform modules for ECS Fargate. I also leveraged numerous other AWS services (IAM, Route 53, ACM, CloudTrail, GuardDuty, KMS, WAF, Config, ECR, ELB, CloudFront, RDS, SNS, VPC) for security, deployment, and operations.
Back-end Developer at Strapp
Oct 2016 - Sep 2017
This early backend role focused on web application development, data management, and API creation, providing foundational skills in handling data and building application logic.
Data-Driven Features & ETL:
- Developed advanced features for a property manager using PHP Symfony and HTML5 to render consolidated data reports, doubling sales opportunities by providing users with real-time account status and insights. This shows early experience in leveraging data for business impact.
- Implemented a management and tracking dashboard for realtor interns, reducing errors by 30% through improved manager visibility, highlighting the value of data visualization and quality.
- Designed and implemented an advertising module to track and analyze metrics for over 15,000 users, demonstrating data collection and analysis skills.
- Built lightweight ETL scripts using Node.js to facilitate real-time sales reporting, showing early experience in data transformation and movement.
API Development & System Optimization:
- Built a scalable RESTful API using Node.js, JWT authentication, and AWS S3 for a file-sharing application, foundational for creating data access points for AI models (a compact sketch follows this list).
- Demonstrated problem-solving by identifying and resolving critical bugs and performance bottlenecks, significantly improving application stability.
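The file-sharing API in the first bullet combined JWT authentication with S3-backed downloads; a compact sketch of that pattern, shown here with Express and today's AWS SDK v3 (bucket name and secret source are placeholders), looks like this.

```typescript
import express from "express";
import jwt from "jsonwebtoken";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const app = express();
const s3 = new S3Client({ region: "us-east-1" });
const JWT_SECRET = process.env.JWT_SECRET!; // placeholder secret source

// JWT middleware: reject requests without a valid bearer token.
app.use((req, res, next) => {
  const token = req.headers.authorization?.replace("Bearer ", "") ?? "";
  try {
    jwt.verify(token, JWT_SECRET);
    next();
  } catch {
    res.status(401).json({ error: "invalid token" });
  }
});

// Hand back a short-lived presigned URL instead of proxying file bytes.
app.get("/files/:key", async (req, res) => {
  const url = await getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "example-file-share", Key: req.params.key }),
    { expiresIn: 900 } // 15 minutes
  );
  res.json({ url });
});

app.listen(3000);
```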
Junior Developer at 4Geeks
Oct 2015 - Oct 2016
This initial role, including the internship, marked the beginning of my software development journey, with early exposure to Python, ERP systems, and database management laying the groundwork for more complex data handling.
Early Python & Data Integration Experience:
- Designed and developed customized client solutions using Python and Odoo, leveraging ERP systems to optimize workflows.
- Created endpoints and completed workflows using Django frameworks.
- Completed multiple ERP integration projects, enhancing operational efficiency by streamlining data flow.
- Contributed to improving internal HR tools (GeekFactory) using PHP, HTML5, and MySQL, integrated with Odoo for ERP functionalities.
- Developed a new Odoo module from scratch, delving into Python and Django frameworks, and worked with PostgreSQL.
- Successfully developed and deployed a Python-based application that streamlined data processing, achieving a 30% reduction in processing time.
Database Fundamentals & Optimization:
- Designed and implemented MySQL databases with normalization techniques.
- Enhanced database performance by applying indexing strategies and query optimization, reducing query execution time by 20%.
Soft Skills Development:
- Prepared and presented slides detailing a solution, boosting confidence and solidifying technical and communication skills.
- Participated in peer code reviews and collaborated with cross-functional teams, fostering a culture of continuous improvement and teamwork.