Work Experience

Monokera
Data Engineer
Feb. 2025 — Current

🔹 Development: Python, Ruby, SQL, NoSQL, Containers, Git, GNU/Linux
🔹 ETL Frameworks: Apache Spark, Apache Airflow, PySpark, Pandas
🔹 Databases: SQL (PostgreSQL), NoSQL (Athena)
🔹 DataViz Tools: QuickSight
🔹 Cloud Tools: AWS (S3, Athena, Glue, Lambda, CloudWatch)

- Resolved complex data incidents and optimized critical data pipelines through root-cause analysis, performance tuning, and resource allocation, maintaining high data availability and robust performance for real-time operational and analytical needs.
- Designed, developed, and refactored ETL/ELT pipelines and advanced SQL queries in Python (PySpark), streamlining ingestion from diverse sources and transforming large datasets for business intelligence reporting and strategic decision-making (see the sketch after this list).
- Established data validation frameworks and documentation standards, improving data accuracy and reliability, easing knowledge transfer between technical and business teams, and reducing troubleshooting time.
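
Below is a minimal sketch of the kind of PySpark ingestion-and-validation step described above. It is illustrative only: the S3 paths, the event_id/occurred_at schema, and the null-check rule are hypothetical stand-ins, not the actual pipeline.

```python
# Illustrative PySpark ETL step: ingest raw events, validate, and write
# partitioned Parquet. Paths and schema are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

# Basic validation: drop rows missing required keys, keep the rejects aside.
required = ["event_id", "occurred_at"]
valid = raw.dropna(subset=required)
rejected = raw.subtract(valid)

# Light transformation for downstream BI queries.
curated = (
    valid
    .withColumn("occurred_date", F.to_date("occurred_at"))
    .withColumn("ingested_at", F.current_timestamp())
)

curated.write.mode("overwrite").partitionBy("occurred_date") \
    .parquet("s3://example-bucket/curated/events/")
rejected.write.mode("append").json("s3://example-bucket/rejects/events/")
```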

INTER
Professor
Aug. 2023 — Jun. 2025

🔹 Development: Python, Java, JavaScript, Go, SQL, NoSQL, Containers, Git
🔹 Databases: SQL (PostgreSQL, MySQL, Oracle, MS SQL Server), NoSQL (MongoDB)

- Developed and delivered coursework on data structures, data analysis, relational and non-relational databases, application security and architecture, software testing, and Algorithms & Programming 1, emphasizing practical applications and real-world problem-solving.
- Guided students through hands-on projects, including console-based games (Chess, Sokoban) and a social-media analytics API, fostering critical thinking, algorithmic reasoning, and practical development skills.
- Led a curriculum redesign that integrated NoSQL databases (MongoDB) into the Data Analysis course, guiding students through migrating relational data models to NoSQL schema designs and optimizing their performance (a sketch of the migration pattern follows this list).
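
A minimal sketch of the relational-to-document migration pattern covered in the course, assuming a hypothetical orders/order_items table pair and PyMongo; the point is the embedding decision that replaces the JOIN, not the specific schema.

```python
# Illustrative migration: fold a one-to-many relational pair
# (orders -> order_items) into embedded MongoDB documents.
# Connection strings and table names are hypothetical.
import sqlite3
from pymongo import MongoClient

rel = sqlite3.connect("shop.db")          # stand-in for any RDBMS
mongo = MongoClient("mongodb://localhost:27017")
orders_col = mongo["shop"]["orders"]

for order_id, customer, total in rel.execute(
    "SELECT id, customer, total FROM orders"
):
    items = [
        {"sku": sku, "qty": qty, "price": price}
        for sku, qty, price in rel.execute(
            "SELECT sku, qty, price FROM order_items WHERE order_id = ?",
            (order_id,),
        )
    ]
    # Embedding items denormalizes the JOIN: one read per order afterwards.
    orders_col.insert_one(
        {"_id": order_id, "customer": customer, "total": total, "items": items}
    )
```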

Dentsu
Data Lead
Aug. 2023 — Feb. 2024

🔹 Development: Python, Rust, SQL, NoSQL, Containers, GNU/Linux, Git
🔹 Data Warehouses: SQL (PostgreSQL), NoSQL (MongoDB)
🔹 DataViz Tools: Power BI, Looker
🔹 ETL Frameworks: Apache Spark, Apache Airflow, PySpark
🔹 Cloud Tools: AWS (S3, Redshift, EC2, Lambda)

- Spearheaded the development and optimization of large-scale data pipelines for advanced analytics across the organization; improved processing techniques and streamlined workflows cut data latency by 40%, giving business intelligence and analytics teams faster access to actionable insights.
- Collaborated with stakeholders in marketing, operations, and product development to define data-driven strategies, designing dashboards and reports in Power BI and Looker that visualized key performance indicators and contributed to a 20% increase in campaign efficiency.
- Replaced manual workflows with automated, data-centric processes by building data platforms on AWS (S3, EC2, Lambda, Redshift, RDS) and orchestrating them with Apache Airflow, improving operational efficiency by 25% and reducing manual reporting errors (see the DAG sketch after this list).
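
A minimal Airflow DAG sketch of the automated reporting flow described above, assuming Airflow 2.x (the `schedule` argument replaced `schedule_interval` in 2.4). Task names and callables are hypothetical stubs, not the production DAG.

```python
# Illustrative Airflow DAG: stage campaign data in S3, then load it
# into Redshift for BI dashboards. Names and bodies are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    """Pull yesterday's campaign data and stage it in S3 (stubbed)."""
    ...

def load_to_redshift(**context):
    """COPY the staged files into Redshift for reporting (stubbed)."""
    ...

with DAG(
    dag_id="daily_campaign_report",
    start_date=datetime(2023, 9, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_redshift", python_callable=load_to_redshift)
    extract >> load
```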

HAVAS Media Group
Data Developer
Dec. 2022 — Jul. 2023

🔹 Development: Python, NoSQL, Containers, GNU/Linux, Git
🔹 Data Warehouses: NoSQL (BigQuery)
🔹 DataViz Tools: Power BI, Looker
🔹 ETL Frameworks: Apache Spark, Apache Airflow, PySpark, Apache Hive, Apache Hadoop
🔹 Cloud Tools: GCP (BigQuery, Cloud Storage, Cloud Functions, Composer, Cloud Run, Cloud SQL, Dataproc)

- Designed and deployed scalable data solutions for high-volume environments, automating ETL pipelines to streamline operations; manual processing time fell by 50%, freeing engineers for more complex work and improving data availability for analytics and reporting.
- Optimized backend data systems, including databases and processing frameworks (Apache Spark, Apache Hadoop, Apache Hive), to handle complex queries over very large datasets, improving overall system performance by 35% and strengthening real-time reporting (an illustrative tuning sketch follows this list).
- Built strong relationships with internal stakeholders across departments to keep technical solutions aligned with business objectives, and delivered training and ongoing support on data tools and reporting to non-technical teams, fostering data-driven decision-making throughout the organization.
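
An illustrative example of the class of Spark tuning referred to above: broadcasting a small dimension table and repartitioning on the grouping key before a wide aggregation. Bucket paths and column names are hypothetical.

```python
# Illustrative Spark tuning: broadcast a small dimension table and
# repartition before a wide aggregation. Names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("report_tuning").getOrCreate()

facts = spark.read.parquet("gs://example-bucket/facts/impressions/")
dims = spark.read.parquet("gs://example-bucket/dims/campaigns/")

# Broadcasting the small side avoids a shuffle-heavy sort-merge join.
joined = facts.join(broadcast(dims), on="campaign_id")

# Repartition on the grouping key so the aggregation shuffles once, evenly.
report = (
    joined.repartition("campaign_id")
    .groupBy("campaign_id", "day")
    .agg(F.sum("impressions").alias("impressions"))
)
report.write.mode("overwrite").parquet("gs://example-bucket/reports/daily/")
```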

KLYM
Data Engineer
May 2022 — Dec. 2022

🔹 Development: Python, Django, NoSQL, Containers, GNU/Linux, Git
🔹 Databases: SQL (PostgreSQL), NoSQL (MongoDB, Redis)
🔹 ETL Frameworks: Apache Spark, Apache Airflow, PySpark
🔹 Cloud Tools: AWS (S3, Redshift, EC2, Lambda)

- Developed and maintained scalable, robust data pipelines with Python, Apache Airflow, and AWS services (S3, EC2, Lambda, Redshift, RDS), processing millions of data points daily and reliably feeding real-time analytics dashboards and machine learning initiatives.
- Reengineered database schemas and implemented advanced indexing strategies in PostgreSQL, cutting average query execution times by 25% and giving business analysts and other data consumers faster access to critical metrics (an indexing sketch follows this list).
- Worked closely with data scientists and business analysts to integrate predictive models into production workflows, translating analytical insights into actionable strategies that improved key business processes and customer satisfaction.
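
A minimal sketch of the indexing work described above, assuming a hypothetical invoices table and psycopg2; the composite index mirrors a common filter-plus-sort access pattern.

```python
# Illustrative PostgreSQL indexing pass: a composite index matching the
# hottest filter + sort, then checked with EXPLAIN. Connection details,
# table, and columns are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=example user=example")  # hypothetical DSN
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run in a transaction

with conn.cursor() as cur:
    # Composite index covering the common "by customer, newest first" query.
    cur.execute(
        """
        CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_invoices_customer_created
        ON invoices (customer_id, created_at DESC)
        """
    )
    # Verify the planner now uses an index scan instead of a seq scan.
    cur.execute(
        """
        EXPLAIN SELECT * FROM invoices
        WHERE customer_id = %s ORDER BY created_at DESC LIMIT 50
        """,
        (42,),
    )
    for (line,) in cur.fetchall():
        print(line)
```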

NTT Data
Data Engineer
Oct. 2021 — Apr. 2022

🔹 Development: Python, Scala, NoSQL, Containers, GNU/Linux, Git
🔹 Data Warehouses: SQL (PostgreSQL, MySQL, Oracle, MS SQL Server), NoSQL (BigQuery)
🔹 DataViz Tools: Power BI, Looker
🔹 ETL Frameworks: Apache Spark, Apache Airflow, PySpark
🔹 Cloud Tools: AWS (S3, Redshift, EC2, Lambda); GCP (BigQuery, Cloud Storage, Cloud Functions, Composer, Cloud Run, Cloud SQL, Pub/Sub, Dataflow, Dataproc); MS Azure (Data Factory, Synapse Analytics, Synapse SQL, Synapse Spark)

- Designed and implemented data processing frameworks with Apache Spark, Apache Hadoop, and GCP services (BigQuery, Cloud Storage, Cloud Functions) to handle high-velocity, high-volume data streams, enabling real-time business insights that shaped strategic initiatives and customer engagement (a streaming sketch follows this list).
- Led the migration of several legacy on-premise data systems to cloud platforms on AWS and GCP, significantly reducing infrastructure costs, improving scalability and reliability, and minimizing downtime during the transition.
- Collaborated with data scientists and IT operations teams to streamline model deployment and integration into production environments, improving both the accuracy and the speed of delivering AI-driven insights.
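
An illustrative sketch of one streaming hop of the kind described above, using the google-cloud-pubsub and google-cloud-bigquery client libraries; the project, subscription, and table IDs are hypothetical.

```python
# Illustrative GCP streaming hop: consume events from Pub/Sub and
# stream them into BigQuery. Project and table names are hypothetical.
import json
from google.cloud import bigquery, pubsub_v1

bq = bigquery.Client()
TABLE = "example-project.analytics.events"  # hypothetical table

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    row = json.loads(message.data)
    # Streaming insert; errors come back per-row instead of raising.
    errors = bq.insert_rows_json(TABLE, [row])
    if not errors:
        message.ack()

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("example-project", "events-sub")
future = subscriber.subscribe(sub_path, callback=handle)
future.result()  # block the main thread while the callback consumes
```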

Universidad Autonoma de Zacatecas
Business Practices in AI
Mar. 2021 — Jul. 2021

🔹 Development: Python, NoSQL, Containers, GNU/Linux, Git
🔹 AI Tools: TensorFlow, Keras, Scikit-learn, TensorFlow-Audio

- Led the development of a deep learning model for emotion recognition from voice data; hyperparameter tuning, model-architecture exploration, and rigorous cross-validation yielded 65% accuracy, exceeding initial project expectations (a model sketch follows this list).
- Integrated the models into real-time systems to personalize user interactions and deliver context-aware recommendations, improving user satisfaction metrics by 15%.
- Ensured strict adherence to data ethics and privacy standards throughout data management and model development, particularly with large-scale datasets, delivering compliant models that balanced performance with regulatory and ethical requirements.
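
A minimal sketch of the pipeline shape described above: MFCC features (via librosa) feeding a small Keras classifier. The feature size, label set, and layer sizes are hypothetical, not the tuned architecture from the project.

```python
# Illustrative voice-emotion pipeline: MFCC features into a small
# Keras classifier. Sizes and labels are hypothetical.
import numpy as np
import librosa
import tensorflow as tf

N_MFCC, N_CLASSES = 40, 6  # e.g., angry/happy/sad/neutral/fear/surprise

def featurize(path: str) -> np.ndarray:
    """Average MFCCs over time into a fixed-length vector per clip."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC)
    return mfcc.mean(axis=1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_MFCC,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=30)
```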

Rollup Consulting
Python Developer
Nov. 2019 — Oct. 2021

🔹 DataViz Tools: Power BI, QlikSense, Looker, Python Dash
🔹 Data Warehouses: SQL (PostgreSQL, MySQL, Oracle, MS SQL Server)
🔹 Development: Python, SQL, Containers, GNU/Linux, Git

- Designed, developed, and deployed Python applications with a strong emphasis on performance and scalability; algorithmic improvements and more efficient data structures reduced processing times by 20%.
- Developed and maintained RESTful APIs with Django and FastAPI for integration with third-party systems and services, improving interoperability and shortening deployment times for new features (an endpoint sketch follows this list).
- Led migrations of legacy systems to cloud-based architectures with minimal disruption to ongoing operations, introducing authentication, authorization, and data encryption that strengthened system reliability and compliance with industry security standards.
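
A minimal FastAPI endpoint sketch of the kind described above; the Client model, route, and in-memory store are hypothetical stand-ins for the real integrations.

```python
# Illustrative FastAPI endpoint: typed path parameter, pydantic
# response model, explicit 404. The resource model is hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-integrations")

class Client(BaseModel):
    id: int
    name: str

FAKE_DB = {1: Client(id=1, name="Acme")}  # stand-in for a real store

@app.get("/clients/{client_id}", response_model=Client)
def get_client(client_id: int) -> Client:
    """Return one client, 404 if unknown; types give validation for free."""
    client = FAKE_DB.get(client_id)
    if client is None:
        raise HTTPException(status_code=404, detail="client not found")
    return client
```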

Rollup Consulting
BI Consultant
Apr. 2017 — Oct. 2021

🔹 DataViz Tools: Power BI, QlikSense, Looker, Python Dash
🔹 Data Warehouses: SQL (PostgreSQL, MySQL, Oracle, MS SQL Server)
🔹 ETL Tools: Python, SQL, Containers, GNU/Linux, IBM Cognos Analytics

- Designed and deployed business intelligence dashboards and scalable data warehouses supporting real-time analytics and reporting, contributing to a 15% increase in operational efficiency across key departments.
- Streamlined existing ETL workflows with scripting and workflow orchestration, reducing data processing times by 35% and improving data reliability and consistency for internal and external reporting (a minimal ETL sketch follows this list).
- Conducted training sessions and workshops on business intelligence tools, data analysis techniques, and reporting methodologies for non-technical teams, raising data literacy and supporting a company-wide shift toward data-driven decision-making.
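
A minimal sketch of a scripted ETL step of the kind described above, using pandas and SQLAlchemy; connection strings, table names, and the currency transform are hypothetical.

```python
# Illustrative scripted ETL step: extract from a source database,
# transform with pandas, load to a warehouse table. Names hypothetical.
import pandas as pd
from sqlalchemy import create_engine

src = create_engine("postgresql://user:pass@src-host/sales")      # hypothetical
dwh = create_engine("postgresql://user:pass@dwh-host/warehouse")  # hypothetical

# Extract: incremental pull keyed on a date column.
df = pd.read_sql("SELECT * FROM orders WHERE order_date >= '2021-01-01'", src)

# Transform: normalize currency and derive the reporting grain.
df["amount_usd"] = df["amount"] * df["fx_rate"]
daily = df.groupby(["order_date", "region"], as_index=False)["amount_usd"].sum()

# Load: replace the reporting table in the warehouse.
daily.to_sql("daily_sales", dwh, if_exists="replace", index=False)
```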