Senior Data Engineer

  • Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina
  • Full-Time
  • Remote
  • 5,000-6,000 USD / Month

Job Description:

Senior Data Engineer (Contractor through Deel)
Location: Remote (Argentina, Uruguay, or Mexico)
Contract Type: Contractor (through Deel)

We're looking for a Senior Data Engineer to join a global Data Engineering team focused on building and maintaining a large-scale data platform. This role is part of the Platform Operations group, responsible for ensuring the stability and reliability of data pipelines using modern technologies such as Databricks, Apache Airflow, and AWS.

Responsibilities

  • Data Architecture & Pipelines: Design, implement, and maintain scalable, high-performance data pipelines for real-time and batch processing.
  • ETL & Automation: Build Delta Lake pipelines using PySpark and Python, manage Airflow DAGs and Databricks jobs, and automate deployments through CI/CD processes (a brief illustrative sketch follows this list).
  • Data Quality & Governance: Develop data quality frameworks with automated checks and alerts, ensuring compliance with governance and PII standards.
  • Operations & Monitoring: Provide on-call support for data operations, monitor pipeline executions, and troubleshoot issues to maintain platform reliability.
  • Collaboration: Partner with data analysts, data scientists, and other technical stakeholders to deliver clean, consistent, and timely data.
  • Technical Leadership: Contribute to best practices, documentation, and process improvements to ensure scalability and maintainability.
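
To make the ETL & Automation and Data Quality bullets concrete, here is a minimal, purely illustrative sketch of the kind of Airflow DAG this role would maintain: a Databricks ingest run chained to an automated quality check. All specifics (DAG and task names, cluster settings, notebook paths, connection IDs) are hypothetical placeholders, not details from this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.databricks.operators.databricks import (
        DatabricksSubmitRunOperator,
    )

    with DAG(
        dag_id="events_delta_ingest",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Land raw events into a Delta table via a Databricks notebook run.
        ingest = DatabricksSubmitRunOperator(
            task_id="ingest_events",
            databricks_conn_id="databricks_default",
            json={
                "new_cluster": {
                    "spark_version": "13.3.x-scala2.12",
                    "node_type_id": "i3.xlarge",
                    "num_workers": 2,
                },
                "notebook_task": {"notebook_path": "/pipelines/ingest_events"},
            },
        )

        # Follow up with automated data quality checks and alerting.
        quality_check = DatabricksSubmitRunOperator(
            task_id="quality_check_events",
            databricks_conn_id="databricks_default",
            json={
                "new_cluster": {
                    "spark_version": "13.3.x-scala2.12",
                    "node_type_id": "i3.xlarge",
                    "num_workers": 1,
                },
                "notebook_task": {"notebook_path": "/pipelines/dq_checks"},
            },
        )

        ingest >> quality_check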

Required Skills

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Fluent English, both spoken and written.
  • 6+ years of professional experience building and maintaining production-grade data pipelines.
  • Strong programming skills in Python (including testing and packaging).
  • Advanced experience with PySpark and Apache Spark.
  • Expert-level knowledge of Databricks, Apache Airflow, and SQL.
  • Solid understanding of AWS services, particularly S3 and IAM.
  • Experience with version control and CI/CD pipelines for data engineering projects.

Nice to Have

  • Experience with Infrastructure as Code tools (Terraform or CloudFormation).
  • Familiarity with AWS services such as Glue, Athena, or EMR.
  • Experience integrating external data sources (Salesforce, Stitch, Kafka).
  • Knowledge of monitoring tools (Prometheus, Grafana, CloudWatch).
  • Experience with data governance, PII handling, and automation tools.
  • Interest or experience using AI-assisted tools like GitHub Copilot.

About the Opportunity

This role offers the chance to work in a high-impact environment where data is at the core of business decision-making. You'll be part of a collaborative, remote-first team that values technical excellence, autonomy, and continuous improvement.