AWS Cloud Data Engineer

Job ID: 112257
Location: Richardson, Texas [Hybrid]
Category: App/Dev
Employment Type: Contract
Date Added: 03/23/2026

We are seeking an experienced AWS Cloud Data Engineer to design and implement innovative data architectures that power our organization's insights and decision-making. This is an exciting opportunity to work at the forefront of cloud data engineering, using cutting-edge AWS services and open-source technologies.

Key Responsibilities:

  • Design and Develop Data Architecture: Create scalable, resilient, and efficient data lakehouse solutions on AWS, leveraging Apache Iceberg, AWS native services, and Snowflake to meet complex business requirements.
  • Build and Maintain Data Pipelines: Develop, automate, and optimize ETL/ELT workflows to ingest and process data from diverse sources into our AWS and Snowflake ecosystem, ensuring high data quality and timeliness.
  • Create and Manage Data APIs: Design and maintain secure, scalable RESTful APIs and other data access endpoints, enabling seamless integration for internal teams and applications using AWS services.
  • Implement AWS Data Services: Utilize Amazon S3, Amazon EMR, AWS Lake Formation, and other AWS tools to process, store, and analyze data efficiently, with native support for Iceberg tables.
  • Manage Apache Iceberg Tables: Build and oversee Iceberg tables on Amazon S3, facilitating advanced data lakehouse features such as ACID transactions, schema evolution, and time travel.
  • Optimize Data Performance: Apply partitioning strategies, data compaction, and performance tuning techniques to enhance query speed and reduce latency.
  • Ensure Data Quality & Security: Implement rigorous data validation, error handling, and security measures, including access control via AWS Lake Formation, IAM, and Cognito, to ensure compliance with data protection standards.
  • Collaborate Across Teams: Work closely with data scientists, analysts, software engineers, and business stakeholders to understand their data needs and deliver tailored solutions.
  • Provide Technical Support & Documentation: Troubleshoot data pipeline and API issues effectively, and maintain comprehensive technical documentation for workflows, processes, and API specifications.

Qualifications & Skills:

  • Proven experience designing and deploying data architectures on AWS, with familiarity with Apache Iceberg, Snowflake, and related tools.
  • Strong programming skills in Python, Scala, or Java for data pipeline development.
  • Hands-on experience with AWS data services such as S3, EMR, Glue, and Lake Formation.
  • Knowledge of data modeling, schema design, and performance optimization for large datasets.
  • Understanding of security best practices for cloud data environments, including access controls and compliance standards.
  • Excellent communication skills to collaborate effectively with cross-functional teams.

Publishing Pay Range: $70.00 – $79.11 Hourly