Senior AWS Data Engineer

Job ID: 112213
Location: Richardson, Texas  [On-Site]
Category: App/Dev
Employment Type: Contract
Date Added: 03/13/2026


This senior-level data engineering role requires designing and implementing scalable data architectures using AWS cloud services. The position involves developing data lakehouse solutions with Apache Iceberg, building robust data pipelines, and managing APIs to facilitate secure data access. The ideal candidate will collaborate closely with cross-functional teams to ensure data quality, security, and performance within a cloud-based environment.

Responsibilities

  • Design and develop scalable, reliable data lakehouse architectures on AWS using Apache Iceberg and related services.
  • Build, automate, and maintain ETL/ELT data pipelines to ingest data from multiple sources into the AWS ecosystem.
  • Create and manage secure, scalable RESTful APIs for internal data access leveraging AWS services such as API Gateway and Lambda.
  • Utilize AWS tools including S3, EMR, Lake Formation, and others to process, store, and analyze data efficiently.
  • Build and manage Apache Iceberg tables on Amazon S3 to enable advanced data lake features like ACID transactions, schema evolution, and time travel.
  • Optimize data performance through partitioning, data compaction, and fine-tuning Iceberg tables.
  • Implement data validation, error handling, and quality assurance processes to maintain data integrity.
  • Enforce data security and compliance measures, including access controls, encryption, and authorization via IAM or Cognito.
  • Collaborate with data scientists, analysts, and engineering teams to understand data requirements and deliver effective solutions.
  • Maintain detailed technical documentation for data workflows, API endpoints, and processes.
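The data validation and error-handling responsibilities above can be sketched in plain Python. This is a minimal illustration, not the team's actual pipeline: the schema, field names, and rules are hypothetical, and in practice checks like these would typically run inside the Spark/EMR job or a dedicated data-quality framework before rows land in an Iceberg table.

```python
# Hypothetical schema contract for an ingested record; a real pipeline
# would derive this from the Iceberg table schema or a data contract.
REQUIRED_FIELDS = {"order_id", "amount", "currency"}
VALID_CURRENCIES = {"USD", "EUR", "GBP"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        errors.append(f"invalid amount: {amount!r}")
    currency = record.get("currency")
    if currency is not None and currency not in VALID_CURRENCIES:
        errors.append(f"unknown currency: {currency!r}")
    return errors

def partition_batch(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a batch into clean rows and quarantined rows with failure reasons."""
    clean, quarantined = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            quarantined.append({"record": rec, "errors": errs})
        else:
            clean.append(rec)
    return clean, quarantined
```

Quarantined rows would then be routed to a dead-letter location (e.g., a separate S3 prefix) for inspection rather than silently dropped, preserving data integrity in the lakehouse.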

Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • Proven experience in data engineering, particularly with AWS cloud services.
  • Hands-on experience building and managing Apache Iceberg tables.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Strong SQL skills for data querying, modeling, and database design.
  • In-depth knowledge of AWS services including S3, EMR, Lambda, API Gateway, SageMaker, and IAM.
  • Experience with big data technologies like Apache Spark and Hadoop.
  • Demonstrated experience developing and deploying RESTful APIs with a focus on security and performance.
  • Familiarity with ETL tools and workflow orchestration platforms such as Apache Airflow.
  • Knowledge of DevOps practices, CI/CD pipelines, and infrastructure as code (Terraform).
  • Strong problem-solving, analytical, and communication skills.
  • Ability to work independently and effectively in agile team environments.
  • Certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Data Engineer – Associate are preferred.

Publishing Pay Range: $85.00 – $91.77 hourly
This position is office-based and requires the employee to work on-site.