Data Engineer
Job ID: 111641
Location: Houston, Texas [On-Site]
Category: App/Dev
Employment Type: Contract
Date Added: 01/05/2026
Role Summary
The Senior Data Engineer designs, builds, and optimizes large-scale, high-reliability data pipelines and lakehouse architectures. The role involves making key architectural decisions and implementing end-to-end data ingestion, transformation, and delivery solutions, and it requires a strong foundation in both data engineering and software engineering principles to deliver scalable, modular, and testable data systems that support analytics and business objectives.
Responsibilities
- Design, develop, and maintain ELT pipelines for data ingestion, transformation, modeling, and delivery across multiple layers (bronze, silver, gold).
- Implement incremental data loads, change data capture (CDC), merge/upsert, and idempotent pipeline patterns to ensure data reliability and repeatability (a brief sketch of this pattern follows this list).
- Define and apply data architecture patterns such as layered lakehouse structures, domain-oriented datasets, and semantic models aligned with business goals.
- Engineer physical data schemas including partitioning strategies, partition key selection, clustering, micro-partitioning, and compaction for optimal performance and cost efficiency.
- Develop curated datasets and data marts to facilitate analytics, reporting, and self-service business intelligence.
- Implement data quality checks, observability, lineage tracing, and monitoring to ensure data integrity and SLA adherence.
- Optimize query and system performance on cloud data platforms such as Snowflake by leveraging tasks, streams, compute (warehouse) sizing, and query tuning (a sketch of the stream-and-task pattern also follows this list).
- Manage lakehouse table formats (e.g., Apache Iceberg, Delta Lake) on object storage, including schema evolution, maintenance, and versioning.
- Collaborate with data architects, analytics teams, and business stakeholders to translate requirements into effective data solutions.
- Lead design reviews, mentor junior engineers, and contribute to engineering standards, frameworks, and best practices.
- Automate data pipeline deployment, monitor the data lifecycle, and apply DevOps principles such as CI/CD and infrastructure as code for continuous improvement.
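
As a concrete illustration of the incremental merge/upsert and idempotency patterns named above, here is a minimal sketch using PySpark with Delta Lake. The table paths, the order_id key, the ingested_at column, and the hard-coded watermark are illustrative assumptions, not details of this role's actual stack.

```python
# Minimal sketch: idempotent incremental load from a bronze staging table into a
# silver Delta table via MERGE (upsert). All table paths, column names, and the
# watermark value are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver_orders_merge").getOrCreate()

# Incremental load: pick up only rows ingested since the last successful run.
# In a real pipeline the watermark would come from persisted pipeline state.
last_watermark = "2026-01-01T00:00:00"
updates = (
    spark.read.format("delta").load("/lake/bronze/orders")
    .where(F.col("ingested_at") > F.lit(last_watermark))
    .dropDuplicates(["order_id"])  # one row per key keeps the MERGE deterministic
)

silver = DeltaTable.forPath(spark, "/lake/silver/orders")

# MERGE makes the load repeatable: re-running the same batch updates existing
# rows instead of inserting duplicates, so replays leave the table unchanged.
(
    silver.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```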
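
For the Snowflake tasks-and-streams item above, the sketch below shows one common shape of that pattern: a stream capturing changes on a staging table and a scheduled task that merges them into a curated table. Connection parameters, the warehouse, and all database, schema, table, and column names are assumed placeholders.

```python
# Minimal sketch: a Snowflake stream on a staging table plus a scheduled task
# that merges captured changes downstream. Names and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="...",            # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="SILVER",
)
cur = conn.cursor()

# Stream: records inserts/updates/deletes on the bronze table since last consumption.
cur.execute("CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE BRONZE.ORDERS")

# Task: runs on a schedule, but only when the stream actually has new data,
# and merges the changes into the silver table (upsert). Real pipelines would
# typically also filter on METADATA$ACTION / METADATA$ISUPDATE from the stream.
cur.execute("""
    CREATE OR REPLACE TASK MERGE_ORDERS_TASK
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      MERGE INTO SILVER.ORDERS t
      USING ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
      WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS, t.UPDATED_AT = s.UPDATED_AT
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, UPDATED_AT)
        VALUES (s.ORDER_ID, s.STATUS, s.UPDATED_AT)
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK MERGE_ORDERS_TASK RESUME")
```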
Qualifications
- 7 to 10+ years of experience in data engineering or related software engineering roles with a focus on data systems.
- Strong expertise in designing and maintaining large-scale data pipelines and lakehouse architectures.
- Proficiency with cloud data platforms such as Snowflake, including performance tuning and resource management.
- Experience with data lake formats such as Apache Iceberg or Delta Lake, including schema evolution and maintenance.
- Knowledge of data pipeline patterns including CDC, upsert, merge, and incremental loads.
- Familiarity with data quality, observability, lineage, and monitoring tools and practices.
- Ability to work collaboratively with cross-functional teams and translate complex requirements into scalable solutions.
- Proven experience in mentoring junior engineers and leading architecture reviews.
- Robust understanding of DevOps practices related to data pipelines, including automation with CI/CD tools.
- Excellent communication skills and the ability to work effectively in a team environment.
Publishing Pay Range: $92.00 – $95.00 Hourly
This position is office-based and requires the employee to work on-site.
