At WHOOP, we are on a mission to unlock human performance. WHOOP empowers members to perform at a higher level through a deeper understanding of their bodies and daily lives.
WHOOP is seeking an experienced Data Engineer who thrives on innovation and takes ownership of building and evolving data systems at scale. In this role, you will design, build, and optimize scalable data pipelines and platforms that power our data-driven insights. You will play a key role in shaping robust ELT architectures, improving reliability and performance, and influencing technical direction across the data platform. With a strong focus on modern AWS infrastructure and tooling such as Snowflake, DBT, Kafka, and Spark, you will help elevate our analytical and operational capabilities. If you are excited about using AI to improve developer productivity and drive meaningful impact, we want you to join our team.
RESPONSIBILITIES:
- Design, build, and operate scalable ELT pipelines using Python and PySpark, with a focus on reliability, performance, and maintainability
- Own and improve batch and streaming data systems using Spark and Kafka, including monitoring and resolving production data issues
- Develop and optimize Snowflake data models and DBT transformations to support analytics, experimentation, and trusted metrics
- Partner with data scientists, analysts, and product teams to translate business requirements into well-designed data solutions
- Contribute to the evolution of the data platform by improving observability, data quality, and engineering best practices
- Leverage AI tools to accelerate development, improve code quality, and automate repetitive data engineering workflows
QUALIFICATIONS:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience
- 3-5 years of professional experience building and operating ETL/ELT pipelines in production environments
- Strong proficiency in SQL and hands-on experience with modern data warehousing concepts and dimensional modeling
- Professional experience using Python for data engineering, including writing clean, testable, and reusable code
- Experience with DBT for data modeling, testing, and documentation is preferred
- Experience with Spark and Kafka for batch or streaming data processing is preferred
- Strong problem-solving skills, clear communication, and the ability to work independently while collaborating in an agile environment
- Comfort using AI tools such as Copilot or ChatGPT to improve efficiency throughout the software development lifecycle