About Egen:
Egen is a fast-growing and entrepreneurial company with a data-first mindset. We bring together the best engineering talent working with the most advanced technology platforms, including Google Cloud and Salesforce, to help clients drive action and impact through data and insights. We are committed to being a place where the best people choose to work so they can apply their engineering and technology expertise to envision what is next for how data and platforms can change the world for the better. We are dedicated to learning, thrive on solving tough problems, and continually innovate to achieve fast, effective results. If this describes you, we want you on our team.
Want to learn more about life at Egen? Check out these resources in addition to the job description.
We are seeking a Data Engineer with 2-5 years of experience to join our team. The ideal candidate will have a deep understanding of large-scale data processing systems, with a focus on designing and building both batch and streaming data pipelines. You will play a vital role in managing our data and ensuring its accessibility, security, and accuracy.
About the job:
Design, develop, and deploy large-scale data processing pipelines, both batch and streaming, using technologies such as Dataflow, Apache Beam, Spark, Akka, and Pub/Sub.
Apply expertise with multiple data storage technologies such as Bigtable/HBase, BigQuery, Spanner, and CloudSQL/Postgres.
Work with stakeholders to understand business problems, develop use-cases, and translate them into pragmatic and effective technical solutions.
Design and develop appropriate data schemas based on an understanding of the domain problem.
Manage data lineage and ensure data security with appropriate tools and methodologies.
Collaborate with data scientists, architects, and other stakeholders to ensure alignment between technical and business strategy.
Continuously monitor, refine, and report on the performance of data management systems.
Mentor junior data engineers, reviewing their outputs and directing their professional development.
About you:
2-5 years of experience in data engineering, particularly in designing and developing data pipelines.
Proven expertise with technologies such as Dataflow, Apache Beam, Spark, Akka, and Pub/Sub.
Experience with various data storage technologies, including Bigtable/HBase, BigQuery, Spanner, and CloudSQL/Postgres.
Ability to design data schemas based on an understanding of the domain problem.
Experience with data security and data lineage methodologies and tools is preferred.
Familiarity with agile development methodologies.
Exceptional communication skills, able to explain complex technical concepts in clear, plain English.
BSc degree in Computer Science, Engineering, or a related field, or equivalent work experience.
Nice to have:
Experience with data migration projects
Knowledge of dbt, Airflow, or similar orchestration tools
Experience in multi-cloud environments
Familiarity with data modeling and analytics use cases