At Netlify, we're building a platform to empower digital designers and developers to build better, more elaborate web projects than ever before. We're aiming to change the landscape of modern web development.
We recently raised $55M in Series C funding to bring forward the next generation of tooling for a more accessible web. Among our investors are Andreessen Horowitz, Kleiner Perkins, EQT Ventures as well as the founders of Figma, GitHub, Slack and Yelp. This latest round brings Netlify’s funding raised in total to $100M to date.
Netlify is a diverse group of incredible talent from all over the world. We’re ~43% women or non-binary, and our team represents about a third as many nationalities as we have team members.
About the role:
As a Senior Data Engineer working on our critical data pipelines at Netlify, your contributions will have a huge impact on our burgeoning data infrastructure efforts. You’ll design and build pipelines that support critical business intelligence functions, help enable decision-making around user-facing features, and empower our coworkers to experiment with and build on top of our data.
Some of the things you'll do:
- Help evolve and scale our data engineering platform, with an eye toward future fit and growth.
- Work closely with the data science and engineering teams, as well as other stakeholders from our finance, sales, marketing, and product teams, to understand the data needs of the business and produce processes that enable a better product and support growth decision-making.
- Help evolve our CI/CD strategy for our ETL jobs and pipelines.
- Develop a data retention strategy for different pipelines and sources, and automate its implementation.
We're looking for someone who has experience with:
- Building robust and scalable pipelines (several years of hands-on experience)
- Designing efficient and maintainable schemas
- Using Python, SQL, and Spark for ETL tasks and building pipelines
- Using CI/CD in a data engineering/ops setting
- RESTful API development
- Data infrastructure tools and systems like Airflow, Apache Beam, Kafka, or Spark
- Integrating with common data stores and data warehouses (e.g. MySQL, Mongo, Redshift, BigQuery)
- (Nice to have) Some exposure to working with or managing R-based ETL tasks or pipelines
Within 1 month, you’ll…
- Learn about our existing data science platform by pairing with your teammates and perhaps by using some of the e-learning resources supplied by our provider.
- Learn about our dev and data ops process and supporting tools.
- Have some one-on-ones and pairing sessions with some of the people you'll be working most closely with.
- Review the existing pipelines and how things are organized in our data lake and data stores.
- Start committing small quality-of-life improvements to pipelines as part of learning the shape of our data and how it flows through our systems and processes.
- Start helping perform code reviews for new changes.
Within 3 months, you’ll…
- Join the on-call rotation with the other data engineers and feel confident in your ability to handle the most common issues (assuming they can't yet be automated away!) for your critical pipelines.
- Have gained a solid understanding of our Data Science peers' needs and skill sets, so that we support them with data sources and schemas that enable them to work efficiently.
- Have rolled out your first few new pipelines to supply your partners in Data Science with a new clean data source.
- Work with your peers in data engineering to start rolling out a comprehensive dataOps strategy that helps us increase observability and reproducibility and supports fast iteration.
Within 6 months, you’ll…
- Have defined and started to implement a data retention strategy for all of our "gold" or "tier 3" data that fits into our overall data retention strategy.
- Have implemented a comprehensive dataOps strategy for new pipelines.
- Have worked with your peers to put in place a framework and process so that we have solid monitoring and regression testing of existing pipelines at each stage of the ETL process.
- Start to mentor and coach other team members in Data Engineering and Data Science.
Within 12 months, you’ll…
- Evaluate new data stores and tools as needed to help us scale our ability to support Data Science or to provide them with new tools (e.g., evaluating AutoML platforms, or transitioning to different intermediate stores to decrease latency).
- Curate and manage a catalog of clean "gold"/"tier 3" data sources that are resilient and tolerant of change for your partners in Data Science to leverage.
- Drive and push for new or improved strategies to help scale our dataOps practice.
- Instill the need for increased automation and observability as a core tenet of our team for new peers.
Of everything we've ever built at Netlify, we are most proud of our team.
We believe that empowered, engaged colleagues do their best work. We’ll give you the tools you need to succeed and look to you for suggestions to improve not just your daily job, but every aspect of building a company. Whether you work from our main office in San Francisco or you are a remote employee, we’ll be working together a lot—pairing, collaborating, debating, and learning. We want you to succeed! About 63% of the company is remote across the globe; the rest are in our HQ in San Francisco.
To learn a bit more about our team and who we are, make sure to visit our about page.
Not sure you meet 100% of our qualifications? Please apply anyway!
With your application, please include: a thoughtful cover letter explaining why you would enjoy working in this role and why you’d like to work at Netlify, and a resume or short listing of your job history (a link to a LinkedIn profile would be fine). Please note that, unfortunately, Netlify is unable to provide sponsorship for this role at this time.
When we receive your complete application with the items above, we’ll get back to you about the next steps.