Senior Data Platform Engineer
Zipline · South San Francisco, CA
About You and the Role
Zipline’s Data team is responsible for powering data-driven decision making across the entire organization. To execute against our mission, we need to build a solid data foundation and ensure that every area of the business has access to highly reliable data.
We are hiring a talented and experienced Senior Data Platform Engineer to join our small but growing Data team and play a critical role in designing and executing a robust, forward-looking data strategy for the company. Our team owns the data pipelines and tools that provide secure, reliable, and accessible data, enabling team members to derive actionable insights. Doing our job well means the entire organization can make more informed decisions, innovate faster, and serve our customers better.
In this role, you will work directly with our Data, Engineering, Operations, Go-to-Market, and Finance teams to support the organization's data processing and analytics needs. You will be the internal expert on all things data engineering, empowering your peers with your expertise so that together, we can build a world-class data culture. This is a unique opportunity to directly influence not only our data systems, but also our drones and global operations. The ideal candidate will help us design systems that support the company’s needs today and many years into the future.
What You'll Do
- Help build, maintain, and scale the data pipelines that bring data from internal and external systems into our data warehouse.
- Partner with internal stakeholders to understand analysis needs and consumption patterns.
- Partner with upstream engineering teams to enhance data logging patterns and best practices.
- Participate in architectural decisions and help us plan for the company’s data needs as we scale.
- Adopt and evangelize data engineering best practices for data processing, modeling, and lake/warehouse development.
- Advise engineers and other cross-functional partners on how to most efficiently use our data tools.
What You'll Bring
- Advanced knowledge of Python & SQL
- Advanced proficiency with AWS cloud services
- Experience working directly on a data-intensive product and maintaining business-critical systems
- Experience building and maintaining data pipelines & ETL/ELT processes
- Experience with cloud data warehouses such as Snowflake, as well as other OLAP databases
- Experience developing and maintaining customer-facing APIs
- Knowledge of cloud infrastructure best practices and experience with continuous integration and continuous deployment (CI/CD) processes and tools
- Ability to communicate effectively with stakeholders to define requirements and timelines
- Passion for serving as a technical mentor and thought leader, and interest in evangelizing data engineering best practices
- Excitement to build a strong data foundation for the company
- Nice to haves:
  - Experience building streaming applications or pipelines using async messaging services or distributed streaming platforms like Apache Kafka
  - Knowledge of Airflow or another orchestration tool
  - Experience with Spark or PySpark
  - Experience with supply chain, manufacturing, and/or logistics datasets
  - Knowledge of data governance best practices and experience working in a highly regulated industry
  - Experience in schema design and dimensional data modeling, ideally using tools like dbt