Senior Data Engineer
Everlane · Remote (USA)
At Everlane, we want the right choice to be as easy as putting on a great T-shirt. That’s why we partner with ethical factories around the world, work with high-quality and more sustainably sourced materials, and share the true cost of every product we make. But there's a lot more work to be done, and we're excited to be growing a team of motivated humans who are up for the challenge.
Everlane is looking for a Senior Data Engineer to develop some of our most complex data products and own the evolution of our data platform. In addition to shipping code that processes some of our largest and most complex data sets, this person will advise on the technical architecture for data products across our ecosystem. They will also ensure that our data platform keeps evolving to support business priorities, thinking through resiliency, scale, speed, and cost.
This is a full-time remote position reporting to our Director of Data. In this role, you will:
- Design and ship data pipelines impacting internal analytics, logistics, ecommerce, and marketing.
- Maintain the infrastructure and tooling powering our analytics and data products, including but not limited to our data warehouse (Redshift/Snowflake), Airflow, and Fivetran.
- Help architect systems (infrastructure, pipeline design, etc.) that deal with complex data flows and/or large datasets. Examples might be product recommendations, event streaming, or ERP systems.
- Act as an empathetic mentor to other engineers and analysts, especially around data engineering best practices and toolkits.
- Manage data engineering contractors and provide guidance around their priorities and development.
- Proactively identify opportunities to improve the resiliency, scale, speed, and cost of our data platform in order to support upcoming business objectives. Work with your leader to get these items prioritized and shipped.
We'd love to hear from you if you have:
- 5+ years of experience as a data engineer working with a modern tech stack
- Expertise in using SQL and Python to process large-scale datasets
- Proficiency in batch processing technologies such as Spark and Airflow
- Track record of architecting data processing systems in a cloud environment (we’re an AWS shop)
- Track record of designing analytical data models for unstructured datasets such as web events or inventory transactions
- A data quality mindset, especially using automation to detect and resolve data quality issues
Nice to have:
- Exposure to data streaming technologies such as Firehose and Kafka
- Experience with additional scripting languages such as Ruby and R
- Comfort working with Docker and/or Kubernetes
- Experience either standing up a data warehouse from scratch, or migrating to a new data warehouse
Please note: We are only accepting applications from those who file their taxes in one of the following states: California, Florida, Illinois, Kansas, Massachusetts, Minnesota, North Carolina, New Hampshire, New Jersey, New York, Oregon, Pennsylvania, Tennessee, Texas, Virginia, and Washington.