You will be part of Lyft's core Marketplace team, focusing on developing pipelines that support decision-making for supply, demand, finance, and competitive data. Your contributions will help data scientists and business leaders make informed decisions that drive Lyft's success.

You will join a dynamic team responsible for data transport, collection, and storage systems while exposing services that treat data as a first-class citizen. Your work will involve proposing innovative ideas, evaluating multiple approaches, and implementing solutions based on fundamental principles and supporting data.

You will take ownership of the core data pipelines powering Lyft's top-line metrics and collaborate with cross-functional teams to evolve data models and architectures. By architecting, building, and launching robust data pipelines, you will enable seamless access to the insights that fuel critical functions such as Analytics, Data Science, and Engineering.
Responsibilities:
- Own core business data pipelines, scaling data processing to keep pace with the rapid data growth of a dynamic rideshare business
- Continuously evolve data models and schemas to meet business and engineering requirements
- Implement and maintain systems to monitor and enhance data quality and consistency
- Develop tools that support self-service management of data pipelines (ETL) and perform SQL tuning to optimize data processing performance (see the pipeline sketch after this list)
- Contribute to the Data Engineering team's technical roadmap, ensuring alignment with team and stakeholder goals
- Write clean, well-tested, and maintainable code, prioritizing scalability and cost efficiency
- Conduct code reviews to uphold code quality standards and facilitate knowledge sharing
- Participate in on-call rotations to maintain high availability and reliability of workflows and data pipelines
- Collaborate with internal and external partners to remove blockers, provide support, and achieve results
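To give a flavor of the pipeline work described above, here is a minimal sketch of a daily ETL workflow, assuming Airflow 2.4+; the DAG id, task logic, and schedule are hypothetical placeholders, not Lyft's actual pipelines.

```python
# Minimal daily pipeline sketch (hypothetical names; assumes Airflow 2.4+).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_rides(**context):
    """Placeholder extract step: pull the previous day's ride events."""
    print(f"extracting ride events for {context['ds']}")


def load_metrics(**context):
    """Placeholder load step: write aggregated top-line metrics."""
    print("loading aggregated metrics into the warehouse")


with DAG(
    dag_id="marketplace_topline_metrics",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
):
    extract = PythonOperator(task_id="extract_rides", python_callable=extract_rides)
    load = PythonOperator(task_id="load_metrics", python_callable=load_metrics)
    extract >> load  # load runs only after a successful extract
```

In practice, a pipeline like this would also carry data-quality gates and alerting, which is what the monitoring and on-call responsibilities above refer to.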
Experience:
- 3+ years of professional experience in data engineering or a related field
- Strong expertise in SQL and experience with Spark and/or PySpark (see the PySpark sketch after this list)
- Proficiency in a scripting language like Python or Bash
- Strong data modeling skills and a deep understanding of ETL processes
- Experience building and optimizing complex data models and pipelines
- Hands-on experience with workflow management tools (e.g., Airflow or similar)
- Familiarity with Trino SQL for data quality checks (see the Trino sketch after this list) and a basic understanding of SQS
- Nice to have: experience working directly with cross-functional teams (data analytics, data science, engineering) to align data engineering solutions with business goals
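As a rough illustration of the SQL/Spark skills listed above, here is a small PySpark sketch of a daily aggregation job; the input path, columns, and output location are invented for the example.

```python
# Illustrative daily aggregation in PySpark (all names are placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_ride_metrics").getOrCreate()

rides = spark.read.parquet("s3://example-bucket/rides/")  # placeholder path

daily = (
    rides.withColumn("ride_date", F.to_date("requested_at"))
    .groupBy("ride_date", "region")
    .agg(
        F.count("*").alias("ride_count"),
        F.sum("fare_usd").alias("gross_bookings_usd"),
    )
)

# Partitioning by date keeps downstream reads cheap and makes backfills easy.
daily.write.mode("overwrite").partitionBy("ride_date").parquet(
    "s3://example-bucket/metrics/daily_rides/"  # placeholder path
)
```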
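And for the Trino-based data-quality checks mentioned above, a minimal sketch using the open-source `trino` Python client; the host, catalog, table, and the duplicate-id check itself are all hypothetical.

```python
# Hypothetical duplicate-id check run against Trino (placeholder names throughout).
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # placeholder host
    port=8080,
    user="data-quality-bot",
    catalog="hive",
    schema="marketplace",
)
cur = conn.cursor()

# Count rows whose ride_id is not unique in yesterday's partition.
cur.execute(
    """
    SELECT count(*) - count(DISTINCT ride_id)
    FROM rides
    WHERE ds = date_format(current_date - INTERVAL '1' DAY, '%Y-%m-%d')
    """
)
duplicates = cur.fetchone()[0]
if duplicates:
    raise ValueError(f"found {duplicates} duplicate ride_id rows in yesterday's partition")
```

A check like this would typically run as a task in the same workflow tool that schedules the pipeline, failing the run before bad data propagates downstream.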