At Octopus Energy Services, we're at the forefront of the energy revolution, committed to accelerating the global transition to a sustainable future. The widespread adoption of low carbon technologies is critical to achieving net-zero emissions and combating climate change. Our mission is to make these innovative solutions not only environmentally friendly but also as cost-effective and accessible as possible, ensuring a greener, more affordable energy landscape for everyone.
As a Supply Chain and Cost Data Engineer, you will play a critical role in driving our business forward. You will work closely with our supply chain, finance, commercial and operations teams to provide cleaned, up-to-date information and data-driven insights that inform business decisions and improve our competitiveness and scalability. In this role, you'll work on some of the biggest challenges facing the organisation, using the latest technologies to solve them.
What You'll Do:
- Create, maintain and optimise data pipelines for various streams of cost information
- Model a cost database using dbt and create semantic layers to inform AI-enabled dashboards, reports, and visualisations
- Analyse and interpret data to track and improve commercial and operational performance
- Collaborate with supply chain and operations teams to identify opportunities for growth, optimisation and better service delivery
- Use statistical analysis and modelling techniques to forecast and project business performance (see the illustrative sketch after this list)
- Provide ad-hoc analysis and support as required
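To give a flavour of the forecasting work described above, here is a minimal sketch of projecting monthly cost data in Python; the figures and the choice of a Holt-Winters model are hypothetical illustrations, not a prescribed approach.

```python
# Illustrative sketch only: projecting monthly costs with exponential
# smoothing. The figures and the model choice are hypothetical.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly material costs
costs = pd.Series(
    [110.0, 112.5, 108.0, 115.2, 119.8, 121.0, 118.4, 124.9],
    index=pd.date_range("2024-01-01", periods=8, freq="MS"),
)

# Fit an additive-trend model and project the next three months of spend
model = ExponentialSmoothing(costs, trend="add").fit()
print(model.forecast(3))
```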
What You'll Have:
Essential skills:
- A passion for sustainability and the drive to make a positive impact on the world
- At least 2 years of experience building data pipelines using Python and dbt
- Experience collaborating on codebases using Git and GitHub
- Strong analytical skills, commercial awareness, and the ability to distinguish between what matters and what doesn't
- Experience working in a fast-paced, dynamic environment
- Comfortable manipulating and analysing data in a scripting language
- Proficient in writing robust, performant SQL queries

Desirable skills:
- Experience within supply chain or finance teams
The Data Tech Stack:
- Python as our main programming language
- dbt for data modelling
- Databricks as our data lake platform
- Kubernetes for data services and task orchestration
- Terraform for infrastructure
- Streamlit and Lightdash for data applications
- Airflow for job scheduling and tracking
- CircleCI for continuous deployment
- Parquet and Delta file formats on S3 for data lake storage
- Spark for data processing
- Spark SQL for analytics
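As a rough illustration of how a few of these tools fit together, the sketch below reads a Delta table from S3 with Spark and aggregates it with Spark SQL; it assumes a Spark session with Delta Lake support (such as a Databricks runtime), and the bucket path, table and column names are hypothetical.

```python
# Illustrative sketch only: Delta on S3 + Spark + Spark SQL.
# The path, table name, and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cost-analytics").getOrCreate()

# Load a hypothetical Delta table of supplier costs from the data lake
costs = spark.read.format("delta").load("s3://example-bucket/costs/supplier_costs")
costs.createOrReplaceTempView("supplier_costs")

# Aggregate total spend per supplier with Spark SQL
spark.sql(
    """
    SELECT supplier_id, SUM(amount) AS total_spend
    FROM supplier_costs
    GROUP BY supplier_id
    ORDER BY total_spend DESC
    """
).show()
```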