Powering the Future with AIDA
To lead the next phase of our AI evolution, we've launched a new business unit, AIDA (Artificial Intelligence & Data Analytics), a strategic engine designed to scale our AI ambitions with precision and purpose. This marks a pivotal shift in how we operate, innovate, and serve, embedding intelligence into every layer of our business.
At Singtel, this is more than a technology upgrade. It's a strategic transformation that redefines how value is created across the enterprise core, augmenting human capabilities and unlocking entirely new potential. It is a journey that aligns people, platforms, and processes under one cohesive strategy. Our mission is to build AI literacy and foster a culture where intelligence empowers people.
We welcome you to join us on a transformational journey that is reshaping the telecommunications industry and redefining what's possible with AI at its core. Grow with us in a workplace that champions innovation, embraces agility, and puts human potential at the heart of everything we do.
Be a Part of Something BIG!
- Build and support data ingestion and transformation pipelines on a modern hybrid cloud platform
- Develop basic batch and streaming pipelines with cloud tools such as Databricks and Kafka under the guidance of senior engineers (a minimal streaming sketch follows this list)
- Contribute to the delivery of reliable, secure, and high-quality data for analytics, reporting, and machine learning use cases
- Implement a knowledge base and retrieval-augmented generation (RAG) solution stack to support GenAI agentic use cases (see the RAG sketch below)
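To give a flavour of the streaming work described above, here is a minimal sketch that reads events from a Kafka topic with Spark Structured Streaming and appends them to a Delta table. The broker address, topic, schema, and paths are illustrative placeholders, not details from this posting, and the cluster is assumed to have the Kafka and Delta Lake connectors available (as on Databricks).

```python
# Minimal sketch: Kafka -> Delta streaming ingestion with PySpark.
# Broker address, topic, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Assumed event schema for the example; real schemas come from the source system.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; parse the JSON payload into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .outputMode("append")
    .start("/tmp/delta/events")                               # placeholder path
)
```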
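The RAG responsibility could, under one common pattern, look like the sketch below: embed knowledge-base passages, retrieve the nearest ones for a question, and assemble them into a prompt for a generator. The embedding model choice, the toy passages, and the generate_answer() stub are assumptions for illustration; the actual stack is not specified in this posting.

```python
# Minimal RAG retrieval sketch using sentence-transformers for embeddings.
# The model choice and the generate_answer() stub are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Toy knowledge base; in practice these would be chunked documents.
passages = [
    "Delta Lake stores table data as versioned Parquet files.",
    "Kafka topics carry ordered streams of events.",
    "Databricks jobs can orchestrate batch and streaming pipelines.",
]
passage_vecs = model.encode(passages, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question (cosine similarity)."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = passage_vecs @ q_vec  # normalized vectors: dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

def generate_answer(question: str) -> str:
    """Stub: a real implementation would send this prompt to an LLM."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return prompt  # placeholder; an agentic stack would call a model here

print(generate_answer("Where does Delta Lake keep table data?"))
```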
Make An Impact By
- Build and maintain data ingestion pipelines for batch and streaming data sources using tools like Databricks and Kafka
- Perform data transformation and cleansing using PySpark or SQL based on business and technical requirements (a cleansing sketch follows this list)
- Monitor and troubleshoot data workflows to ensure data quality and pipeline reliability
- Work closely with senior data engineers to understand platform architecture and apply best practices in pipeline design
- Assist in integrating data from diverse source systems (files, APIs, databases, streaming)
- Help maintain metadata and pipeline documentation for transparency and traceability
- Participate in integrating pipelines with tools such as Microsoft Fabric, Databricks, Delta Lake, and other platform components
- Implement and operate a data virtualization layer to centralize visibility and control of data across diverse sources
- Contribute to automation efforts using version control and CI/CD workflows (see the pipeline-test sketch after this list)
- Apply basic data governance and access control policies during implementation
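As a hedged sketch of the transformation and cleansing duties above: the snippet below deduplicates records, drops rows missing a required key, normalizes columns, and applies a simple data-quality gate with PySpark. The column names and paths are illustrative placeholders, not taken from any actual pipeline.

```python
# Minimal batch cleansing sketch in PySpark; column names and paths are
# illustrative placeholders, not details from this posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp, trim

spark = SparkSession.builder.appName("cleanse-customers").getOrCreate()

raw = spark.read.json("/tmp/raw/customers")  # placeholder input path

cleaned = (
    raw.dropDuplicates(["customer_id"])              # one row per business key
    .filter(col("customer_id").isNotNull())          # required key must exist
    .withColumn("email", trim(col("email")))         # normalize whitespace
    .withColumn("signup_ts", to_timestamp(col("signup_ts")))  # typed timestamp
)

# Simple data-quality gate: fail fast if cleansing dropped too many rows.
if cleaned.count() < 0.9 * raw.count():
    raise ValueError("More than 10% of rows were dropped during cleansing")

cleaned.write.format("delta").mode("overwrite").save("/tmp/curated/customers")
```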
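For the CI/CD item, one common pattern is to keep transformation logic in plain functions and unit-test them, so a CI workflow can run the tests on every commit. The function and test below are a hypothetical illustration of that pattern, runnable with pytest; they are not code from an existing repository.

```python
# Hypothetical unit test for a transformation function, runnable with pytest.
# In a CI/CD workflow, tests like this run automatically on every commit.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

def drop_incomplete_orders(df):
    """Keep only orders that have both an id and a positive amount."""
    return df.filter(col("order_id").isNotNull() & (col("amount") > 0))

def test_drop_incomplete_orders():
    spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    df = spark.createDataFrame(
        [("o1", 10.0), (None, 5.0), ("o3", -1.0)],
        ["order_id", "amount"],
    )
    result = drop_incomplete_orders(df).collect()
    assert [r.order_id for r in result] == ["o1"]
```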
Skills to Succeed
- Bachelorโs degree in Computer Science, Engineering, or a related field
- 1โ3 years of experience in data engineering or data platform development
- Proven ability to independently build basic batch or streaming data pipelines
- Hands-on experience with Python and SQL for data transformation and validation
- Familiarity with Apache Spark (especially PySpark) and large-scale data processing concepts
- Self-starter with strong problem-solving skills and keen attention to detail
- Able to work independently while collaborating effectively with senior engineers and other stakeholders
- Strong documentation and communication skills