We are the movers of the world and the makers of the future. We get up every day, roll up our sleeves and build a better world -- together. At Ford, we're all a part of something bigger than ourselves.
Want to help drive the future of automotive supply chain operations? Join Ford's Supply Chain Analytics team! As a Data Scientist, you'll work with a curious, proactive, and creative team that's critical to Ford's success. By leveraging advanced data analytics and the latest tools, you'll solve complex business problems, improve operational efficiency, and make data-driven decisions that impact the future of our company. If you're passionate, knowledgeable, and highly motivated, we want you on our team!
This is a hybrid position, with a requirement to be in the office four days a week.
Responsibilities
What you'll do...
- Drive improvements in Ford's supply chain operations through big data analysis and modeling
- Develop innovative tools that translate complex business problems into actionable data science solutions for key stakeholders
- Leverage cutting-edge technologies such as Google Cloud Platform (GCP) to support more informed decision-making
- Execute digital transformation projects to improve supply chain operations
- Deliver tangible value for customers and stakeholders through data-driven insights and solutions
- Perform data exploration, feature engineering, and data preprocessing to prepare diverse datasets for model training and evaluation
- Communicate complex analytical findings and technical concepts clearly and concisely to both technical and non-technical stakeholders
- Stay abreast of the latest advancements in machine learning, deep learning, and LLM research, and actively propose new technologies and methodologies to enhance our capabilities
Qualifications
You'll have...
- Bachelor's degree in Data Science, Engineering, Computer Science, or another quantitative area
- 1+ years of experience as a researcher, analyst, data scientist, or solution developer
- Experience with data transformation and analysis tools such as Python, SQL, R, Alteryx, Minitab, CPLEX, or AnyLogic
- Experience with visualization tools such as Qlik Sense, Power BI, Google Looker Studio, Dash, Tableau, Matplotlib, or Seaborn
- Experience with database technologies such as Google BigQuery, AWS, Hadoop, or SAP
- Ability to extract actionable insights from data
- Experience using SQL to extract, clean, and transform data in large, complex, nested databases
- Experience using programming languages such as Python or R in a cloud platform
- Experience doing research, analysis, data science or solution development
- Strong foundational knowledge of machine learning algorithms (e.g., supervised, unsupervised, reinforcement learning, deep learning) and statistical modeling techniques
- Excellent written and verbal communication skills and strong intellectual curiosity, with the ability to work effectively in a cross-functional team environment
Even better, you may have...
- Experience working with and integrating LLMs and Generative AI models such as GPT, Claude, Google Gemini, or Llama
- Experience with LLM orchestration frameworks such as LangChain or LlamaIndex for building agentic workflows
- Experience with cloud-based Gen AI services such as Google Cloud Vertex AI, Azure OpenAI Service, or AWS Bedrock
- Experience in statistical analysis and modeling
- Advanced degree in a related field
- Experience in the automotive industry
- Experience in at least one area of supply chain operations, such as constraints, risk management, sales and production planning, material planning, or logistics
- Experience developing and delivering projects in Google Cloud Platform (GCP)
- Proven experience in designing, developing, and deploying solutions leveraging Generative AI and Large Language Models (LLMs)
- Proficiency in advanced prompt engineering techniques, few-shot learning, and developing effective strategies for interacting with and optimizing LLM APIs
- Experience with LLM agentic model development, including designing autonomous agents, multi-agent systems, and integrating tools for enhanced capabilities
- Familiarity with Retrieval Augmented Generation (RAG) implementations, fine-tuning LLMs, and evaluating their performance on specific domain tasks