What to Expect
As a member of the Foundation Inference Infrastructure team, you will design and implement a diverse set of backend services and tools that power autonomy software and hardware development. The systems you build will have a large impact on autonomy, from accelerating chip design and machine learning workflows to compiler development, model validation, and evaluation of the complete software-hardware stack. In this role, you'll bring top-notch software engineering skills and contribute to our systems immediately. A strong candidate is an excellent software generalist with a passion for building scalable infrastructure and optimizing backend pipelines for ML inference workloads and hardware design automation.
What You’ll Do
- Design and implement backend services and tooling that handle iteration and batch processing of inference, simulation, and evaluation workloads
- Work closely with other Autonomy teams to build foundational components and bridge missing pieces in ML compiler and runtime infrastructure, designing for scalability, reliability, security, and high performance
What You’ll Bring
- Proficiency with Python and PyTorch
- Familiarity with managing hardware inference chips like TPUs and with optimizing machine learning inference workloads for low latency and scale
- Familiarity with operating systems concepts such as networking, processes, file systems, and virtualization
- Familiarity with concurrent programming
- Familiarity with C++ and/or Golang
- Experience with Linux, container orchestrators like Kubernetes, and bare-metal provisioning tools like Ansible
- Experience with data stores like PostgreSQL and Redis
- Strong problem-solving mindset with the ability to navigate ambiguous requirements and break down complex technical challenges into actionable solutions
- Self-directed learning approach; comfortable diving into unfamiliar codebases, technologies, and problem domains to quickly become productive