📍 Menlo Park, CA

Research Scientist Intern, AI & Compute Foundation - MTIA Software (PhD)

Internship
Consumer
Software Eng
September 26, 2025

Meta

Social network and technology platform

The MTIA (Meta Training & Inference Accelerator) Software team is part of the AI & Compute Foundation org. The team’s mission is to explore, develop, and help productize high-performance software and hardware technologies for AI at datacenter scale. The team co-optimizes both SW (e.g., algorithms and numerics) and HW (e.g., platform and network) to arrive at a balanced system design. Developing new systems requires understanding performance bottlenecks on existing systems, so the team invests significantly in optimizing AI production models on existing systems. This has resulted in TCO wins for all key AI services.

The team has been developing AI frameworks to accelerate Meta’s DL/ML workloads on the specialized MTIA AI accelerator hardware in a highly performant and flexible way. As part of the AI acceleration software stack, we develop kernel libraries that exploit various hardware architectural features, achieving high performance for our inference and training workloads.

Our team at Meta offers internships twelve (12) to sixteen (16) weeks long, and we have various start dates throughout the year.

Research Scientist Intern, AI & Compute Foundation - MTIA Software (PhD) Responsibilities

Development of the software stack with one of the following core focus areas: AI frameworks, the compiler stack, or high-performance kernel development and acceleration for next-generation hardware architectures

Contribute to the development of the core compilers of the industry-leading PyTorch AI framework to support new state-of-the-art inference and training AI hardware accelerators and optimize their performance

Analyze deep learning networks; develop and implement compiler optimization algorithms

Collaborate with AI research scientists to accelerate the next generation of deep learning models, such as recommendation systems, generative AI, computer vision, and NLP

Performance tuning and optimization of deep learning frameworks and software components

Minimum Qualifications

Currently has, or is in the process of obtaining, a PhD degree in Computer Science or a related STEM field

C/C++ programming skills

Must obtain work authorization in country of employment at the time of hire, and maintain ongoing work authorization during employment

Knowledge of computer architecture and distributed systems, with interest in one or more of high-performance computing, numerics, performance, and AI hardware, including compute, networking, and storage

Preferred Qualifications

AI compiler: Experience with compiler optimizations such as loop optimizations, vectorization, parallelization, and hardware-specific optimizations such as SIMD. Experience with MLIR, LLVM, IREE, XLA, TVM, or Halide is a plus

OR AI frameworks: Experience in developing training and inference framework components. Experience in system performance optimization, such as runtime analysis of latency, memory bandwidth, I/O access, and compute utilization, and development of the associated tooling

OR AI high-performance kernels: Experience with CUDA, OpenMP, or OpenCL programming, or with AI hardware accelerator kernel programming. Experience accelerating libraries on AI hardware, similar to cuBLAS, cuDNN, CUTLASS, HIP, ROCm, etc.

Experience working with frameworks like PyTorch, Caffe2, TensorFlow, ONNX, TensorRT

Knowledge of GPU, CPU, or AI hardware accelerator architectures

Proven track record of achieving significant results, as demonstrated by grants, fellowships, or patents, as well as first-authored publications at leading workshops or conferences

Demonstrated software engineering experience via an internship, work experience, coding competitions, or widely used contributions to open-source repositories (e.g., GitHub)

Intent to return to the degree program after the completion of the internship/co-op

For those who live in or expect to work from California if hired for this position, please click here for additional information.