
You are in the final years of your PhD program and have implemented and published research on Large Language Models (LLMs) at major conferences in the field. You have experience combining LLMs with one of the following topics: uncertainty, calibration, reasoning, reward-based post-training, Bayesian experimental design, or active learning. We are an international group of researchers working on uncertainty, reasoning, and interpretability in LLMs. During your time with us, you will continue sharpening your research skills as we go through the collaborative stages of an ML research project in those areas: identifying a promising research opportunity, reviewing SoTA methods and relevant literature, crafting novel approaches, implementing them as code prototypes, planning and running large-scale experiments across multi-node, multi-GPU systems, writing a paper, and seeing it through to submission. You'll have the opportunity to collaborate with a local mentor in Paris and MLR colleagues worldwide on your project. Ultimately, you will work towards publishing the findings arising from the project, as open-source code, publications, or both.