Job Description
Overview of the role:
Keysight AI Labs is looking for PhD students currently pursuing Machine Learning/LLM-related studies to join our AI R&D team in Barcelona for a 6-month R&D internship. The position is open to students at all seniority levels, with preference given to experienced PhD candidates. If selected, you will contribute to the development of advanced ML systems supporting strategic Keysight initiatives, using Large Language Models, agentic AI workflows, and other advanced ML pipelines across a range of problems and products. The role combines research, engineering, and productization of ML technologies in a collaborative, fast-paced environment; domain-specific knowledge and experience with Keysight's tools and business are therefore a plus.
Responsibilities
- Collaborate with Keysight engineering experts (RF, 6G-wireless, EM, circuit, measurement, etc.) to gather domain requirements, physics constraints, T&M workflows, and other elements necessary for ML/LLM pipeline design.
- Design and implement state-of-the-art (SOTA) ML architectures, including classic ML, generative AI, LLMs, agentic architectures, GANs, diffusion models, and RAG systems, for data filtering, augmentation, modeling, root-cause analysis, automated scripting, anomaly detection, and other classic or emerging ML problems.
- Develop scalable ML pipelines for on-device, on-prem, cloud, hybrid, and multi-GPU environments, with a focus on efficiency, throughput, reliability, and scalability.
- Write high-quality Python, C++, and CUDA code following best coding practices.
- Apply CI/CD practices, code testing, documentation, and performance profiling.
- Work with product teams to integrate ML/AI-driven pipelines and tools into Keysight's commercial platforms.
- Stay current with state-of-the-art ML, LLM, agentic, and generative AI research, bringing new methods into Keysight workflows.
- Contribute to Keysight AI Labs' internal and external knowledge-sharing efforts via publications, invention disclosures, blog posts, etc.
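To give a flavor of the pipeline work described above, here is a toy sketch of the retrieval step behind a RAG system. It uses a bag-of-words cosine similarity as a stand-in for a learned embedding model; all document strings and function names are illustrative, not Keysight code:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "S-parameter measurement workflow for RF amplifiers",
    "Quarterly sales report for the EMEA region",
    "Anomaly detection in 6G wireless channel data",
]
print(retrieve("RF measurement", docs))
```

In a real RAG pipeline, the retrieved documents would be injected into the LLM prompt as context; here the retrieval step alone is shown, since that is the part a classic-ML sketch can capture faithfully.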
Qualifications
- Pursuing a PhD in Applied Mathematics, Scientific Computing, Computer Science, Electrical Engineering, Telecommunications, or related discipline.
- Publications in top ML conferences (NeurIPS, KDD, ICML, ICLR, etc.) or other related conferences and journals.
- Strong ML/DL Foundations: Deep understanding of neural networks, statistics, optimization, and model evaluation metrics.
- Proficiency in ML Frameworks: Strong skills in PyTorch (preferred) or TensorFlow.
- Experience building multimodal, LLM-powered, RAG- and agent-based models, pipelines, and applications.
- Strong experience with Transformer architectures: hands-on experience training or fine-tuning large transformer-based models (e.g., GPT, T5, LLaMA, OSS).
- Experience with small LM architectures, fine-tuning, and edge or on-device deployment is a plus.
- Experience with LLM pretraining, fine-tuning, and/or instruction tuning.
- Experience building scalable data preprocessing and tokenization pipelines for large text corpora.
- Experience with performance optimization, including model compression, quantization, and inference-optimization techniques.
- Experience with experiment-tracking tools such as MLflow or Weights & Biases is a plus.
- Experience writing production-quality Python code, testing, CI/CD, and version control (Git) is a plus.
- Experience with evaluation and benchmarking of LLMs (HELM, MT-Bench, LM Evaluation Harness) is a plus.
- Familiarity with cloud platforms (Azure, AWS, GCP) and containerization (Docker, Kubernetes) is a plus.
- Cross-Functional Collaboration: Ability to communicate and collaborate with researchers, engineers, and product teams.
- Research Literacy: Ability to read, reproduce, and extend recent ML/LLM research papers; open-source contributions are a plus.
- Ability to propose and evaluate novel architectures and solutions under ambiguity.
- Strong communication skills and ability to articulate complex ideas clearly in English.
- Interest in team culture and collaborative problem-solving.
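As a concrete illustration of the model-compression and quantization topics listed above, here is a minimal sketch of symmetric int8 weight quantization in pure Python. It is illustrative only (real systems use per-channel scales and optimized library kernels), and the weight values are made up:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from integer codes and the scale."""
    return [v * scale for v in q]

w = [0.02, -1.27, 0.635, 0.0]
q, s = quantize_int8(w)
print(q)                  # integer codes in [-127, 127]
print(dequantize(q, s))   # approximate reconstruction of w
```

Each dequantized weight differs from the original by at most half a quantization step, which is the round-trip error bound interview-style questions on this topic usually probe.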