KEY RESPONSIBILITIES
- Implement Python-based backend services (built on FastAPI) for AI workloads, including LLM integrations and generative AI pipelines
- Develop APIs and microservices for model serving and inference with guidance from senior team members
- Build and maintain AWS-based infrastructure components for AI workloads (e.g., Lambda, EKS)
- Write clean, well-tested, and maintainable code following established patterns and best practices
- Integrate LLMs and generative AI models into production systems using frameworks like LangChain, Langfuse, and litellm (a brief illustrative sketch follows this list)
- Implement monitoring, logging, and observability features for assigned backend services
- Participate in code reviews and contribute to team knowledge sharing
- Debug and resolve production issues with support from senior engineers
- Collaborate with cross-functional teams to understand requirements and deliver features
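For illustration only, not part of the role requirements: a minimal sketch of the kind of FastAPI + litellm integration this position involves. The /v1/chat route and the model name are placeholder assumptions, not details taken from this posting.

    from fastapi import FastAPI
    from pydantic import BaseModel
    from litellm import completion

    app = FastAPI()

    class ChatRequest(BaseModel):
        prompt: str

    @app.post("/v1/chat")  # hypothetical route, chosen for illustration
    def chat(req: ChatRequest):
        # litellm forwards the request to whichever provider is configured;
        # the model name below is a placeholder assumption
        resp = completion(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": req.prompt}],
        )
        return {"reply": resp.choices[0].message.content}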
BASIC QUALIFICATIONS
- 2-4 years of professional software engineering experience or equivalent
- Strong proficiency in Python for backend development in production environments
- Understanding of LLMs, generative AI concepts, and modern AI frameworks (e.g., litellm, LangChain)
- Experience with AWS services (Lambda, API Gateway, EKS)
- Knowledge of RESTful API design and microservices architecture
- Familiarity with containerization (Docker) and version control (Git)
- Bachelor's degree in Computer Science, Engineering, or related field, or equivalent practical experience
- Strong problem-solving skills and attention to detail
PREFERRED QUALIFICATIONS
- Hands-on experience building APIs for AI/ML workloads in production
- Experience with FastAPI or similar Python web frameworks
- Familiarity with Langfuse for LLM observability
- Knowledge of CI/CD practices and automated testing
- Understanding of database systems (SQL and NoSQL)
- Experience with event-driven architectures
- Strong communication skills and ability to work collaboratively in a team environment
Work location assignment: Remote