Build and iterate LLM/GenAI-powered features and internal tools under guidance, focusing on web apps and game team workflows.
Integrate AI features via REST/GraphQL APIs and SDKs; wire up frontend components and backend endpoints; demo and document features for users.
Write and maintain tests (unit/integration/E2E) and simple evaluation scripts to verify quality, latency, and guardrails.
Add basic observability (logs, metrics) and help triage bugs and incidents with senior engineers.
Collaborate with designers, artists, and engineers to gather requirements, run small experiments, and incorporate feedback quickly.
Contribute to CI/CD and code quality (linting, formatting, code reviews) for AI-assisted tools and services.
Qualifications
Must Have
Bachelor's degree in Computer Science, Software Engineering, or related field, or equivalent hands-on experience.
2–4 years of software engineering experience in either web application/full-stack or video game application development; you identify as a development engineer (not primarily a data engineer/scientist).
Hands-on experience building applications that use LLM/GenAI services (e.g., calling GPT-class APIs, basic prompt/response handling, context assembly, simple retrieval), ideally shipped to real users.
Verification mindset: experience writing unit/integration tests, setting up CI, and validating functional/performance/latency requirements for tooling or web services.
Basic cloud/devops skills: containerization (Docker), deploying to a cloud platform (AWS/GCP/Azure/Vercel), and adding logs/metrics/alerts.
Clear communication and documentation skills; ability to explain AI tool behavior and limitations to non-technical stakeholders.
Nice to Have
Exposure to basics of RAG, embeddings, vector databases, and prompt engineering for production use.
Familiarity with TypeScript/Node.js for tooling, or C# for engine/editor plugins.
Infrastructure-as-code experience (e.g., Terraform) for deploying AI tooling, beyond the baseline cloud and Docker skills listed under Must Have.
Interest in model evaluation, offline/online A/B testing, and telemetry-driven iteration.