Your role
We are hiring founding AI Systems Engineers to help build that machinery. This role is for engineers who like consequential junctions: between training outputs and deployable artifacts, between runtime systems and safe release, between quality claims and evidence, and between ambitious AI plans and systems that can actually carry them.
This is not a research role, and it is not a generic support role. It is an implementation-heavy, building-focused engineering role on a small team responsible for making in-house AI capabilities easier to package, evaluate, deploy, promote, operate, and improve.
AI Platform Engineering exists to shorten the path from emerging AI capability to reliable production impact. We build the shared systems, standards, and delivery pathways that let in-house models and AI capability packages move from candidate state into observable, rollback-safe production operation. Our work sits at the junction between model development, runtime systems, evaluation, and delivery. We enable the broader AI Platform division by making it faster and safer to ship new capabilities, improve existing ones, and learn from production behavior.
This is a founding team. The systems, interfaces, and standards are still being shaped. The work is highly consequential, highly practical, and closely tied to the company’s broader AI strategy. We are not building one-off demos or isolated launches. We are building the machinery by which a growing AI organization can repeatedly deliver real capability into production.
What you’ll do
You will help design, build, and improve the systems that connect AI capability development to production reality.
Depending on your strengths, that may include work such as:
- Improving how model and capability artifacts are packaged, versioned, promoted, and rolled back.
- Building or improving deployment and release pathways for AI-backed services.
- Enabling shadow-serving, staged rollout, and candidate-versus-incumbent comparison.
- Strengthening runtime behavior, observability, and debugging for model-backed systems.
- Building or automating evaluation systems that make release decisions evidence-based.
- Reducing bespoke coordination and strengthening the shared rails used by multiple AI teams.
The exact balance will depend on your background and the team’s evolving needs. What will not vary is the mission: your work should make the broader AI Platform organization faster, safer, and more effective at turning in-house AI capability into production reality.
Skills you’ll bring
- Bachelor's degree in Computer Science, Engineering, or equivalent related experience.
- 2 to 6 years of professional software engineering experience, with a track record of shipping production infrastructure or systems that carry real workloads.
- Experience writing solid, maintainable production code and applying strong software engineering fundamentals to complex debugging challenges.
- Experience operating in ambiguous, cross-functional environments where requirements evolve and other teams depend on your interfaces.
- Expertise in building for reproducibility, operability, and rollout safety, with attention to the quality of each change rather than just its local implementation.
Nice to have
- Experience with cloud infrastructure, containerized environments, managed ML platforms, or service orchestration systems.
- Experience with model serving, deployment systems, experiment tracking, artifact/version management, or ML lifecycle tooling.
- Experience with distributed systems, service platforms, search/relevance systems, internal enablement tooling, or production AI platforms.
- Experience with testing, benchmarking, experimentation systems, or evaluation frameworks that informed release decisions.
- Exposure to applied AI, speech, conversational systems, customer-facing workflows, or other production ML domains.