Why This Job is Featured on The SaaS Jobs
This Site Reliability Engineer role sits at the intersection of SaaS delivery and AI platform operations: keeping API-served production inference reliable at scale. In the current SaaS landscape, where product value is increasingly delivered through model-backed endpoints, reliability work becomes a direct lever on customer experience, latency, and availability rather than a back-office function.
From a SaaS career perspective, the work builds durable skills in operating multi-tenant services with clear SLO ownership, deep observability, and automation-first operations. Experience with Kubernetes operators, GPU/accelerator-aware performance constraints, and capacity/cost management translates well across modern SaaS infrastructure teams, especially as more companies blend traditional web workloads with ML serving. The emphasis on self-service systems also maps to platform engineering trajectories common in mature SaaS orgs.
The role tends to suit engineers who like turning ambiguous operational problems into repeatable systems, and who are comfortable collaborating across infrastructure and product-facing teams. It’s a strong match for someone who wants responsibility for production outcomes (including on-call) and prefers work where reliability, performance tuning, and developer enablement are tightly coupled.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
Who are we?
Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.
Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
Why this role?
Are you energized by building high-performance, scalable, and reliable machine learning systems? Do you want to help define and build the next generation of AI platforms powering advanced NLP applications? We are looking for a Site Reliability Engineer to join the Model Serving team at Cohere. The team is responsible for developing, deploying, and operating the AI platform that delivers Cohere's large language models through easy-to-use API endpoints. In this role, you will work closely with many teams to deploy optimized NLP models to production in low-latency, high-throughput, and high-availability environments. You will also get the opportunity to interface with customers and create customized deployments to meet their specific needs.
As a Site Reliability Engineer you will:
Build self-service systems that automate the management, deployment, and operation of services, including our custom Kubernetes operators that support language model deployments.
Automate environment observability and resilience, enabling all developers to troubleshoot and resolve problems.
Take steps required to ensure we hit defined SLOs, including participation in an on-call rotation.
Build strong relationships with internal developers and influence the Infrastructure team’s roadmap based on their feedback.
Develop our team through knowledge sharing and an active review process.
You may be a good fit if you have:
5+ years of engineering experience running production infrastructure at large scale
Experience designing large, highly available distributed systems on Kubernetes, including GPU workloads on those clusters
Experience with Kubernetes in both development and production, including coding and operational support
Experience with GCP, Azure, AWS, or OCI, as well as multi-cloud, on-prem, or hybrid serving environments
Experience in designing, deploying, supporting, and troubleshooting in complex Linux-based computing environments
Experience in compute/storage/network resource and cost management
Excellent collaboration and troubleshooting skills to build mission-critical systems and ensure smooth operations and efficient teamwork
The grit and adaptability to solve complex technical challenges that evolve day to day
Familiarity with the computational characteristics of accelerators (GPUs, TPUs, and/or custom accelerators), especially how they influence the latency and throughput of inference
Strong understanding of, or working experience with, distributed systems
Experience in Golang, C++, or other languages designed for high-performance, scalable servers
If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.
Full-Time Employees at Cohere enjoy these Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work closely with a team on the cutting edge of AI research
🍽 Weekly lunch stipend, in-office lunches & snacks
🦷 Full health and dental benefits, including a separate budget to take care of your mental health
🐣 100% Parental Leave top-up for up to 6 months
🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
🏙 Remote-flexible, with offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
✈️ 6 weeks of vacation (30 working days!)