Why This Job Is Featured on The SaaS Jobs
This AI Systems Engineer role sits at a core layer of modern AI SaaS: the “agent harness” that turns model outputs into safe, reliable actions in real environments. Rather than focusing on a single product surface, the work spans orchestration, sandboxed execution, evaluation, and production reliability, which are increasingly central concerns as SaaS products embed autonomous or semi-autonomous workflows.
Career-wise, the role offers unusually broad exposure to the operational realities of shipping LLM-powered capabilities. It touches the full loop from experimentation and ablations through to observability, latency and cost control, and incident-driven hardening, all of which map directly to how AI features are maintained at scale. Experience building measurement and debugging systems for agent behavior also translates across AI-first SaaS teams where quality and trust are product differentiators.
The strongest fit is for engineers who like working across layers and are comfortable forming hypotheses from messy production evidence. It will suit someone who wants proximity to research while still owning production systems, and who enjoys turning ambiguous model and system failures into concrete primitives that other engineers can build on.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
AI Systems Engineer - Codex Core Agents
About The Team
The Codex Core Agents team builds the agent harness that turns model capability into real-world action. We own the systems around the model: prompting and interpreting model outputs, executing actions safely in real environments, and feeding production experience back into better models and better agent behavior.
This team sits close to research and works across the stack: harness, model interaction, inference, sandboxed execution, orchestration, evals, production reliability, and the performance envelope around tokens, latency, cost, capacity, and quality. The harness is open source and increasingly part of how models are trained and evaluated, making this one of the highest-leverage layers in Codex.
About The Role
We’re looking for engineers to build the AI systems that make Codex agents dependable in production. The ideal candidate is an agent-systems builder: hands-on across low-level systems and ML workflows, able to debug Codex behavior end to end across the harness, model behavior, inference/runtime stack, GPU fleet, and product surface.
You’ll work with research, infrastructure, and product to design agent harness capabilities, run experiments and ablations across the model + system prompt + harness stack, build frameworks for assessing production agent performance, and turn messy failures into durable improvements.
What You’ll Do
Design and build the core agent harness and execution loop that lets Codex agents interpret model outputs, use tools, execute code, and complete long-horizon tasks safely.
Build sandboxing, isolation, orchestration, state, and workflow infrastructure for agents operating in real development environments.
Develop evaluation, experimentation, and debugging systems that distinguish harness issues, model behavior, inference/runtime issues, and product failures.
Run ablations across prompts, model-facing interfaces, context construction, tool-use strategies, and harness behavior to improve solve rate, reliability, latency, and cost.
Improve observability, profiling, and diagnostics across the agent stack, from backend systems to inference, GPUs, and fleet capacity.
Work closely with research to make the harness trainable, measurable, and useful for improving frontier agentic models.
Build shared primitives that make Codex faster, safer, more reliable, and easier for other teams and open-source users to build on.
You Might Be A Good Fit If You
Have built or operated production systems in distributed systems, infrastructure, developer tooling, sandboxing, virtualization, cloud platforms, or ML systems.
Enjoy working across layers: Rust systems code, Python configuration layers, APIs, agent orchestration, evals, logs/traces, inference behavior, runtime constraints, and user outcomes.
Have hands-on experience with LLM applications, coding agents, evals, model deployment, inference, compiler/runtime performance, or developer platforms.
Care deeply about reliability, safety, performance, debuggability, and clean abstractions.
Can debug from evidence and move quickly from ambiguous production failures to practical, durable fixes.
Want to work close to research while still shipping changes to production.
Still write meaningful code, show strong ownership, and can lead scoped or multi-team AI systems work end to end.
Bonus Points
Deep Rust, systems, sandboxing, isolation, or low-level platform experience.
Experience with coding agents, agent harnesses, tool-using LLM systems, model evals, or post-training feedback loops.
Background in compilers, kernels, runtimes, inference optimization, GPU systems, benchmarking, profiling, or performance engineering.
Experience building production infrastructure used by many engineers or users under demanding reliability and security constraints.
Open-source infrastructure or developer-platform work with strong taste for APIs and usability.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.