Why This Job is Featured on The SaaS Jobs
This role stands out in the SaaS ecosystem because it sits at the infrastructure layer that increasingly underpins AI-enabled software products: operating and scaling the compute fabric used for frontier model training. Unlike typical cloud-only platform work, it blends distributed systems engineering with data-center realities—bare-metal provisioning, firmware/OS lifecycle, and multi-cluster abstractions—where reliability directly determines whether downstream product and research workloads can run.
For a long-term SaaS career, this kind of systems ownership builds durable leverage. Experience with Kubernetes at massive scale, automation-first operations, and observability tied to concrete performance metrics translates to any SaaS organization running high-availability platforms, internal developer infrastructure, or latency-sensitive services. The emphasis on reducing restart and upgrade times also signals exposure to the optimization mindset that mature SaaS platforms require as usage and cost profiles intensify.
The position is best suited to engineers who like being close to production reality and treating operations as an engineering problem. It will appeal to those who enjoy diagnosing complex failures across hardware, networking, and orchestration layers, and who prefer building tooling and abstractions that remove toil. It also fits professionals who want depth in infrastructure fundamentals while staying aligned with modern cloud-native patterns.
The section above is editorial commentary from The SaaS Jobs, provided to help SaaS professionals understand the role in a broader industry context.
Job Description
About the Team
The Frontier Systems team at OpenAI builds, launches, and supports the largest supercomputers in the world, which OpenAI uses for its most cutting-edge model training.
We take data center designs, turn them into real, working systems, and build whatever software is needed to run large-scale frontier model training runs.
Our mission is to bring up and stabilize these hyperscale supercomputers, and to keep them reliable and efficient while frontier models train.
About the Role
We are looking for engineers to operate the next generation of compute clusters that power OpenAI’s frontier research.
This role blends distributed systems engineering with hands-on infrastructure work in our largest datacenters. You will scale Kubernetes clusters to massive size, automate bare-metal bring-up, and build the software layer that hides the complexity of a vast fleet of nodes spread across multiple data centers.
You will work at the intersection of hardware and software, where speed and reliability are critical. Expect to manage fast-moving operations, quickly diagnose and fix issues when things are on fire, and continuously raise the bar for automation and uptime.
In this role, you will:
Spin up and scale large Kubernetes clusters, including automation for provisioning, bootstrapping, and cluster lifecycle management
Build software abstractions that unify multiple clusters and present a seamless interface to training workloads
Own node bring-up from bare metal through firmware upgrades, ensuring fast, repeatable deployment at massive scale
Improve operational metrics such as reducing cluster restart times (e.g., from hours to minutes) and accelerating firmware or OS upgrade cycles
Integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure
Develop monitoring and observability systems to detect issues early and keep clusters stable under extreme load
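To make the flavor of this work concrete, here is a small illustrative sketch of one responsibility above: accelerating firmware or OS upgrade cycles by batching nodes into rolling-upgrade waves so only a bounded fraction of the fleet is offline at once. The function name, thresholds, and node names are hypothetical, not OpenAI's actual tooling; real cluster automation would also consult scheduler state and hardware topology.

```python
from math import ceil

def plan_upgrade_waves(nodes: list[str], max_unavailable_frac: float) -> list[list[str]]:
    """Split a node fleet into sequential upgrade waves.

    Each wave takes at most max_unavailable_frac of the fleet offline,
    so training capacity degrades gracefully during a rolling firmware
    or OS upgrade. Purely illustrative: a production system would also
    drain workloads and verify health before advancing to the next wave.
    """
    if not 0 < max_unavailable_frac <= 1:
        raise ValueError("max_unavailable_frac must be in (0, 1]")
    wave_size = max(1, ceil(len(nodes) * max_unavailable_frac))
    return [nodes[i:i + wave_size] for i in range(0, len(nodes), wave_size)]

# Example: 10 nodes with at most 30% offline at a time -> waves of 3.
waves = plan_upgrade_waves([f"node-{i}" for i in range(10)], 0.3)
```

The design choice to advance wave by wave, rather than upgrade everything in parallel, is what turns a multi-hour outage into a sequence of small, recoverable capacity dips.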
You might thrive in this role if you:
Have deep experience operating or scaling Kubernetes clusters or similar container orchestration systems in high-growth or hyperscale environments
Bring strong programming or scripting skills (Python, Go, or similar) and familiarity with Infrastructure-as-Code tools such as Terraform or CloudFormation
Are comfortable with bare-metal Linux environments, GPU hardware, and large-scale networking
Enjoy solving fast-moving, high-impact operational problems and building automation to eliminate manual work
Can balance careful engineering with the urgency of keeping mission-critical systems running
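As a hedged sketch of what "building automation to eliminate manual work" can look like in this kind of fleet, the snippet below encodes a manual triage runbook as code: health signals for a node are mapped to a remediation action. All field names, thresholds, and actions here are hypothetical; real automation would be driven by vendor guidance and observed failure data.

```python
from dataclasses import dataclass

@dataclass
class NodeHealth:
    node: str
    gpu_driver_errors: int   # GPU driver error events since last check
    nic_link_flaps: int      # network link state changes
    boot_failures: int       # consecutive failed boot attempts

def triage(h: NodeHealth) -> str:
    """Map health signals to a remediation action.

    Checks are ordered from most to least severe, so a node with a
    persistent hardware problem is pulled for service rather than
    endlessly rebooted. Thresholds are illustrative constants only.
    """
    if h.boot_failures >= 3:
        return "rma"       # persistent boot failure: pull for hardware service
    if h.gpu_driver_errors >= 5:
        return "reimage"   # likely corrupted driver or firmware state
    if h.nic_link_flaps >= 10:
        return "reboot"    # try the cheapest remediation first
    return "healthy"

actions = [triage(NodeHealth("n1", 0, 0, 0)),
           triage(NodeHealth("n2", 7, 0, 0)),
           triage(NodeHealth("n3", 0, 12, 4))]
```

Codifying the runbook this way makes triage decisions testable and auditable, which matters when the same logic runs across thousands of nodes.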
Qualifications
Experience as an infrastructure, systems, or distributed systems engineer in large-scale or high-availability environments
Strong knowledge of Kubernetes internals, cluster scaling patterns, and containerized workloads
Proficiency in cloud infrastructure concepts (compute, networking, storage, security) and in automating cluster or data center operations
Bonus: background with GPU workloads, firmware management, or high-performance computing
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.