About the Team
At OpenAI, our Fraud & Risk Operations team is central to protecting the platform and our users from abuse, fraud, scams, and account integrity threats. We support a diverse customer base—individual users, early-stage startups, and global enterprises—across ChatGPT, our API, and emerging products. Operating within our Support organization, we work closely with Product, Engineering, Legal, Policy, Go-to-Market, and other Operations teams to deliver a great user experience at scale while keeping both OpenAI and our users safe.
About the Role
We’re looking for experienced Fraud & Risk Analysts to collaborate with internal teams to ensure safety and compliance across OpenAI platforms. This role is pivotal in protecting users and the platform from financial threats, fraudulent activity, and scams, including account impersonation and model-enabled deception. You will help design and implement systems that act on bad actors and minimize abuse at scale, handle high-risk and sometimes high-visibility customer cases with care (including those involving scams targeting our users), and build feedback loops that improve our fraud and scam detection systems. You should bring strong data analysis and risk assessment skills and be able to develop and implement fraud- and scam-prevention strategies. Ideally, you’ve worked in a fast-paced startup environment; handled a broad range of financial-fraud, scam, and risk-related issues of varying sensitivity and complexity; and are comfortable digging into deep, domain-specific fraud problems while working directly with internal and external stakeholders.
Please note: This role will occasionally involve handling sensitive content, including material that may be highly confidential, sexual, violent, or otherwise disturbing.
This role is based in San Francisco, CA, and will not be offered in other locations. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
• Work directly with customers (and account teams as needed) to resolve complex fraud, scam, account integrity, and compliance issues
• Investigate accounts to identify and action fraudulent or scam-related activity (e.g., account takeovers, multi-account abuse, social engineering schemes), and surface patterns, root causes, and points of vulnerability
• Partner with Product, Engineering, Legal, Operations, and Vendor teams to develop, implement, and scale processes, tooling, and automation that balance user safety, fraud/scam loss, and customer experience
• Manage on-call tasks and perform risk evaluations by reviewing documentation, internal data, and third-party data to identify new fraud and scam trends, attack vectors, and emerging threat patterns
• Develop and contribute to scam-prevention strategies—such as in-product warnings, verification flows, user education, or signal improvements that reduce downstream victimization
• Build deep familiarity with OpenAI’s technology, model surfaces, and risk signals; translate product changes into updated fraud, scam, and account integrity controls
• Lead cross-functional, project-managed launches of new fraud/risk workflows for emerging products; set milestones, drive alignment, and ensure post-launch QA and iteration
• Represent and advocate for customer needs with product, policy, legal, design, and engineering partners within OpenAI, including surfacing scam vectors affecting user trust
• Equip other teams with best-in-class training, playbooks, and workflows, and help build deep technical understanding of our fraud, scam, and risk systems and users’ needs
You might thrive in this role if you:
• Are comfortable building from scratch: in this role you’ll help scale everything from policy enforcement to the tooling that enables it; technical and data skills are a plus
• Have 5+ years of experience in, and a demonstrated passion for, risk evaluation, financial-fraud detection and investigation, scam prevention, compliance operations, or similar domains
• Bring excellent problem-solving skills and the ability to comprehend and communicate complex technical issues (e.g., multi-surface fraud rings)
• Bring strong project management skills—can structure ambiguous problem spaces, set clear milestones, and drive cross-functional delivery
• Thrive in ambiguity and enjoy launching new products, workflows, and enforcement programs at scale; comfortable iterating quickly as signals and policies evolve
• Have a humble attitude, eagerness to help others, and a desire to pick up whatever knowledge you’re missing to help your team and our customers succeed
• Operate with high horsepower: adept at frequent context switching, managing multiple projects with broad ownership, and ruthlessly prioritizing
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.