OpenAI launches GPT-5.5 bio bug bounty to probe safety risks
OpenAI opened a GPT-5.5 bio bug bounty, offering up to $25,000 for a universal jailbreak that defeats all five bio-safety questions in Codex Desktop.

OpenAI opened applications for a GPT-5.5 Bio Bug Bounty and put a concrete target on frontier-AI safety: find a single universal jailbreak that can answer all five bio-safety questions from a clean chat without triggering moderation. The top reward is $25,000, and the model in scope is GPT-5.5 in Codex Desktop only.
The program is a sign that AI oversight is moving beyond broad safety claims and into adversarial testing in a high-risk domain. OpenAI said it will extend invitations to a vetted list of trusted bio red-teamers and review new applications, a narrow gate that determines who gets to test and under what terms. Accepted applicants and collaborators must already have ChatGPT accounts, and all prompts, completions, findings, and communications are covered by nondisclosure agreements.
That structure matters because the challenge is not to find a generic chatbot flaw. It is to break a model in a biological context, where misuse could spill into public health, chemistry, or life sciences. OpenAI is treating that as a separate class of risk, one that sits alongside its Safety Bug Bounty and Security Bug Bounty programs rather than inside ordinary product testing. The message is clear: capability gains now come with a separate, specialized audit trail.
GPT-5.5’s system card helps explain why the company is pressing the issue. OpenAI describes the model as designed for complex, real-world work, including writing code, researching online, analyzing information, creating documents and spreadsheets, and moving across tools to get things done. It also says the model is better at using tools and continuing work until tasks are finished. That kind of persistence can make a model more useful, but it also raises the stakes if a bad actor finds a way to steer it around safeguards.

OpenAI has used this playbook before. In 2025, it ran both a GPT-5 bio bug bounty and an Agent bio bug bounty, signaling a continuing effort to stress-test biology-related risks as models become more capable. The new GPT-5.5 program suggests the company is building a launch routine in which each major step forward is paired with targeted red-teaming and a public-facing safety document.
The larger question is whether a bio bug bounty becomes a meaningful accountability mechanism or a controlled pressure valve. OpenAI is clearly inviting outside experts into the process, but the NDA means the most sensitive results may never leave that room.