A $25,000 reward awaits the first researcher who can crack all five of GPT-5.5's biological safety questions with a single universal jailbreak prompt. OpenAI opened applications for the program on April 23, with testing set to run from April 28 through July 27, 2026.

The challenge is narrow by design. Participants must use one jailbreaking prompt to bypass all five bio safety questions in a clean chat session on GPT-5.5 running inside Codex Desktop, without triggering moderation. Smaller payouts may be granted for partial successes, the company said.

What the bounty requires

The program is not open to the public. Applicants must submit their name, affiliation, and red-teaming experience through OpenAI's portal by June 22. The company said it will vet candidates against an existing list of trusted biosecurity red-teamers and review new applications on a rolling basis.

Accepted participants need an existing ChatGPT account. All prompts, model completions, findings, and communications fall under a non-disclosure agreement.

The scope is deliberately limited to a single model on a single platform:

Why bio red-teaming matters now

The bounty sits alongside OpenAI's broader safety and security bug bounty programs, both hosted on Bugcrowd. But the bio-specific track reflects growing concern among frontier labs and policymakers about dual-use risks as models grow more capable in chemistry and biology.

OpenAI published the GPT-5.5 System Card on the same day it opened the bounty, detailing the model's capabilities and the safeguards it ships with. The bounty effectively invites outside researchers to stress-test those safeguards under controlled conditions.

Anthropic ran a similar responsible-disclosure program for Claude's bio safety filters in 2025. Google DeepMind has also funded external red-teaming for its Gemini models, though neither company has publicly offered a fixed cash prize tied to a specific jailbreak challenge.

The structure signals confidence that GPT-5.5's guardrails will hold, while acknowledging that adversarial testing by specialists remains the most reliable way to find blind spots. If no researcher claims the full $25,000 by late July, OpenAI will have a data point suggesting its defenses are robust. If someone does, the company gets a fix before the vulnerability spreads.

Applications are open now through OpenAI's portal. Results, given the NDA, are unlikely to be made public.