OpenAI published the system card for GPT-5.5, its newest frontier model built for complex, multi-tool tasks ranging from coding and online research to document creation and data analysis.

The card, dated 23 April 2026 and updated the following day, describes the full battery of pre-deployment safety evaluations the lab ran before shipping the model to users.

What GPT-5.5 is designed to do

OpenAI positions GPT-5.5 as a step beyond earlier models in autonomy and persistence. According to the company, the model grasps tasks earlier in a conversation, needs less user guidance, wields tools more effectively, and checks its own output before moving on.

Those capabilities matter because they push the model closer to genuine agentic work — chaining actions across browsers, code editors, and spreadsheets without constant human steering.

How OpenAI tested it

The lab subjected GPT-5.5 to its Preparedness Framework, which grades catastrophic-risk potential across four domains: cybersecurity, biological threats, persuasion, and model autonomy. Targeted red-teaming focused specifically on advanced cybersecurity and biology capabilities, two areas where more autonomous models raise sharper concerns.

Nearly 200 early-access partners provided feedback on real-world use cases before the public release, the company said.

Among the safeguards and evaluation details in the card, OpenAI describes the safety package as its strongest set of deployment guardrails to date, aimed at curbing misuse while preserving legitimate applications of the model's expanded capabilities.

GPT-5.5 Pro and parallel test-time compute

The system card also covers GPT-5.5 Pro, which runs the same underlying weights but uses parallel test-time compute to boost performance. OpenAI said it generally treats the base model's safety results as strong proxies for the Pro variant, but conducts separate evaluations where the additional compute setting could materially change the risk profile or required safeguards.
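The card does not spell out how parallel test-time compute works. One common pattern it may resemble is best-of-n sampling, where several independent generations run concurrently and the highest-scoring candidate is kept. The sketch below illustrates that pattern only; the function names and the scoring stub are hypothetical, not OpenAI's actual mechanism:

```python
import concurrent.futures
import random


def sample_answer(prompt: str, seed: int) -> tuple[str, float]:
    """Stand-in for one model call (hypothetical): returns (answer, score).

    A real system would call the model API with temperature > 0 and score
    each candidate with a verifier or reward model.
    """
    rng = random.Random(seed)  # deterministic stub in place of a model
    return f"candidate-{seed}", rng.random()


def best_of_n(prompt: str, n: int = 8) -> str:
    """Run n independent samples in parallel and keep the best-scoring one.

    This is one plausible shape of "parallel test-time compute": the same
    weights, more inference passes, a selection step at the end.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        results = list(pool.map(lambda s: sample_answer(prompt, s), range(n)))
    answer, _ = max(results, key=lambda r: r[1])
    return answer
```

Because selection happens over already-generated candidates, this kind of setup mostly reuses the base model's behavior, which is consistent with OpenAI treating the base model's safety results as a proxy for the Pro variant.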

That distinction matters for API customers who may route heavier workloads through the Pro tier and need assurance that the extra inference budget does not open new attack surfaces.

The full system card is hosted on OpenAI's deployment-safety site. A companion programme, the GPT-5.5 Bio Bug Bounty, launched the same day, inviting external researchers to probe the model's biological-risk boundaries for cash rewards.