OpenAI releases GPT-5.3-Codex with 400K context and cybersecurity warning
OpenAI released GPT-5.3-Codex, its most capable agentic coding model, featuring a 400K context window, 25% speed improvement, and the first "High capability" cybersecurity designation under its Preparedness Framework.
OpenAI released GPT-5.3-Codex, its most capable agentic coding model to date, on February 5, 2026. The model combines the coding performance of GPT-5.2-Codex with the reasoning capabilities of GPT-5.2, adds a 400,000-token context window, runs 25% faster than its predecessor, and carries a new "High capability" cybersecurity designation under OpenAI's Preparedness Framework, the first time any OpenAI model has reached that threshold. A lighter variant, Codex-Spark, runs on Cerebras WSE-3 chips at over 1,000 tokens per second.
GPT-5.3-Codex arrives with two milestones that matter beyond raw benchmark scores. First, OpenAI says early versions of the model helped debug its own training, the first time the company has described a model as instrumental in creating itself. Second, the Preparedness Framework designation signals that OpenAI's own safety reviewers consider the model's cybersecurity capabilities qualitatively different from anything the company has shipped before, enough to warrant a formal public acknowledgment of unprecedented risk. For developers and organizations already deploying AI coding assistants, both data points carry practical weight: the model is more capable and, by its creators' own assessment, more dangerous in adversarial hands. GitHub Copilot integration reached general availability on February 9, immediately widening access across the enterprise developer market.