OpenAI Retires Six Codex Models, Consolidates Lineup Around GPT-5.4
Six Codex models vanish from OpenAI's ChatGPT picker by April 14, giving engineering teams days to migrate production pipelines or fall back to API key access.

OpenAI's Codex model picker stopped displaying six models on April 7 and will remove them entirely from its ChatGPT-integrated interface on April 14, giving engineering teams that run the tool in production workflows less than two weeks to adapt. The six models being phased out are gpt-5.2-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max, gpt-5.1-codex, gpt-5.1, and gpt-5.
The consolidated lineup that replaces them centers on four options: gpt-5.4, gpt-5.4-mini, gpt-5.3-codex, and gpt-5.2. ChatGPT Pro subscribers get one additional option, gpt-5.3-codex-spark. OpenAI positions gpt-5.4 as the default for most coding tasks, gpt-5.4-mini for throughput-sensitive workloads where speed and cost efficiency take priority, and the gpt-5.3-codex variants for tasks that require specialized coding capabilities or ultra-fast interactive responses.
There is an immediate escape hatch: developers who still need any of the six retired models can bypass the ChatGPT-integrated picker by signing in with an OpenAI API key, or by configuring a model provider directly in Codex's config.toml file. That workaround preserves continuity for now, but it places the maintenance burden squarely on the engineering team rather than on the platform.
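For teams taking that route, the provider override lives in Codex's config.toml. The sketch below is illustrative only: the provider id, model name, and key names follow the general shape of Codex's provider configuration, but teams should verify the exact schema against their installed Codex version before relying on it.

```toml
# Hypothetical ~/.codex/config.toml pinning a retired model via a
# directly configured provider instead of the ChatGPT sign-in picker.
# Key names and values here are illustrative assumptions.
model = "gpt-5.1-codex"
model_provider = "openai-api"

[model_providers.openai-api]
name = "OpenAI via API key"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
```

With the API key exported in the environment, requests bypass the ChatGPT-integrated picker entirely, which is what preserves access after April 14.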
For product managers and platform engineers at companies running Codex inside CI/CD pipelines, IDE plugins, or internal developer tools, April 14 is the hard deadline. Any workflow that calls a deprecated model identifier without updating to one of the four supported options will break for ChatGPT sign-in users after that date. Regression testing, updated integration documentation, and latency and cost benchmarking against the replacement models should all be completed before then. Organizations with compliance or audit requirements face an added obligation: the migration path for any retained older-model usage must be documented and verified against existing security and governance controls.
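The model-identifier sweep itself can be mechanical. A minimal sketch, assuming the old-to-new pairings implied by OpenAI's positioning above (these replacement choices are illustrative defaults, not OpenAI-prescribed equivalents), is a lookup table applied wherever pipeline configs name a model:

```python
# Map each retired Codex model id to a suggested replacement.
# Pairings are illustrative assumptions based on OpenAI's stated
# positioning (gpt-5.4 as default, gpt-5.4-mini for throughput,
# gpt-5.3-codex for specialized coding), not official equivalents.
RETIRED_TO_REPLACEMENT = {
    "gpt-5.2-codex": "gpt-5.3-codex",
    "gpt-5.1-codex-mini": "gpt-5.4-mini",
    "gpt-5.1-codex-max": "gpt-5.3-codex",
    "gpt-5.1-codex": "gpt-5.3-codex",
    "gpt-5.1": "gpt-5.4",
    "gpt-5": "gpt-5.4",
}

def migrate_model_id(model: str) -> str:
    """Return the replacement for a retired model id, or the id unchanged."""
    return RETIRED_TO_REPLACEMENT.get(model, model)
```

Running this over every model reference in CI/CD configs and plugin settings, then benchmarking the rewritten identifiers, covers the regression-testing step before the cutoff.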
The broader strategic signal is in what OpenAI chose to keep. GPT-5.4 incorporates capabilities previously associated with GPT-5.3-Codex, including long-horizon refactoring and multi-step tool usage, into a single frontier model. Reducing six variants to four reflects a deliberate bet that consolidation around more capable general-reasoning architectures will outperform a fragmented catalog of narrow Codex versions. Vendors offering compatibility layers or automated migration tooling are likely to see increased demand from teams that lack the runway to rebuild integrations on short notice.
The practical lesson for engineering leaders is architectural. API key portability and model-agnostic interfaces are no longer optional features in developer tooling. The next deprecation cycle will arrive, and teams without abstraction layers built in will face the same compressed timeline all over again.
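What such an abstraction layer looks like in practice can be sketched briefly. In this hypothetical example, the model id lives in one configuration point and the provider client is injected, so the next deprecation means changing a single value rather than touching every call site (all names here are illustrative, not a real library's API):

```python
import os

# One configuration point for the model id: an env var with a default.
# When a model is deprecated, only this value (or the env var) changes.
DEFAULT_MODEL = "gpt-5.4"

def resolve_model() -> str:
    """Resolve the model id from configuration, falling back to a default."""
    return os.environ.get("CODEX_MODEL", DEFAULT_MODEL)

def complete(prompt: str, client) -> str:
    """Send a completion through whichever client is injected.

    `client` is any object exposing create(model=..., prompt=...),
    so swapping providers never touches call sites.
    """
    return client.create(model=resolve_model(), prompt=prompt)
```

The point is not the specific helper names but the shape: identifiers and providers are data, not code, so a retirement notice becomes a config change instead of a rebuild.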