OpenAI retires models — this time with warning

Jan 30, 2026

8:33pm UTC

OpenAI is pulling the plug on older models. This time, it's giving users two weeks' notice and an explanation, in an effort to avoid repeating past mistakes.

The AI firm announced it is sunsetting GPT‑4o, GPT‑4.1, GPT‑4.1 mini, and OpenAI o4-mini from ChatGPT on February 13. These models will join GPT‑5 (Instant and Thinking), whose retirement was previously announced.

The decision to retire GPT-4o is a bold one: the last time the company did so, replacing it with GPT-5, it faced heavy backlash from users who preferred the older model and had built workflows around it, so much so that the company had to bring it back.

As a result, this time, OpenAI provided justifications for the decision:

  • The feedback OpenAI received from users who preferred GPT-4o was taken into consideration when building GPT‑5.1 and GPT‑5.2, which boast improvements to personality, customization, and creative ideation.
  • OpenAI shared that the broad majority of users have gravitated to GPT-5.2, with only 0.1% of users still opting to use GPT-4o every day.
  • OpenAI acknowledged that the transition may be frustrating for some users, but said it is committed to communicating clearly when changes will occur.
  • It allows the company to build better experiences for users: “Retiring models is never easy, but it allows us to focus on improving the models most people use today,” the company said in a blog post.

OpenAI also shared a plan to keep improving ChatGPT in the areas users request most. These updates will address requests such as improving the chatbot's personality and creativity, and reducing unnecessary refusals to help and "overly cautious or preachy" responses.


Our Deeper View

AI models are released at an unprecedented pace, but maintaining them is resource-intensive, forcing companies to retire older versions. However, as OpenAI has learned, this must be done carefully: users build workflows around specific model capabilities, and even benchmark-topping "upgrades" can introduce unwelcome changes. This raises an important question: should companies release fewer, more substantial updates instead? Longer model lifespans and truly transformative upgrades would make transitions clear no-brainers, rather than disruptive adjustments for marginal enhancements.