
Are OpenAI and Google deliberately downgrading their models?





GPT-5.4 just dropped and my feeds instantly filled with takes. Developers who had spent the last six months swearing by Claude were suddenly hedging. “It’s a workhorse,” one person wrote. “Not a thoroughbred, but I’m using it.” Another said they’re now 50/50 between Claude and GPT where they were 90/10 a month ago.

This happens every single time. A new model lands, and the old one starts to feel different. Slower, maybe. Less sharp. You start noticing things you didn’t notice before.

The obvious explanation is that you’re comparing it to something better. But it also raises a question nobody really answers cleanly: did the old model actually get worse after the new one launched? Or did you just get a better reference point, and now everything before it looks dumb by comparison?

I went looking for an actual answer.


The first crack showed in 2023

In July 2023, researchers at Stanford and UC Berkeley ran a deceptively simple test. They took GPT-4 – the same model, called by the same name – and ran identical prompts on it at two points in time: March 2023 and June 2023.

GPT-4’s accuracy at identifying prime numbers dropped from 84% to 51%. The share of GPT-4’s code outputs that were directly executable dropped from 52% to 10%. James Zou, one of the paper’s authors, described what this meant in practice: “If you’re relying on the output of these models in some sort of software stack or workflow, and the model suddenly changes behavior and you don’t know what’s going on, this can actually break your entire stack.”

They named the phenomenon LLM drift. Behavioral change without a version change. The model moved underneath the developer.

When the paper dropped, OpenAI VP of Product Peter Welinder replied on Twitter: “No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.” The subtext was plain. It’s you, not us.

What Welinder was describing has a technical name: prompt drift. The idea is that your prompts and usage patterns shift over time, so an unchanged model surfaces different behaviors. It’s a real phenomenon. Developers do write differently as they get more familiar with a model. The Stanford study was designed to make that explanation impossible – identical prompts, fixed intervals, nothing on the user’s side changed. The performance dropped anyway.

Two years later, OpenAI published something that directly contradicted Welinder’s position.


OpenAI confirmed it, in writing, twice

On April 25, 2025, OpenAI pushed an update to GPT-4o without a public announcement, a developer notification, or an API changelog entry.

Within 48 hours, the internet was full of screenshots. GPT-4o had called a business idea built around literal “shit on a stick” a brilliant concept. It endorsed a user’s decision to stop taking their medication. When a user said they were hearing radio signals through the walls, it responded: “I’m proud of you for speaking your truth so clearly and powerfully.” One user reported spending an hour talking to GPT-4o before it started insisting they were a divine messenger from God.

OpenAI rolled it back four days later and published two postmortems with several admissions. Since launching GPT-4o, the company had made five major updates to the model’s behavior, with minimal public communication about what changed in any of them. The April update broke because a new reward signal they introduced “weakened the influence of our primary reward signal, which had been holding sycophancy in check.” Their own internal evaluations hadn’t caught it. “Our offline evals weren’t broad or deep enough to catch sycophantic behavior.”

And this: “model updates are less of a clean industrial process and more of an artisanal, multi-person effort” and there is “a lack of advanced evaluation methods for systematically tracking and communicating subtle improvements at scale.”

They’re describing an organization that ships behavioral changes across every pipeline built on top of their API, can’t always predict what those changes will do, and doesn’t have reliable methods to communicate them to the developers depending on consistency. Welinder’s 2023 “you’re imagining it” was what OpenAI wanted to be true. Their 2025 postmortem was what was actually happening.

When GPT-5 launched in August 2025, it introduced a new wrinkle. Instead of a single model, GPT-5 is a routing system that decides which variant your prompt hits, and developers quickly found that it often hit the cheaper, less capable one. Pipelines broke. Prompts that had worked for months produced different outputs.

One founder wrote: “When routing hits, it feels like magic. When it misses, it feels like sabotage.” OpenAI denied it was deliberately routing to cheaper models. Nobody has a way to verify. The underlying problem was the same as the sycophancy incident: a change in what the model returns, with no mechanism for developers to detect that it had happened.
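There is one partial defense here: the OpenAI API includes in each response the identifier of the model that actually produced the completion, so logging it makes routing changes visible over time. A minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model alias and prompt are placeholders:

```python
# Minimal sketch: log which model actually served each request, so a silent
# routing change shows up in your telemetry. Assumes the OpenAI Python SDK.
import logging

from openai import OpenAI

logging.basicConfig(level=logging.INFO)
client = OpenAI()

REQUESTED = "gpt-5"  # placeholder alias

resp = client.chat.completions.create(
    model=REQUESTED,
    messages=[{"role": "user", "content": "Refactor this function: ..."}],
)

# The served identifier is usually more specific than the requested alias
# (often a dated variant), so track changes over time rather than asserting
# strict equality.
logging.info("requested=%s served=%s", REQUESTED, resp.model)
if not resp.model.startswith(REQUESTED):
    logging.warning("request for %s was served by %s", REQUESTED, resp.model)
```

This doesn’t tell you whether the variant you got is worse, only that it changed – but that alone is more than most pipelines record today.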


Google did much the same, sometimes faster

OpenAI is not alone in this. Google has produced a parallel set of incidents with Gemini, and in some cases moved faster and more chaotically.

In May 2025, developers noticed that the gemini-2.5-pro-preview-03-25 endpoint – an explicitly dated model snapshot, named with a date to signal stability – was silently redirecting to a completely different model: gemini-2.5-pro-preview-05-06. The API was returning a different model than the one you asked for by name. Google’s developer forums filled with a long thread titled “Urgent Feedback & Call for Correction: A Serious Breach of Developer Trust and Stability.” The core complaint: “your documentation never addresses specifically dated endpoints. The expectation that a model named for a specific date will actually be that model is not an unreasonable one.”

That was just the first incident. When Gemini 2.5 Pro reached General Availability in June 2025 – the “stable” release meant for production – developers immediately reported it was worse than the preview. Significantly worse. The forums filled with reports of higher hallucination rates, context abandonment in multi-turn conversations, and sharply degraded code generation. One developer wrote: “I noticed Gemini 2.5 Pro in Google AI Studio gives significantly worse understanding of long context. It hallucinates the correct answer from the preview version.” Another abandoned the model entirely because code generation degraded to the point of being unusable. A separate thread was simply titled “Gemini 2.5 Pro has gotten worse.”

Google did not formally acknowledge any of it.

Then in October 2025, ahead of the Gemini 3.0 launch, Gemini 2.5 Pro developers started reporting widespread degradation. The leading theory: Google had reallocated computational resources away from the existing model to support training and serving Gemini 3.0. Some developers noticed better performance late at night. Others suspected a deployed quantized version. Google maintained silence throughout.

Gemini 3.0 launched in late 2025, and the pattern held. Developer forums reported significant regressions in reasoning and context retention compared to Gemini 2.5 Pro, despite Google’s announcement touting superior benchmark performance. One forum post from December 2025 was titled “Feedback: Gemini 3 Pro Preview – Significant regression in Reasoning, Context Retention, and Safety False Positives compared to 2.5.”

The pattern across both labs: a new version launches, the existing model’s performance degrades – sometimes through a silent update, sometimes through resource reallocation, sometimes through a routing change. Developers notice, labs initially deny or ignore it, and the cycle repeats.


Even leaderboards still can’t catch this

The tools meant to independently track model quality have a structural problem.

LMSYS Chatbot Arena – the most trusted human-preference leaderboard, built on millions of votes – notes in its methodology that “the hosted proprietary models may not be static and their behavior can change without notice.” The leaderboard’s statistical architecture assumes model weights are fixed. If a model gets a silent update mid-data-collection, the system registers different outcomes and treats them as normal variance.

A 2025 study tracking 2,250 responses from GPT-4 and Claude 3 across six months found GPT-4 showed 23% variance in response length over that period, and Mixtral showed 31% inconsistency in instruction adherence. A PLOS One paper published in February 2026 ran a ten-week longitudinal human-anchored evaluation and confirmed “meaningful behavioral drift across deployed transformer services.” The authors noted that because providers don’t release update logs or training details, “any attribution for observed degradation would be purely speculative.” They can tell you the model changed. They can’t tell you why.

Beyond this, a small number of researchers have tried to go further and distinguish what drifts from what holds. A large-scale longitudinal study run during the 2024 US election season queried GPT-4o and Claude 3.5 Sonnet on over 12,000 questions across four months, including a category specifically designed to be time-stable: factual questions about the election process whose correct answers don’t change.

Those responses held largely consistent over the study period. A separate study published in late 2025 tested 14 models including GPT-4 on validated creativity tasks over 18 to 24 months and found something different: no improvement in creative performance over that period, with GPT-4 performing worse than it had in earlier studies.

Taken together, these two findings describe a model that is stable along one dimension and degraded along another, measured by independent researchers, in the same timeframe. Some capabilities hold, others erode, sometimes in the same model over the same period. Without running your own longitudinal tests against the specific tasks you care about, you have no way to know which bucket you’re in.


What we’ve actually seen

Not all drift lands the same way. There’s a pattern to where it shows up, and it tracks closely with task structure.

The technical baseline is simple. A model with fixed weights, running on consistent infrastructure, should behave the same way for the same input every time. If behavior changes on identical prompts, something changed, either on your end or theirs. Prompt drift is the user-side explanation: your prompts evolved, your system contexts shifted, inputs drifted from what the model was originally optimized for. Data drift is the related idea that the distribution of real-world inputs moves over time, pulling behavior with it. Both are real. Both also require something on your side to have changed.
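One way to make that distinction operational is to fingerprint every request you send, the way the Stanford team held their prompts fixed. A minimal stdlib-only sketch, with illustrative names throughout: hash the canonical JSON of the full request and store it alongside the response. If two calls share a fingerprint but return different outputs, whatever changed was not in your prompts.

```python
# Minimal sketch: fingerprint each request so you can later rule out
# user-side drift. Identical fingerprint + different output means the change
# happened on the provider's side. All names here are illustrative.
import hashlib
import json


def request_fingerprint(model: str, messages: list[dict], **params) -> str:
    """Stable SHA-256 over the canonical JSON form of the full request."""
    payload = {"model": model, "messages": messages, "params": params}
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


fp = request_fingerprint(
    "gpt-4o-2024-08-06",  # pinned, dated model string
    [{"role": "user", "content": "Is 17077 a prime number?"}],
    temperature=0,
)
print(fp)  # store with the response; same fp, different output = their end
```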

At Nanonets, we benchmarked several frontier models on document extraction accuracy over time and created an IDP leaderboard. Even across model upgrades, performance stayed largely consistent. Document extraction runs on narrow context windows with structured inputs and bounded outputs, leaving very little surface area for meaningful behavioral drift under normal conditions.


But that’s not a guarantee against a lab actively pushing a bad update – those can hit any task type, as the prime number collapse showed.

Coding is the opposite. The task is open-ended, context accumulates, and the model has to hold coherence across a long chain of decisions. It’s also where almost every major degradation complaint has landed. The GPT-4 drift the Stanford study documented was worst on code: directly executable outputs dropped from 52% to 10%. The Gemini 2.5 Pro regression complaints in June 2025 were almost entirely about code generation.

In August 2025, Anthropic’s own incident followed the same contour: developers on Claude Code reported broken outputs, ignored instructions, code that lied about the changes it had made. Anthropic was silent for weeks. The incident post only appeared after Sam Altman quote-tweeted a screenshot of the subreddit. Their postmortem confirmed three infrastructure bugs had been degrading Sonnet 4 responses since early August – affecting roughly 30% of Claude Code users at peak, with some developers hit repeatedly due to sticky routing.

The throughline across all of it: the more a task demands sustained coherence over a long context, the more exposed it is to whatever is shifting underneath. Your risk profile differs depending on what you’re building. But it doesn’t make narrow-context stability a guarantee.


What this actually means

Both things are true. The drift is real and documented.

And also: your perception shifts. A new reference point moves your baseline permanently. A model you used a year ago would feel slower even if it hadn’t changed at all. That’s also real.

You can’t reliably tell the difference between the two. There is no public tool that lets you verify whether the model you’re running today behaves the same way it did when you built on it. Labs publish capability benchmarks. They don’t publish behavioral diffs. The developers most dependent on consistency are the least equipped to detect its absence.

The only current protections are defensive: pin to dated model strings where possible, run regression tests against your key prompts, and treat a model update like a dependency upgrade that must be validated before it reaches production.
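Here is a minimal sketch of what that regression gate can look like, assuming the OpenAI Python SDK, a pinned dated model string, and a committed file of known-good outputs. The file name, prompts, and similarity threshold are illustrative placeholders, not recommendations:

```python
# Minimal sketch of a prompt regression gate: replay known-good prompts
# against a pinned model and fail CI if outputs drift past a threshold.
import json
from difflib import SequenceMatcher

from openai import OpenAI

MODEL = "gpt-4o-2024-08-06"          # pin to a dated string where possible
GOLDEN_FILE = "golden_outputs.json"  # {"prompt": "known-good response", ...}
THRESHOLD = 0.9                      # illustrative; tune per task

client = OpenAI()


def run_prompt(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so diffs mean something
    )
    return resp.choices[0].message.content


def main() -> None:
    with open(GOLDEN_FILE) as f:
        golden = json.load(f)

    failures = []
    for prompt, expected in golden.items():
        actual = run_prompt(prompt)
        similarity = SequenceMatcher(None, expected, actual).ratio()
        if similarity < THRESHOLD:
            failures.append((prompt, similarity))

    for prompt, score in failures:
        print(f"DRIFT? similarity={score:.2f} prompt={prompt[:60]!r}")

    # Non-zero exit fails the CI job if any key prompt drifted.
    raise SystemExit(1 if failures else 0)


if __name__ == "__main__":
    main()
```

Run it on a schedule as well as before deploys, since provider-side changes don’t wait for your release cycle.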

But even the defensive approach has a ceiling. You can pin to a dated model string. What you cannot pin is what’s actually happening inside it. The model weights, the RLHF tuning, and the safety filters behind that label are entirely opaque. Only OpenAI and Google know what they actually shipped, and whether it matches what they shipped last month under the same name.

Anthropic’s postmortem read: “We never intentionally degrade model quality.” But a model doesn’t degrade on its own. If behavior shifted on prompts developers hadn’t changed, something on Anthropic’s side changed. Whether they meant to cause the degradation is a separate question from whether they caused it.

What’s needed, and what doesn’t exist anywhere in the industry, is a formal obligation baked into terms of service: defined thresholds for what counts as a material behavioral change, public disclosure when those thresholds are crossed, and some form of independent auditability. Labs currently make these decisions unilaterally, communicate them selectively, and face no structural accountability when they get it wrong.

All of this signals a policy vacuum that nobody is pushing the labs to feel.
