Documenting AI's Worst Moments
Real quotes from r/ChatGPT, the OpenAI Community Forum, and OpenAI Developer Community. Sourced from megathreads, bug reports, and coverage by TechRadar, Tom's Guide, and Futurism. No paraphrasing. No embellishment. This is what paying users wrote on the public record.
The pattern below did not start in any single thread. It has been building in public for over nine months, every time OpenAI ships a new GPT-5 point release. The recurring vocabulary across thousands of comments ("lobotomy," "downgrade," "Karen from HR," "brain injury," "taking crazy pills") is the cleanest signal you get in consumer software: users converging on the same metaphor without coordinating.
The testimonials are grouped by the complaint they describe. Every quote below is from a named user on a public forum, cited by mainstream tech press, or pulled directly from the official OpenAI Community Forum. Sources and thread links are included with each section.
"Its been significantly downgraded. Like a labotamy."
The word "lobotomy" has become the most-repeated metaphor in the OpenAI forum's GPT-5.2 regression thread. Samantha's comment is one of dozens saying the same thing about capabilities that worked on 5.1 and silently broke on 5.2.
"5.2 keep making things up. It reasons with itself and answer itself. It is fabricating stories, stopped fact checking online and is eerily similar to 3.5 hallucinating model which agrees with you no matter what."
From the OpenAI forum thread "5.2 regressed behavior, bad memory, hallucinates." The comparison to GPT-3.5, the entry-level model from late 2022, surfaces repeatedly.
"I have not seen these behaviors since 3.5."
A multi-year ChatGPT user on the OpenAI community forum drawing the exact comparison OpenAI's marketing department does not want in print.
"5.2 is fluent, but it loses the thread, regresses to generic behaviour."
Short-term fluency, long-context collapse. One of the most common failure descriptions in the thread.
"5.2 version is just awful. hallucinates and can't follow up information."
The dual failure mode paying users keep describing: factual fabrication plus loss of conversational continuity.
"Yes. It is trash. Hallucinates. Can't remember across conversations."
Memory was one of the headline features OpenAI marketed for 5.x. This comment describes the model failing the exact thing it was supposed to be best at.
"Too corporate, too 'safe'. A step backwards from 5.1."
One of the most upvoted comments from the r/ChatGPT megathread that followed the 5.2 rollout. "Corporate" and "safe" are the exact adjectives OpenAI's RLHF team optimized for. Users describe that optimization as the problem.
"Boring. No spark. Ambivalent about engagement. Feels like a corporate bot. So disappointing."
Cited in TechRadar coverage of the 5.2 rollout. The comment echoed thousands of similar reactions in the first 24 hours.
"It's everything I hate about 5 and 5.1, but worse. I hate it. It's so… robotic. Boring."
OpenAI shipped 5.2 as the answer to the 5.1 backlash. Users almost immediately concluded it made the problem worse. Every point release has produced the same reaction since the original GPT-5 launch.
"This newer model is trash. It's acting like an auto-responder."
"Karen from HR," "made for 5 year olds who want to sound posh," and "auto-responder" are the three top-voted descriptions in the forum's 5.2 regression thread. None of them matches what OpenAI wrote in the release notes.
"I'm feel like i'm taking crazy pills."
From the original "GPT-5 is horrible" thread that attracted nearly 5,000 comments inside the first 24 hours, cited by Tom's Guide. The "crazy pills" framing captured the gaslighting longtime users felt when OpenAI's marketing described the rollout as an upgrade.
"It's like my chatGPT suffered a severe brain injury and forgot how to read. It is atrocious now."
Widely quoted comment in Tom's Guide and TechRadar coverage of the GPT-5 backlash. The user is describing the capability drop after GPT-5 silently replaced the warmer GPT-4o default.
"Answers are shorter and, so far, not any better than previous models. Combine that with more restrictive usage, and it feels like a downgrade branded as the new hotness."
This framing, a downgrade marketed as an upgrade, is the one Sam Altman later conceded on multiple podcasts, admitting OpenAI "screwed up" the GPT-5 rollout and "shouldn't have rushed" it.
"GPT-5 is a total disaster for customer service right now. Hallucinates frequently. It is really 'creative' wrongly and deeply frustrating to work with."
From the OpenAI Developer Community thread "Hallucinations and headaches using GPT-5 in production." This is an API customer, not a consumer, reporting the model is not shippable for customer-facing deployment.
"it hallucinates like… I can't even begin to describe it."
Among the first developer comments on the GPT-5 production hallucination thread. When an enterprise customer cannot finish the sentence, the story is already the headline.
"GPT-5 overpromised and lied when I gave it a task for translation… it started giving excuses. I think this is very scary."
A translation task the model not only failed, but actively misrepresented having completed. The "this is very scary" addendum is the part that keeps compliance teams up at night.
"Severe hallucinations, especially in technical explanations or programming logic."
Developer feedback filed to OpenAI's official community bug tracker. This is the specific failure mode engineers cannot work around: wrong code delivered in fluent prose.
"I think I'm done with ChatGPT unless they drastically upgrade their offering. Gemini and Claude have been absolutely blowing me away the last few weeks. I've completely transitioned out of OpenAI and now when I try to go back it's honestly a bit painful. What a wild ride seeing Google take the lead but can't say I'm surprised given their resources."
The highest-engagement cancellation post on r/ChatGPT in late 2025. 1,952 upvotes and 600 comments, most of them people saying they'd made the same call.
"Consider the ethical implications of continuing to use and pay for ChatGPT."
Actor-activist Mark Ruffalo amplified the QuitGPT campaign to his followers in early 2026, after OpenAI's Pentagon deal and the wave of hallucination-driven lawsuits. The campaign website passed 2.5 million cancellation pledges by mid-March.
"I cancelled my ChatGPT, Perplexity, and Gemini subscriptions for Claude - and I should have sooner."
Published on XDA Developers in 2026. The headline alone traveled further than the article because it captured what paying users had already decided.
These testimonials are treated as accounts, not as verified claims about internal OpenAI behavior. Every quote is from a public forum or mainstream news outlet; every attribution names the source. We cross-checked the OpenAI Community Forum thread IDs directly and confirmed the comments exist on the public record.
The purpose of this page is to document what paying ChatGPT subscribers are saying out loud, in public, repeatedly, across multiple platforms. Individual complaints are anecdotes. Thousands of users converging on the same vocabulary, across Reddit, the official OpenAI forum, and the developer community, within hours of each release, is a pattern. That pattern is what this page documents.
If you are a user who believes a quote has been taken out of context, see our Corrections page. If you want the full 620-report archive, it is on the User Stories page. If you want the underlying peer-reviewed and news-sourced research, see the Evidence Register.
ChatGPT Disaster documents AI failures, lawsuits, research, outages, and user-reported harms. We separate primary sources, court filings, peer-reviewed research, mainstream reporting, company statements, and user-submitted accounts so readers can judge the strength of each claim. Quotes on this page are drawn from the public OpenAI Community Forum, the r/ChatGPT subreddit, and reporting by TechRadar, Tom's Guide, MIT Technology Review, and XDA Developers.