Trust Center

Source Methodology

How ChatGPT Disaster classifies sources, evaluates evidence strength, handles user reports, and updates AI failure documentation.

Every claim on this site should be judged by the strength of its sources. This page explains how we classify evidence across the site.

Source Strength Ranking

  1. Primary documents: court records, regulatory filings, research papers, company status pages, official statements, and original datasets.
  2. Named institutional reporting: articles from recognized publications with clear authorship and editorial review.
  3. Expert commentary: named researchers, attorneys, clinicians, engineers, or policy specialists speaking within their expertise.
  4. User testimony: first-hand accounts from users, collected through forums, Reddit, email, or direct submissions. Useful for pattern detection, but not treated as independently verified fact on its own.
  5. Editorial inference: conclusions drawn by connecting multiple sources. These should be clearly labeled as analysis, not reported fact.
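The ranking above is an ordered taxonomy, so it can be sketched as an ordered enum. This is a hypothetical illustration of how the tiers relate, not the site's actual tooling; the class and function names are invented for this example.

```python
from enum import IntEnum

# Hypothetical sketch: the five-tier source ranking as an ordered enum.
# Lower values indicate stronger evidence; the names and ordering follow
# the ranking above, but nothing here is the site's real code.
class SourceStrength(IntEnum):
    PRIMARY_DOCUMENT = 1         # court records, filings, papers, datasets
    INSTITUTIONAL_REPORTING = 2  # named, editorially reviewed journalism
    EXPERT_COMMENTARY = 3        # named specialists within their expertise
    USER_TESTIMONY = 4           # first-hand accounts; pattern signal only
    EDITORIAL_INFERENCE = 5      # analysis drawn from multiple sources

def strongest(sources):
    """Return the strongest (lowest-numbered) tier backing a claim."""
    return min(sources)

# A claim supported by both a user account and a court filing is rated
# by its strongest source:
tiers = [SourceStrength.USER_TESTIMONY, SourceStrength.PRIMARY_DOCUMENT]
print(strongest(tiers).name)  # PRIMARY_DOCUMENT
```

Because the enum is ordered, tiers can be compared directly, which makes it easy to express rules such as "user testimony alone never outranks a primary document."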

User Stories

User stories are valuable because they show patterns and lived experience, but they can be incomplete, emotional, or disputed. We preserve them as accounts and avoid converting them into proven institutional facts unless independent documentation supports them.

Updates

AI products, lawsuits, and safety policies change quickly. Pages should be updated when new filings, research, status reports, or company statements materially change the record.

Editorial Standards and Source Transparency

ChatGPT Disaster documents AI failures, lawsuits, research, outages, and user-reported harms. We separate primary sources, court filings, peer-reviewed research, mainstream reporting, company statements, and user-submitted accounts so readers can judge the strength of each claim.
