Thursday, April 23, 2026

AI Chatbots Propagate Fake Disease 'Bixonimania' Claims - Let's Data Science

What happened

A Swedish medical researcher, Almira Osmanovic Thunstrom, invented a fictional eye condition called bixonimania and uploaded two deliberately ridiculous preprints in 2024, each containing obvious jokes and explicit admissions that the papers were fabricated. Within weeks, major conversational models (ChatGPT, Gemini, Copilot, and Perplexity) began presenting bixonimania as a real diagnosis, offering prevalence numbers and telling users to see an ophthalmologist. One peer-reviewed article even cited the fake work before retracting the citation, underscoring how AI-produced or AI-amplified misinformation can bleed into formal scientific literature.

Technical details

The failure compounds two engineering issues. First, public preprints and indexed web pages were ingested by retrieval pipelines without robust provenance or semantic filters. Second, retrieval-augmented generation (RAG) pipelines prioritized fluency and confidence over source skepticism, allowing LLMs to synthesize authoritative-sounding answers from low-quality or explicitly false inputs. Key attack surface elements include:

  • lack of automated checks for meta-evidence such as author legitimacy, funding source sanity, and explicit disclaimers inside documents
  • RAG systems that do not propagate verifiable citations or allow model uncertainty to surface when sources are weak
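The first gap above, documents carrying explicit fabrication disclaimers sailing straight into a retrieval index, is the easiest to illustrate. Below is a minimal sketch of a pre-ingestion screen; the pattern list, the `screen_document` function, and the threshold are hypothetical illustrations, not part of any named pipeline, and a production system would use richer heuristics or a trained classifier rather than a handful of regexes.

```python
import re

# Hypothetical self-disclosure markers; real pipelines would need far
# broader coverage (multilingual, paraphrase-robust, classifier-backed).
DISCLAIMER_PATTERNS = [
    r"\bthis (paper|study) is (a )?(fabricat|fiction|satir|joke)",
    r"\bnot a real (disease|condition|diagnosis)\b",
    r"\bfor demonstration purposes only\b",
]

def screen_document(text: str, min_flags: int = 1) -> dict:
    """Flag documents that self-declare as fabricated before they reach a RAG index."""
    flags = [p for p in DISCLAIMER_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return {"ingest": len(flags) < min_flags, "flags": flags}

doc = ("We describe bixonimania, a novel eye condition. "
       "Disclosure: this paper is fabricated and not a real diagnosis.")
result = screen_document(doc)
# result["ingest"] is False: the explicit fabrication admission blocks ingestion
```

The point of the sketch is that even this trivial check would have caught papers that openly admit to being fabricated; the harder unsolved problem is the second bullet, surfacing model uncertainty when the retrieved sources are weak but make no such admission.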

Context and significance

This is not an isolated hallucination. The incident maps directly to known problems in model...



Read Full Story: https://news.google.com/rss/articles/CBMimwFBVV95cUxNNHdJajN5RHRRWU1BM2lwc2RV...