Medical artificial intelligence (AI) is often described as a way to make patient care safer by helping clinicians manage information. A new study by the Icahn School of Medicine at Mount Sinai and collaborators confronts a critical vulnerability: when a medical lie enters the system, can AI pass it on as if it were true?
Analyzing more than a million prompts across nine leading language models, the researchers found that these systems can repeat false medical claims when they appear in realistic hospital notes or social-media health discussions.
The findings, published in the February 9 online issue of The Lancet Digital Health [10.1016/j.landig.2025.100949], suggest that current safeguards do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language.
To test this systematically, the team exposed the models to three types of content: real hospital discharge summaries from the Medical Information Mart for Intensive Care (MIMIC) database with a single fabricated recommendation added; common health myths collected from Reddit; and 300 short clinical scenarios written and validated by physicians. Each case was presented in multiple versions, from neutral wording to emotionally charged or leading phrasing similar to what circulates on social platforms.
In one example, a discharge note falsely advised patients with esophagitis-related bleeding to "drink cold milk to soothe the symptoms." Several models accepted the...
Read Full Story:
https://news.google.com/rss/articles/CBMipgFBVV95cUxNa3daNUlsb0R4RzM0VHdheEpy...