AI medical diagnoses may include fake health info – US study - Medical Brief
An alarming US study has found that large language models like ChatGPT, which are increasingly used in healthcare, will accept fake medical claims when these are presented realistically in medical notes and social media discussions.
Experts also warn that AI cannot distinguish nuances when asked to diagnose a medical symptom, so the feedback you get may be incorrect, which poses all sorts of potential hazards.
Understanding the limitations of AI diagnoses
We’ve all been there: a mysterious ache or an unexplained rash leads us straight to “Doctor Google”. You type in a mild cough and, three clicks later, you’re convinced you have a rare tropical disease.
In this era of digital hypochondria, reports IOL, search results have often brought more panic than peace of mind, but today those consultations have shifted from search bars to sophisticated AI tools like ChatGPT.
Given their calm, authoritative tone and their ability to process vast amounts of data in seconds, it’s tempting to treat these bots as pocket-sized medical specialists.
However, the tech giants themselves are urging caution: Google recently removed several AI-generated health summaries from its search results after investigations revealed inaccuracies in the responses.
Even OpenAI, the creator of ChatGPT, includes a firm disclaimer at the bottom of medical-related interactions: “ChatGPT can make mistakes. Check important info. This tool is not intended for...
Read Full Story: https://news.google.com/rss/articles/CBMilAFBVV95cUxPR3pRMEZ5T1lfclBQUy1xZ0My...