The Food and Drug Administration's new AI tool, touted by Secretary of Health and Human Services Robert F. Kennedy, Jr. as a revolutionary way to shorten drug approvals, is so far producing more hallucinations than solutions.
Known as Elsa, the AI chatbot was introduced to help FDA employees with daily tasks like meeting notes and emails, while also supporting faster drug and device approval turnaround times by sorting through important application data. But according to FDA insiders who spoke to CNN on condition of anonymity, the chatbot is rife with hallucinations, often fabricating medical studies or misinterpreting important data. Staffers have sidelined the tool, with sources saying it cannot be used in reviews and lacks access to crucial internal documents that employees were promised.
"It hallucinates confidently," one FDA employee told CNN. According to the sources, the tool often provides incorrect answers on the FDA's research areas, drug labels, and can't link to third-party citations from external medical journals.
Despite initial claims that the tool was already integrated into the clinical review protocol, FDA Commissioner Marty Makary told CNN that it was only being used for "organizational duties" and that employees were not required to use it. The FDA's head of AI acknowledged to the publication that the tool was at risk of hallucinating, carrying the same risk as other LLMs. Both said they weren't surprised it made mistakes, and said further...