Health and Human Services Secretary Robert F. Kennedy Jr. told Congress that AI is already speeding up drug approvals, managing healthcare data “perfectly securely” and trimming fat from sprawling agencies.
The US Food and Drug Administration unveiled Elsa, a generative AI tool that was supposed to slash the time for drug and medical device approvals. The agency leadership trumpeted it as a breakthrough. But inside the FDA, Elsa has been met with scepticism, frustration and outright alarm.
Six current and former FDA officials told CNN that Elsa is fine for drafting emails, generating meeting notes and spitting out bland templates. But when it comes to science, it becomes a liability. It has confidently fabricated studies, misrepresented real research and generally proved useless for serious drug and device reviews.
One FDA employee told CNN: “Anything that you don’t have time to double-check is unreliable. It hallucinates confidently.”
Another complained it wastes more time than it saves, adding: “I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have.”
Elsa cannot even help with the FDA’s core review work because it lacks access to critical industry documents. It cannot tell how many times a company has filed for approval or list its existing products. Despite FDA Commissioner Marty Makary boasting Elsa would transform drug approvals, it can’t answer the most basic questions.
The FDA claimed in June that Elsa was already “accelerating clinical protocol reviews” and identifying high-priority inspection targets. When pressed, Makary admitted most scientists only use Elsa for organisational grunt work like finding studies or summarising meetings.
Jeremy Walsh, the FDA’s head of AI, conceded that Elsa “can hallucinate nonexistent studies” like any other large language model. He insists updates will let staff upload their own document libraries to improve its usefulness.
Makary stressed: “They don’t have to use Elsa if they don’t find it to have value. You have to determine what is reliable information that you can make major decisions based on, and I think we do a great job of that.”
Elsa grew out of a rushed AI push that began under the Biden administration. Its name originally stood for “Efficient Language System for Analysis,” but FDA leadership eventually dropped the acronym.
During a demo for CNN, Elsa looked like any off-the-shelf chatbot. Ask it about guidance on fatty liver disease and it digs up papers from an internal FDA library. When it launched in June, Makary boasted it came “ahead of schedule and under budget.” But FDA staffers say adoption has been weak, and those who do use it encounter glaring errors.
One employee said Elsa summarised 20 pages of research into a single paragraph but missed key details a human reviewer would never overlook. Another asked it how many drugs with a certain label were approved for children and it got the answer wrong. When told it was incorrect, Elsa apologised but still couldn’t provide the right number.
Walsh says new features like clickable document citations and training will fix some of these problems. But those only apply to internal FDA documents, not medical journals or other crucial external sources. And knowing what’s important still depends entirely on how users phrase their questions.
Despite the ongoing mess, Elsa is being positioned as part of a bigger AI push inside Trump’s second administration, where tech is being deployed at breakneck speed with barely any oversight. Europe has already enacted strict AI safeguards under its AI Act, but the US has no equivalent. Biden’s attempt to craft AI regulation fizzled out, and Congress remains paralysed by competing interests.
Stanford University assistant professor Jonathan Chen, who studies AI in medicine, is less impressed. “AI does a lot of stuff, but it’s not magic,” he said. It might help sniff out dodgy data or speed up analysis, but some problems run deeper than anything a chatbot can handle.
FDA staff are left with a glitchy bot that confidently makes things up while they scramble to double-check everything it says. Or, as one employee put it, Elsa is mostly just an apologetic time sink pretending to be the future.