AI Chatbots Are Even Worse at Giving Medical Advice Than We Thought

News by: Live Hacker

In the study, researchers first quizzed several chatbots about medical information. In these carefully conducted tests, ChatGPT-4o, Llama 3, and Command R+ correctly diagnosed medical scenarios an impressive 94% of the time—though they were able to recommend the right treatment a much less impressive 56% of the time.

The researchers then gave medical scenarios to 1,298 people, and asked them to use an LLM to figure out what might be going on in that scenario, plus what they should do about it (for example, whether they should call an ambulance, follow up with their doctor when convenient, or take care of the issue on their own).

As the researchers write, “Strong performance from the LLMs operating alone is not sufficient for strong performance with users.” Plenty of previous research has shown that chatbot output is sensitive to the exact phrasing people use when asking questions, and that chatbots seem to prioritize pleasing a user over giving correct information. 

The researchers analyzed chat logs to figure out where things broke down. Here are some of the issues they identified:

The bots “generated several types of misleading and incorrect information.” Sometimes they ignored important details to zero in on something else; sometimes they recommended calling an emergency number but gave the wrong one (such as an Australian emergency number for U.K. users).

People varied in how they conversed with the chatbot. For example, some asked specific questions to constrain the bot’s answers, while others let the bot take the lead. Either approach could introduce unreliability into the LLM’s output.

Overall, people who didn't use LLMs were 1.76 times more likely to get the right diagnosis. (Both groups were similarly likely to figure out the right course of action, but that's not saying much—on average, they only got it right about 43% of the time.) The researchers described the control group as doing "significantly better" at the task. And this may represent a best-case scenario: the researchers point out that they provided clear examples of common conditions, and LLMs would likely do worse with rare conditions or more complicated medical scenarios. They conclude: “Despite strong performance from the LLMs alone, both on existing benchmarks and on our scenarios, medical expertise was insufficient for effective patient care.”

Chatbots are a risk for doctors, too

ECRI, a medical safety nonprofit, put the misuse of AI chatbots in the number one spot on its list of health technology hazards of 2026. While the AI hype machine is trying to convince you to give ChatGPT your medical information, ECRI correctly points out that it’s wrong to think of these chatbots as having human personalities or cognition: “While these models produce humanlike responses, they do so by predicting the next word based on large datasets, not through genuine comprehension of the information.”

Even in situations that don’t seem like life-and-death cases, consulting a chatbot can cause harm. ECRI asked four LLMs to recommend brands of gel that could be used with a certain ultrasound device on a patient with an indwelling catheter near the area being scanned. It’s important to use a sterile gel in this situation, because of the risk of infection. Only one of the four chatbots identified this issue and made appropriate suggestions; the others just recommended regular ultrasound gels. In other cases, ECRI’s tests resulted in chatbots giving unsafe advice on electrode placement and isolation gowns. 

Clearly, LLM chatbots are not ready to be trusted in medical care, whether you’re the person seeking care, the doctor treating them, or even the staffer ordering supplies. But the services are already out there, being widely used and aggressively promoted. (Their makers are even battling for attention in Super Bowl ads.) There’s no good way to be sure these chatbots aren’t involved in your care, but at the very least we can stick with good old Dr. Google—just make sure to disable AI-powered search results.
