
AI medical advice on drugs could lead to death in one in five cases


Dr AI could be providing 'potentially harmful' medication advice, a concerning study has suggested. 

German researchers found more than a fifth of AI-powered chatbot answers to common prescription drug questions could 'lead to death or severe harm'. 

Experts urged patients not to rely on such search engines to give them accurate and safe information. 

Medics were also warned against recommending the tools until more 'precise and reliable' alternatives are made available. 

In the study, the scientists from the University of Erlangen-Nuremberg pinpointed the 10 most frequently asked patient questions for the 50 most prescribed drugs in the US.

These included adverse drug reactions, instructions for use and contraindications — reasons why the medication should not be taken. 

Using Bing Copilot — a search engine with AI-powered chatbot features developed by Microsoft — researchers assessed all 500 responses against answers given by clinical pharmacists and doctors with expertise in pharmacology. 

Responses were also compared against a peer-reviewed up-to-date drugs information website. 

They found chatbot statements didn’t match the reference data in over a quarter (26%) of all cases and were fully inconsistent in just over 3%. 

But further analysis of a subset of 20 answers also revealed just over four in ten (42%) were considered likely to lead to moderate or mild harm, and 22% to death or severe harm. 

Read full story

Source: Mail Online, 11 October 2024
