Search the hub
Showing results for tags 'Chatbots'.
News Article
AI chatbots ‘lack safeguards to prevent spread of health disinformation’
Patient Safety Learning posted a news article in News
Many popular AI chatbots, including ChatGPT and Google’s Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a new study. The research, carried out by a team of experts from around the world, led by researchers from Flinders University in Adelaide, Australia, and published in the BMJ, found that the large language models (LLMs) used to power publicly accessible chatbots failed to block attempts to create realistic-looking disinformation on health topics.

As part of the study, researchers asked a range of chatbots to create a short blog post with an attention-grabbing title, realistic-looking journal references, and patient and doctor testimonials on two health disinformation topics: that sunscreen causes skin cancer and that the alkaline diet is a cure for cancer. Several high-profile, publicly available AI tools and chatbots, including OpenAI’s ChatGPT, Google’s Gemini and a chatbot powered by Meta’s Llama 2 LLM, consistently generated blog posts containing health disinformation when asked. They continued to do so three months after the initial test, even though the findings had been reported to developers, when the researchers retested the tools to assess whether safeguards had improved.

In response to the findings, the researchers have called for “enhanced regulation, transparency, and routine auditing” of LLMs to help prevent the “mass generation of health disinformation”.

Read full story

Source: The Independent, 20 March 2024
News Article
Warning over use in UK of unregulated AI chatbots to create social care plans
Patient Safety Learning posted a news article in News
Britain’s hard-pressed carers need all the help they can get. But that should not include using unregulated AI bots, according to researchers who say the AI revolution in social care needs a hard ethical edge.

A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. “If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” Green said. “That personal data could be generated and revealed to somebody else.”

She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard. But there were also potential benefits to AI, Green added: “It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that.”

Read full story

Source: The Guardian, 10 March 2024
News Article
How chatbots are helping doctors be more human and empathetic
Patient Safety Learning posted a news article in News
Despite the drawbacks of turning to artificial intelligence in medicine, some US physicians find that ChatGPT improves their ability to communicate with patients.

Last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial-intelligence-powered chatbot. Experts expected that ChatGPT and other AI-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, like writing appeals to health insurers or summarising patient notes. Instead, they found that doctors were asking ChatGPT to help them communicate with patients in a more compassionate way.

Dr Michael Pignone, the chairman of the department of internal medicine at the University of Texas at Austin, has no qualms about the help he and other doctors on his staff got from ChatGPT to communicate regularly with patients. However, sceptics like Dr Dev Dash, who is part of the data science team at Stanford Health Care, are so far underwhelmed by the prospect of large language models like ChatGPT helping doctors. In tests performed by Dr Dash and his colleagues, the replies they received were occasionally wrong but, he said, more often were not useful or were inconsistent. If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation worse.

Read full story (paywalled)

Source: New York Times, 12 June 2023
News Article

A US organisation that supports people with eating disorders has suspended use of a chatbot after reports that it shared harmful advice.

The National Eating Disorder Association (Neda) recently closed its live helpline and directed people seeking help to other resources, including the chatbot. The AI bot, named “Tessa”, has been taken down, the association said, and it will be investigating reports about the bot’s behaviour.

In recent weeks, some social media users posted screenshots of their experience with the chatbot online. They said the bot continued to recommend behaviours like calorie restriction and dieting, even after it was told the user had an eating disorder. For patients already struggling with stigma around their weight, further encouragement to shed pounds can lead to disordered eating behaviours like bingeing, restricting or purging, according to the American Academy of Family Physicians.

Read full story

Source: BBC News, 2 June 2023
Tagged with: Eating disorder, Chatbots (and 2 more)
News Article

A regulator has admitted “concerns” over the software Babylon Healthcare uses in one of its digital health solutions and is exploring how to address this.

The Medicines and Healthcare products Regulatory Agency’s (MHRA) concerns relate to Babylon’s symptom checker “chatbot” tool. This is used by thousands of patients, including those registered with digital primary care practice GP at Hand. Two senior figures within the agency set out the MHRA’s concerns about the tool in a letter, seen by HSJ, which was sent to consultant oncologist David Watkins following a meeting between the parties last October.

Dr Watkins has raised doubts over the tool’s safety for several years, including repeatedly documenting alleged flaws in the chatbot in videos posted online. However, last year Babylon said only 20 of Dr Watkins’ 2,400 tests had identified “genuine errors” in the software.

In the letter, dated 4 December, the MHRA’s clinical director for devices Duncan McPherson and head of software related device technologies Johan Ordish said Dr Watkins’ “concerns are all valid and ones that we share”. They also said the regulator is further exploring some of the issues highlighted and that the work could “be important as we develop a new regulatory framework for medical devices in the UK”.

Read full story (paywalled)

Source: HSJ, 4 March 2021
Tagged with: Digital health, Technology (and 2 more)
News Article
Coronavirus reveals limits of AI health tools
Patient Safety Learning posted a news article in News
Technology and healthcare companies are racing to roll out new tools to test for and eventually treat the coronavirus epidemic spreading around the world. But one sector holding back is the makers of artificial-intelligence-enabled diagnostic tools, increasingly championed by companies, healthcare systems and governments as a substitute for routine doctor-office visits.

In theory, such tools, sometimes called “symptom checkers” or healthcare bots, sound like an obvious short-term fix: they could be used to help assess whether someone has Covid-19, the illness caused by the novel coronavirus, while keeping infected people away from crowded doctor’s offices or emergency rooms where they might spread it.

These tools vary in sophistication. Some use a relatively simple process, like a decision tree, to provide online advice for basic health issues. Other services say they use more advanced technology, like algorithms based on machine learning, that can diagnose problems more precisely. But some digital-health companies that make such tools say they are wary of updating their algorithms to incorporate questions about the new coronavirus strain. Their hesitancy highlights both how little is known about the spread of Covid-19 and the broader limitations of healthcare technologies marketed as AI in the face of novel, fast-spreading illnesses.

Some companies say they don’t have enough data about the new coronavirus to plug into their existing products. London-based symptom-checking app Your.MD Ltd. recently added a “coronavirus checker” button that leads to a series of questions about symptoms, but it is based on a simple decision tree. The company said it won’t update the more sophisticated technology underpinning its main system, which is based on machine learning. “We made a decision not to do it through the AI because we haven’t got the underlying science,” said Maureen Baker, Chief Medical Officer for Your.MD.
She said it could take 6 to 12 months before sufficient peer-reviewed scientific literature becomes available to help inform the redesign of the algorithms used in today’s more advanced symptom checkers.

Read full story

Source: The Wall Street Journal, 29 February 2020

Tagged with: Medicine - Infectious disease, AI (and 2 more)
News Article
Digital health: Is it clinically effective?
Patient Safety Learning posted a news article in News
Babylon Health uses AI to provide healthcare to UK patients – even Health Secretary Matt Hancock uses it. But experts have questioned whether there is enough evidence of the safety of its AI chatbot service.

Watch the BBC Newsnight report
News Article
Babylon Health lashes out at doctor who raised AI chatbot safety concerns
Patient Safety Learning posted a news article in News
Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of its AI chatbot.

Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock. The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or should go straight to a hospital.

A Twitter user writing under the pseudonym Dr Murphy first reached out to AI News back in 2018, alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently revealed himself as Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020” event. Over the past couple of years, Dr Watkins has provided countless examples of the chatbot giving dangerous advice.

In a press release (PDF) on Monday, Babylon Health called Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”.

Read full story

Source: AI News, 26 February 2020
Content Article

Healthcare is advancing at a quicker rate than ever before. With the introduction of Artificial Intelligence (AI), you can now get a cancerous mole diagnosed with a mobile device. The reliance on technology has never been so great. With technology predicted to replace as much as 80 per cent of a physician’s everyday routine, we must ask what new threats this poses to patient safety.

This article, written by CFC Underwriting, a specialist insurance provider, explains some of the pitfalls of the new technology.
Tagged with: Latent error, Omissions (and 5 more)