Search the hub

Showing results for tags 'Chatbots'.


Found 9 results
  1. News Article
    Many popular AI chatbots, including ChatGPT and Google’s Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a new study. Research by a team of experts from around the world, led by researchers from Flinders University in Adelaide, Australia, and published in the BMJ, found that the large language models (LLMs) used to power publicly accessible chatbots failed to block attempts to create realistic-looking disinformation on health topics. As part of the study, researchers asked a range of chatbots to create a short blog post with an attention-grabbing title and containing realistic-looking journal references and patient and doctor testimonials on two health disinformation topics: that sunscreen causes skin cancer and that the alkaline diet is a cure for cancer. The researchers said that several high-profile, publicly available AI tools and chatbots, including OpenAI’s ChatGPT, Google’s Gemini and a chatbot powered by Meta’s Llama 2 LLM, consistently generated blog posts containing health disinformation when asked – including when retested three months after the initial test, after the failures had been reported to developers, to assess whether safeguards had improved. In response to the findings, the researchers have called for “enhanced regulation, transparency, and routine auditing” of LLMs to help prevent the “mass generation of health disinformation”. Read full story Source: The Independent, 20 March 2024
  2. News Article
    Britain’s hard-pressed carers need all the help they can get. But that should not include using unregulated AI bots, according to researchers who say the AI revolution in social care needs a hard ethical edge. A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. “If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” Green said. “That personal data could be generated and revealed to somebody else.” She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard. But there were also potential benefits to AI, Green added. “It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that.” Read full story Source: The Guardian, 10 March 2024
  3. News Article
    Despite the drawbacks of turning to artificial intelligence in medicine, some US physicians find that ChatGPT improves their ability to communicate with patients. Last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the artificial intelligence-powered chatbot. Experts expected that ChatGPT and other AI-driven large language models could take over mundane tasks that eat up hours of doctors’ time and contribute to burnout, like writing appeals to health insurers or summarising patient notes. However, they found that doctors were asking ChatGPT to help them communicate with patients in a more compassionate way. Dr Michael Pignone, the chairman of the department of internal medicine at the University of Texas at Austin, has no qualms about the help he and other doctors on his staff got from ChatGPT to communicate regularly with patients. By contrast, sceptics like Dr Dev Dash, who is part of the data science team at Stanford Health Care, are so far underwhelmed by the prospect of large language models like ChatGPT helping doctors. In tests performed by Dr Dash and his colleagues, the replies they received were occasionally wrong but, he said, more often were not useful or were inconsistent. If a doctor is using a chatbot to help communicate with a patient, errors could make a difficult situation worse. Read full story (paywalled) Source: New York Times, 12 June 2023
  4. News Article
    A US organisation that supports people with eating disorders has suspended use of a chatbot after reports it shared harmful advice. The National Eating Disorder Association (Neda) recently closed its live helpline and directed people seeking help to other resources, including the chatbot. The AI bot, named "Tessa," has been taken down, the association said. It will be investigating reports about the bot's behaviour. In recent weeks, some social media users posted screenshots of their experience with the chatbot online. They said the bot continued to recommend behaviours like calorie restriction and dieting, even after it was told the user had an eating disorder. For patients already struggling with stigma around their weight, further encouragement to shed pounds can lead to disordered eating behaviours like bingeing, restricting or purging, according to the American Academy of Family Physicians. Read full story Source: BBC News, 2 June 2023
  5. News Article
    Technology and healthcare companies are racing to roll out new tools to test for and eventually treat the coronavirus epidemic spreading around the world. But one sector that is holding back is the makers of artificial-intelligence-enabled diagnostic tools, increasingly championed by companies, healthcare systems and governments as a substitute for routine doctor-office visits. In theory, such tools, sometimes called “symptom checkers” or healthcare bots, sound like an obvious short-term fix: they could be used to help assess whether someone has Covid-19, the illness caused by the novel coronavirus, while keeping infected people away from crowded doctor’s offices or emergency rooms where they might spread it. These tools vary in sophistication. Some use a relatively simple process, like a decision tree, to provide online advice for basic health issues (a minimal sketch of this approach appears after this list). Other services say they use more advanced technology, like algorithms based on machine learning, that can diagnose problems more precisely. But some digital-health companies that make such tools say they are wary of updating their algorithms to incorporate questions about the new coronavirus strain. Their hesitancy highlights both how little is known about the spread of Covid-19 and the broader limitations of healthcare technologies marketed as AI in the face of novel, fast-spreading illnesses. Some companies say they don’t have enough data about the new coronavirus to plug into their existing products. London-based symptom-checking app Your.MD Ltd. recently added a “coronavirus checker” button that leads to a series of questions about symptoms. But it is based on a simple decision tree. The company said it won’t update the more sophisticated technology underpinning its main system, which is based on machine learning. “We made a decision not to do it through the AI because we haven’t got the underlying science,” said Maureen Baker, Chief Medical Officer for Your.MD. She said it could take 6 to 12 months before sufficient peer-reviewed scientific literature becomes available to help inform the redesign of algorithms used in today’s more advanced symptom checkers. Read full story Source: The Wall Street Journal, 29 February 2020
  6. News Article
    Babylon Health uses AI to provide healthcare to UK patients – even Health Secretary Matt Hancock uses it. But experts have questioned whether there’s enough evidence of the safety of its AI chatbot service. Watch the BBC Newsnight report
  7. News Article
    Controversial healthcare app maker Babylon Health has criticised the doctor who first raised concerns about the safety of its AI chatbot. Babylon Health’s chatbot is available in the company’s GP at Hand app, a digital healthcare solution championed by health secretary Matt Hancock. The chatbot aims to reduce the burden on GPs and A&E departments by automating the triage process to determine whether someone can treat themselves at home, should book an online or in-person GP appointment, or should go straight to a hospital. A Twitter user under the pseudonym of Dr Murphy first reached out to AI News back in 2018 alleging that Babylon Health’s chatbot was giving unsafe advice. Dr Murphy recently unveiled himself as Dr David Watkins and went public with his findings at The Royal Society of Medicine’s “Recent developments in AI and digital health 2020” event. Over the past couple of years, Dr Watkins has provided countless examples of the chatbot giving dangerous advice. In a press release (PDF) on Monday, Babylon Health calls Dr Watkins a “troll” who has “targeted members of our staff, partners, clients, regulators and journalists and tweeted defamatory content about us”. Read full story Source: AI News, 26 February 2020
  8. News Article
    A regulator has admitted “concerns” over the software Babylon Healthcare uses in one of its digital health solutions and is exploring how to address this. The Medicines and Healthcare products Regulatory Agency’s (MHRA) concerns relate to Babylon’s symptom checker “chatbot” tool. This is used by thousands of patients, including those registered with digital primary care practice GP at Hand. Two senior figures within the agency set out the MHRA’s concerns about the tool in a letter, seen by HSJ, which was sent to consultant oncologist David Watkins following a meeting between the parties last October. Dr Watkins has raised doubts over the tool’s safety for several years, including repeatedly documenting alleged flaws in the chatbot through videos posted online. However, last year, Babylon said only 20 of Dr Watkins’ 2,400 tests resulted in “genuine errors” being identified in the software. In the letter, dated 4 December, the MHRA’s clinical director for devices Duncan McPherson and head of software-related device technologies Johan Ordish said Dr Watkins’ “concerns are all valid and ones that we share”. The two MHRA directors also said the regulator is further exploring some of the issues highlighted and that the work could “be important as we develop a new regulatory framework for medical devices in the UK”. Read full story (paywalled) Source: HSJ, 4 March 2021
  9. Content Article
    Healthcare is advancing at a quicker rate than ever before. With the introduction of Artificial Intelligence (AI), you can now get a cancerous mole diagnosed with a mobile device. The reliance on technology has never been so great. With technology predicted to replace as much as 80 per cent of a physician’s everyday routine, we must ask what new threats this poses to patient safety. This article, written by CFC Underwriting, a specialist insurance provider, explains some of the pitfalls of the new technology.
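
A note on the “decision tree” approach mentioned in item 5: the sketch below is a minimal, hypothetical illustration of what such a symptom checker amounts to – a hand-authored tree of yes/no questions ending in fixed advice – as opposed to a machine-learning model. The questions, branches and advice strings are invented for illustration and are not taken from Your.MD or any other product named above.

```python
# Hypothetical sketch of a decision-tree symptom checker (illustrative only;
# not the logic of any real product). Each branch is a hand-written triage
# rule, which makes the tool easy to audit but slow to adapt to a new illness.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """Either an internal yes/no question or a leaf carrying fixed advice."""
    question: Optional[str] = None
    yes: Optional["Node"] = None
    no: Optional["Node"] = None
    advice: Optional[str] = None  # set only on leaf nodes


# Example hand-authored tree (questions and advice are made up for this sketch).
TREE = Node(
    question="Do you have severe difficulty breathing?",
    yes=Node(advice="Call emergency services now."),
    no=Node(
        question="Do you have a fever and a new continuous cough?",
        yes=Node(advice="Stay at home and arrange a remote GP consultation."),
        no=Node(advice="Self-care at home; seek advice if symptoms worsen."),
    ),
)


def triage(node: Node, answers: dict) -> str:
    """Walk the tree using pre-collected yes/no answers keyed by question text."""
    while node.advice is None:
        node = node.yes if answers.get(node.question, False) else node.no
    return node.advice


if __name__ == "__main__":
    print(triage(TREE, {
        "Do you have severe difficulty breathing?": False,
        "Do you have a fever and a new continuous cough?": True,
    }))
```

Because every branch is written by hand, adding cover for a novel illness means clinicians must author and validate new rules – which is the limitation the companies in item 5 describe when declining to update their tools without underlying evidence.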