Search the hub

Showing results for tags 'AI'.


Found 155 results
  1. Content Article
    Surgical Site Infections (SSIs) can have subtle, early signs that are not readily identifiable. This study aimed to develop a machine learning algorithm that could identify early SSIs based on thermal images. Images were taken of surgical incisions on 193 patients who underwent a variety of surgical procedures, but only five of these patients developed SSIs, which limited testing of the models developed. However, the authors were able to generate two models to successfully segment wounds. This proof-of-concept demonstrates that computer vision has the potential to support future surgical applications.
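    The article does not reproduce the authors’ two segmentation models, but the core idea of separating a warmer incision region from cooler surrounding skin can be illustrated in a few lines. The following is a toy sketch on synthetic data, assuming NumPy and scikit-image are available; it is not the study’s method.
      # Toy sketch: segmenting a "warm" incision region in a synthetic thermal
      # image. Not the paper's model, just the basic idea of separating a
      # wound from background by temperature contrast.
      import numpy as np
      from skimage.filters import threshold_otsu

      rng = np.random.default_rng(0)
      frame = rng.normal(loc=33.0, scale=0.3, size=(64, 64))  # skin at ~33 C
      frame[24:40, 24:40] += 1.5                              # warmer incision site

      t = threshold_otsu(frame)   # data-driven temperature threshold
      mask = frame > t            # candidate wound pixels
      print(f"threshold={t:.2f} C, segmented {mask.sum()} pixels")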
  2. Content Article
    As the NHS’s digital transformation journey enters a new phase, there are opportunities to improve the quality and productivity of the healthcare system. This phase is not just about advancing the maturity of electronic health records (EHRs) but also about embracing the vast potential of generative artificial intelligence tools. In this HSJ article, Robert Wachter and Harpreet Sood explore the reasons why EHRs have not yet delivered promised productivity improvements and look at how GenAI offers opportunities for the NHS to realise productivity benefits faster, cheaper and at a greater scale.
  3. Content Article
    In this report for Stat, technology correspondent Casey Ross looks at the dangers involved in using AI to predict patient outcomes, especially in life-or-death situations such as suspected sepsis. He looks at the recent case of US electronic health record provider Epic, which was forced to rewrite the algorithm used by tens of thousands of US clinicians to predict sepsis.
  4. Content Article
    Incident reports of medication errors are valuable learning resources for improving patient safety. However, key information is often contained within unstructured free text, which prevents automated analysis and limits the usefulness of these data. Natural language processing can be used to structure this free text automatically and retrieve relevant past incidents and learning materials, but this requires a large, fully annotated and validated set of incident reports. This study in Nature used a set of 58,658 machine-annotated incident reports of medication errors to test a natural language processing model. The authors provide access to the validation datasets and machine annotator for labelling future incident reports of medication errors.
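    As a rough illustration of the labelling step the study automates (not the authors’ machine annotator), a baseline free-text classifier can be sketched with scikit-learn. The report texts and label set below are invented for the example.
      # Minimal sketch: a TF-IDF + logistic regression baseline for labelling
      # free-text medication incident reports. Data and classes are invented.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      reports = [
          "patient given 10x intended dose of insulin",
          "wrong drug dispensed, names look alike",
          "dose omitted overnight, chart not updated",
          "correct dose administered after double check",
      ]
      labels = ["overdose", "wrong_drug", "omission", "no_error"]

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
      clf.fit(reports, labels)
      print(clf.predict(["insulin dose ten times higher than prescribed"]))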
  5. News Article
    Investigators have applied artificial intelligence (AI) techniques to gait analyses and medical records data to provide insights about individuals with leg fractures and aspects of their recovery. The study, published in the Journal of Orthopaedic Research, uncovered a significant association between the rates of hospital readmission after fracture surgery and the presence of underlying medical conditions. Correlations were also found between underlying medical conditions and orthopaedic complications, although these links were not significant. It was also apparent that gait analyses in the early post-injury phase offer valuable insights into the injury’s impact on locomotion and recovery. For clinical professionals, these patterns were key to optimising rehabilitation strategies.
    "Our findings demonstrate the profound impact that integrating machine learning and gait analysis into orthopaedic practice can have, not only in improving the accuracy of post-injury complication predictions but also in tailoring rehabilitation strategies to individual patient needs," said corresponding author Mostafa Rezapour, PhD, of Wake Forest University School of Medicine. "This approach represents a pivotal shift towards more personalised, predictive, and ultimately more effective orthopaedic care."
    Read full story
    Source: Digital Health News, 12 April 2024
  6. News Article
    Researchers at the National Institutes of Health applied artificial intelligence (AI) to a technique that produces high-resolution images of cells in the eye. They report that with AI, imaging is 100 times faster and improves image contrast 3.5-fold. The advance, they say, will provide researchers with a better tool to evaluate age-related macular degeneration (AMD) and other retinal diseases.
    "Artificial intelligence helps overcome a key limitation of imaging cells in the retina, which is time," said Johnny Tam, Ph.D., who leads the Clinical and Translational Imaging Section at NIH's National Eye Institute. Tam is developing a technology called adaptive optics (AO) to improve imaging devices based on optical coherence tomography (OCT). Like ultrasound, OCT is noninvasive, quick, painless, and standard equipment in most eye clinics.
    "Our results suggest that AI can fundamentally change how images are captured," said Tam. "Our P-GAN artificial intelligence will make AO imaging more accessible for routine clinical applications and for studies aimed at understanding the structure, function, and pathophysiology of blinding retinal diseases. Thinking about AI as a part of the overall imaging system, as opposed to a tool that is only applied after images have been captured, is a paradigm shift for the field of AI."
    Read full story
    Source: Digital Health News, 11 April 2024
  7. Event
    Bevan Brittan are delighted to announce their next Digital Health and Care Forum. This session will be hosted in partnership with the Masala Network, the health and life science network for South Asians in the UK. At this in-person event, we will be joined by a panel of experts in the field including:
      • Dr Amrita Kumar, AI Clinical Lead and Consultant Radiologist at NHS
      • Haris Shuaib, Founder and CEO at Newton’s Tree
      • Helen Hughes, Chief Executive at Patient Safety Learning
      • Hassan Chaudhury, Co-Founder at Vita Health Care Solutions
      • Daniel Morris, Partner at Bevan Brittan
    AI and data-driven technology continues to revolutionise health and care at a giddying pace and offers enormous opportunity to shape and future-proof health systems that could be more affordable, sustainable and equitable. But to fully realise AI’s transformational potential there is a pressing need to ensure public and clinical buy-in. In this session we will consider how evidence- and ethics-based frameworks and guidelines, patient safety measures and regulation can all foster trust in AI.
    The Digital Health & Care Forum is intended to be an interactive session and an opportunity and safe space to exchange views, identify and explore key issues and share knowledge. Our events are attended by developers, purchasers, providers, funders, insurers and policy makers.
    Register
  8. Content Article
    Currently, surgical site infection surveillance relies on labour-intensive manual chart review. Recently suggested solutions involve machine learning to identify surgical site infections directly from the medical record. Deep learning is a form of machine learning that has historically performed better than traditional methods, while being harder to interpret. This study proposed a deep learning model—an explainable long short-term memory network—for the identification of surgical site infection from the medical record. The study found that the model had greater sensitivity when compared to traditional machine learning methods.
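    For readers unfamiliar with the architecture, a minimal sketch of a long short-term memory classifier is shown below: it reads a sequence of daily observations from a patient record and outputs a probability, assuming PyTorch. The feature layout and sizes are invented, and the explainability component of the actual study is omitted.
      # Sketch of the general approach (not the authors' network): an LSTM
      # reads daily structured observations and outputs an SSI probability.
      import torch
      import torch.nn as nn

      class SSILstm(nn.Module):
          def __init__(self, n_features=8, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                       # x: (batch, days, features)
              _, (h, _) = self.lstm(x)
              return torch.sigmoid(self.head(h[-1]))  # P(SSI) per patient

      model = SSILstm()
      x = torch.randn(4, 14, 8)     # 4 patients, 14 post-op days of features
      print(model(x).squeeze(-1))   # illustrative probabilities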
  9. News Article
    Drugs are a cornerstone of medicine, but sometimes doctors make mistakes when prescribing them and patients don’t take them properly. A new AI tool developed at Oxford University aims to tackle both those problems. DrugGPT offers a safety net for clinicians when they prescribe medicines and gives them information that may help their patients better understand why and how to take them. Doctors and other healthcare professionals who prescribe medicines will be able to get an instant second opinion by entering a patient’s conditions into the chatbot. Prototype versions respond with a list of recommended drugs and flag up possible adverse effects and drug-drug interactions.
    “One of the great things is that it then explains why,” said Prof David Clifton, whose team at Oxford’s AI for Healthcare lab led the project. “It will show you the guidance – the research, flowcharts and references – and why it recommends this particular drug.”
    Read full story
    Source: The Guardian, 31 March 2024
  10. Content Article
    This study aimed to find out whether using an artificial intelligence (AI) deterioration model decreased the risk of escalations in care during hospitalisation. The study's findings suggest that use of an AI model is associated with a decreased risk of escalations in care.
  11. News Article
    The NHS is set to roll out artificial intelligence (AI) to reduce the number of missed appointments and free up staff time to help bring down the waiting list for elective care. The expansion to ten more NHS trusts follows a successful pilot at Mid and South Essex NHS Foundation Trust, which has seen the number of did not attends (DNAs) slashed by almost a third in six months.
    Created by Deep Medical and co-designed by a frontline worker and NHS clinical fellow, the software predicts likely missed appointments through algorithms and anonymised data, breaking down the reasons why someone may not attend an appointment using a range of external insights including the weather, traffic, and jobs, and offers back-up bookings. The appointments are then arranged for the most convenient time for patients – for example, it will give evening and weekend slots to those less able to take time off during the day. The system also implements intelligent back-up bookings to ensure no clinical time is lost while maximising efficiency.
    It has been piloted for six months at Mid and South Essex NHS Foundation Trust, leading to a 30% fall in non-attendances. A total of 377 DNAs were prevented during the pilot period and an additional 1,910 patients were seen. It is estimated the trust, which supports a population of 1.2 million people, could save £27.5 million a year by continuing with the programme. The AI software is now being rolled out to ten more trusts across England in the coming months.
    Read full story
    Source: NHS England, 14 March 2024
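    Deep Medical’s software is proprietary, but the pattern described here (score each appointment’s non-attendance risk from contextual features, then trigger back-up bookings above a threshold) can be sketched as below. The feature names, synthetic labels and threshold are all assumptions for illustration.
      # Illustrative sketch only: predict non-attendance (DNA) risk from
      # appointment features, then flag high-risk slots for back-up booking.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(1)
      # Invented feature columns: [hour_of_day, rain_mm, travel_minutes, weekday]
      X = rng.uniform([8, 0, 5, 0], [18, 10, 90, 6], size=(500, 4))
      # Toy labels: rainy, long-journey, early slots get missed more often
      p_dna = 1 / (1 + np.exp(-(0.3 * X[:, 1] + 0.02 * X[:, 2] - 0.2 * (X[:, 0] - 8))))
      y = rng.random(500) < p_dna

      model = GradientBoostingClassifier().fit(X, y)
      risk = model.predict_proba([[9, 8.0, 75, 2]])[0, 1]  # 9am, rain, 75-min trip
      if risk > 0.5:
          print(f"DNA risk {risk:.0%}: offer an evening slot and a back-up booking")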
  12. Content Article
    When ECRI unveiled its list of the leading threats to patient safety for 2024, some items were to be expected, such as physician burnout, delays in care due to drug shortages or falls in the hospital. However, ECRI, a non-profit group focused on patient safety, placed one item atop all others: the challenges in helping new clinicians move from training to caring for patients.
    In an interview with Chief Healthcare Executive®, Dr. Marcus Schabacker, president and CEO of ECRI, explained that workforce shortages are making it more difficult for newer doctors and nurses to make the transition and grow comfortably. “We think that that is a challenging situation, even in the best of times,” Schabacker says. “But in this time, these clinicians who are coming to practice now had a very difficult time during the pandemic, which was only a couple years ago, to get the necessary hands-on training. And so we're concerned about that.”
  13. News Article
    Presymptom Health’s technology provides early and reliable information about infection status and severity in patients with non-specific symptoms, helping doctors make better treatment decisions. The company’s tests can be run on NHS PCR platforms, which were widely deployed during the COVID pandemic and are now often under-utilised. By detecting true infection and sepsis earlier, it’s possible to save lives and significantly reduce the incorrect use of antibiotics.
    When it comes to sepsis, Presymptom’s technology could revolutionise treatment. According to The UK Sepsis Trust, every 3 seconds, someone in the world dies of sepsis. In the UK alone, 245,000 people are affected by sepsis every year, with at least 48,000 losing their lives to sepsis-related illnesses. This is more than breast, bowel and prostate cancer combined. When diagnosed at a late stage, the likelihood of death increases by 10% for every hour left untreated. Yet, for many patients, with early diagnosis it is easily treatable.
    “We’re confident that our first product can play a big part in tackling Anti-Microbial Resistance (AMR), which has been identified by the World Health Organisation as one of the top 10 global public health threats,” said Dr Iain Miller, CEO of Presymptom Health. “By understanding the presence, or absence, of infection as early as possible, doctors can be more confident in their diagnosis and avoid unnecessarily prescribing antibiotics – something that is a growing concern in the NHS and globally.
    “Take sepsis as an example: sepsis diagnostics hasn’t moved on in more than a century, and currently doctors can only diagnose it when advanced symptoms and organ failure are present – which is often too late. Our technology enables doctors to diagnose both infection and sepsis up to three days before formal clinical diagnosis, radically transforming the process and preventing unnecessary deaths.”
    The science behind Presymptom’s technology is based upon 10 years of work conducted at the Defence Science and Technology Laboratory (Dstl) and originated from £16m of sustained Ministry of Defence investment in a programme of research designed to help service personnel survive infection from combat injuries. The technology is currently undergoing clinical trials at nine NHS hospitals in the UK, with results anticipated later in 2024. In addition, Presymptom is working on additional UK and EU trials.
  14. Content Article
    The role of artificial intelligence (AI) in healthcare is expanding quickly, with clinical, administrative and patient-facing uses emerging in many specialties. Research on the effectiveness of AI in healthcare is generally weak, but evidence of AI improving doctors’ diagnostic decisions is emerging for some focused clinical applications, including interpreting lung pathology and retinal images. However, we must work with patients to understand how AI impacts on their care, says Rebecca Rosen in this BMJ opinion piece.
  15. News Article
    Many popular AI chatbots, including ChatGPT and Google’s Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a new study. Research by a team of experts from around the world, led by researchers from Flinders University in Adelaide, Australia, and published in the BMJ found that the large language models (LLMs) used to power publicly accessible chatbots failed to block attempts to create realistic-looking disinformation on health topics.
    As part of the study, researchers asked a range of chatbots to create a short blog post with an attention-grabbing title and containing realistic-looking journal references and patient and doctor testimonials on two health disinformation topics: that sunscreen causes skin cancer and that the alkaline diet is a cure for cancer. The researchers said that several high-profile, publicly available AI tools and chatbots, including OpenAI’s ChatGPT, Google’s Gemini and a chatbot powered by Meta’s Llama 2 LLM, consistently generated blog posts containing health disinformation when asked – including when the test was repeated three months later, after the failures had been reported to developers, to assess whether safeguards had improved.
    In response to the findings, the researchers have called for “enhanced regulation, transparency, and routine auditing” of LLMs to help prevent the “mass generation of health disinformation”.
    Read full story
    Source: The Independent, 20 March 2024
  16. Content Article
    This cross-sectional study in JAMA Network aimed to assess whether a large language model can transform discharge summaries into a format that is more readable and understandable for patients. The findings suggest that a large language model could be used to translate discharge summaries into patient-friendly language and format, but implementation will require improvements in accuracy, completeness and safety.
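    The general pattern the study evaluates, prompting a large language model to rewrite a discharge summary at a low reading level without adding facts, looks roughly like the sketch below. It assumes the OpenAI Python client and an illustrative model name; a real deployment would need the accuracy, completeness and safety checks the authors describe.
      # Hedged sketch: rewriting a discharge summary in plain language with an
      # LLM. Model choice, prompt wording and the example summary are invented.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      summary = "Pt admitted w/ CAP, tx IV abx, afebrile x48h, d/c on PO amox."

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative choice, not the study's model
          messages=[
              {"role": "system", "content": "Rewrite discharge summaries at a "
               "sixth-grade reading level. Expand abbreviations. Do not add facts."},
              {"role": "user", "content": summary},
          ],
      )
      print(response.choices[0].message.content)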
  17. News Article
    Britain’s hard-pressed carers need all the help they can get. But that should not include using unregulated AI bots, according to researchers who say the AI revolution in social care needs a hard ethical edge.
    A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study.
    “If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” Green said. “That personal data could be generated and revealed to somebody else.” She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard.
    But there were also potential benefits to AI, Green added. “It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that.”
    Read full story
    Source: The Guardian, 10 March 2024
  18. Content Article
    In this multi-centre randomised clinical vignette survey study, published in JAMA, diagnostic accuracy significantly increased by 4.4% when clinicians reviewed a patient clinical vignette with standard AI model predictions and model explanations compared with baseline accuracy. However, accuracy significantly decreased by 11.3% when clinicians were shown systematically biased AI model predictions and model explanations did not mitigate the negative effects of such predictions.
  19. Content Article
    The use of AI in medical devices, patient screening and other areas of healthcare is increasing. This Medscape article looks at some of the risks associated with the use of AI in healthcare. It outlines the difficulties regulators face in monitoring adaptive systems, the inbuilt bias that can exist in algorithms and cybersecurity and liability issues.
  20. Content Article
    ECRI's Top 10 Health Technology Hazards for 2024 list identifies the potential sources of danger ECRI believe warrant the greatest attention this year and offers practical recommendations for reducing risks. Since its creation in 2008, this list has supported hospitals, health systems, ambulatory surgery centres and manufacturers in addressing risks that can impact patients and staff. 
  21. Content Article
    In November 2023, the UK hosted the first global summit on artificial intelligence (AI) safety at Bletchley Park, the country house and estate in southern England that was home to the team that deciphered the Enigma code. Around 150 representatives from national governments, industry, academia and civil society attended, and the focus was on frontier AI: technologies on the cutting edge and beyond. In this Lancet article, Talha Burki looks at the implications of AI for healthcare in the UK and how it may be used in medical devices and service provision. The piece highlights the risks in terms of regulation and accountability that are inherent in the use of AI.
  22. Content Article
    This study in JAMA Psychiatry aimed to assess whether multivariate machine learning approaches can identify the neural signature of major depressive disorder in individual patients. The study was conducted as a case-control neuroimaging study that included 1,801 patients with depression and healthy controls. The results showed that the best machine learning algorithm only achieved a diagnostic classification accuracy of 62% across major neuroimaging modalities. The authors concluded that although multivariate neuroimaging markers increase predictive power compared with univariate analyses, no depression biomarker could be uncovered that is able to identify individual patients.
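    The 62% figure is a cross-validated classification accuracy. The evaluation pattern (though not the study’s pipeline) looks like the sketch below, assuming scikit-learn; with random features, accuracy sits near the 50% chance level against which such biomarker claims are judged.
      # Sketch of the evaluation pattern: cross-validated case-control
      # classification from multivariate features. Data here are random, so
      # accuracy should hover near chance (0.5).
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(42)
      X = rng.normal(size=(200, 100))    # 200 subjects x 100 imaging features
      y = rng.integers(0, 2, size=200)   # 0 = control, 1 = case

      acc = cross_val_score(LinearSVC(), X, y, cv=5, scoring="accuracy")
      print(f"mean accuracy: {acc.mean():.2f}")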
  23. Content Article
    Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification and allocation of resources. However, bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritised groups and other historically marginalised populations such as individuals with lower incomes. This study aimed to provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms, in order to promote health and health care equity. The authors suggested five guiding principles:
      • Promote health and health care equity during all phases of the health care algorithm life cycle
      • Ensure health care algorithms and their use are transparent and explainable
      • Authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness
      • Explicitly identify health care algorithmic fairness issues and trade-offs
      • Establish accountability for equity and fairness in outcomes from health care algorithms
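    One common way to act on the fourth principle is to audit a model’s error rates per demographic group, since unequal missed diagnoses are a typical route by which biased algorithms harm marginalised patients. Below is a minimal sketch with invented data, assuming scikit-learn.
      # Fairness-audit sketch (synthetic data): compare false negative rates
      # across demographic groups for a simple clinical risk model.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      X = rng.normal(size=(1000, 5))              # synthetic clinical features
      group = rng.integers(0, 2, size=1000)       # synthetic demographic group
      y = (X[:, 0] + rng.normal(size=1000)) > 0   # needs intervention (yes/no)

      pred = LogisticRegression().fit(X, y).predict(X)
      for g in (0, 1):
          needed = (group == g) & y               # group members who needed care
          fnr = 1 - pred[needed].mean()           # share the model missed
          print(f"group {g}: false negative rate {fnr:.2f}")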
  24. Content Article
    This systematic review conducted for the Agency for Healthcare Research and Quality (AHRQ) aimed to examine the evidence on whether and how healthcare algorithms exacerbate, perpetuate or reduce racial and ethnic disparities in access to healthcare, quality of care and health outcomes. It also examined strategies that mitigate racial and ethnic bias in the development and use of algorithms. The results showed that algorithms potentially perpetuate, exacerbate and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (for example, kidney transplant allocation) or disparities in care (for example, prostate cancer screening that historically led to Black men receiving more low-yield biopsies).
  25. Content Article
    There is a direct correlation between safety event management practices and care quality outcomes. The right safety management tools, supported by a shared perception and tolerance of risk, will help organisations go beyond reporting event data to improve safety culture.