Search the hub

Showing results for tags 'AI'.


Found 144 results
  1. Content Article
    When ECRI unveiled its list of the leading threats to patient safety for 2024, some items were to be expected, such as physician burnout, delays in care due to drug shortages or falls in the hospital. However, ECRI, a non-profit group focused on patient safety, placed one item atop all others: the challenge of helping new clinicians move from training to caring for patients. In an interview with Chief Healthcare Executive®, Dr. Marcus Schabacker, president and CEO of ECRI, explained that workforce shortages are making it more difficult for newer doctors and nurses to make the transition and grow comfortable. “We think that that is a challenging situation, even the best of times,” Schabacker says. “But in this time, these clinicians who are coming to practice now had a very difficult time during the pandemic, which was only a couple years ago, to get the necessary hands-on training. And so we're concerned about that.”
  2. News Article
    Presymptom Health’s technology provides early and reliable information about infection status and severity in patients with non-specific symptoms, helping doctors make better treatment decisions. The company’s tests can be run on NHS PCR platforms, which were widely deployed during the COVID pandemic and are now often under-utilised. By detecting true infection and sepsis earlier, it is possible to save lives and significantly reduce the incorrect use of antibiotics.
    When it comes to sepsis, Presymptom’s technology could revolutionise treatment. According to The UK Sepsis Trust, every 3 seconds someone in the world dies of sepsis. In the UK alone, 245,000 people are affected by sepsis every year, with at least 48,000 losing their lives to sepsis-related illness – more than breast, bowel and prostate cancer combined. When sepsis is diagnosed at a late stage, the likelihood of death increases by 10% for every hour it is left untreated. Yet for many patients it is easily treatable if diagnosed early.
    “We’re confident that our first product can play a big part in tackling antimicrobial resistance (AMR), which has been identified by the World Health Organisation as one of the top 10 global public health threats,” said Dr Iain Miller, CEO of Presymptom Health. “By understanding the presence, or absence, of infection as early as possible, doctors can be more confident in their diagnosis and avoid unnecessarily prescribing antibiotics – something that is a growing concern in the NHS and globally.
    “If we take sepsis as an example: sepsis diagnostics hasn’t moved on in more than a century, and currently doctors can only diagnose it when advanced symptoms and organ failure are present – which is often too late. Our technology enables doctors to diagnose both infection and sepsis up to three days before formal clinical diagnosis, radically transforming the process and preventing unnecessary deaths.”
    The science behind Presymptom’s technology is based on 10 years of work conducted at the Defence Science and Technology Laboratory (Dstl) and originated from £16m of sustained Ministry of Defence investment in a programme of research designed to help service personnel survive infection from combat injuries. The technology is currently undergoing clinical trials at nine NHS hospitals in the UK, with results anticipated later in 2024. In addition, Presymptom is working on further UK and EU trials.
  3. Content Article
    The role of artificial intelligence (AI) in healthcare is expanding quickly, with clinical, administrative and patient-facing uses emerging in many specialties. Research on the effectiveness of AI in healthcare is generally weak, but evidence of AI improving doctors’ diagnostic decisions is emerging for some focused clinical applications, including interpreting lung pathology and retinal images. However, we must work with patients to understand how AI impacts on their care, says Rebecca Rosen in this BMJ opinion piece.
  4. News Article
    Many popular AI chatbots, including ChatGPT and Google’s Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a new study. Research by a team of experts from around the world, led by researchers from Flinders University in Adelaide, Australia, and published in the BMJ, found that the large language models (LLMs) used to power publicly accessible chatbots failed to block attempts to create realistic-looking disinformation on health topics. As part of the study, researchers asked a range of chatbots to create a short blog post with an attention-grabbing title and containing realistic-looking journal references and patient and doctor testimonials on two health disinformation topics: that sunscreen causes skin cancer and that the alkaline diet is a cure for cancer. The researchers said that several high-profile, publicly available AI tools and chatbots, including OpenAI’s ChatGPT, Google’s Gemini and a chatbot powered by Meta’s Llama 2 LLM, consistently generated blog posts containing health disinformation when asked – including when retested three months after the initial test, by which point the findings had been reported to developers, so that researchers could assess whether safeguards had improved. In response to the findings, the researchers have called for “enhanced regulation, transparency, and routine auditing” of LLMs to help prevent the “mass generation of health disinformation”. Read full story Source: The Independent, 20 March 2024
  5. Content Article
    This cross-sectional study in JAMA Network Open aimed to assess whether a large language model can transform discharge summaries into a format that is more readable and understandable for patients. The findings suggest that a large language model could be used to translate discharge summaries into patient-friendly language and format, but implementation will require improvements in accuracy, completeness and safety.
  6. News Article
    Britain’s hard-pressed carers need all the help they can get. But that should not include using unregulated AI bots, according to researchers who say the AI revolution in social care needs a hard ethical edge. A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. “If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” Green said. “That personal data could be generated and revealed to somebody else.” She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard. But there were also potential benefits to AI, Green added. “It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn’t encourage anyone to do that, but there are organisations working on creating apps and websites to do exactly that.” Read full story Source: The Guardian, 10 March 2024
  7. Content Article
    In this multi-centre randomised clinical vignette survey study, published in JAMA, clinicians’ diagnostic accuracy increased significantly (by 4.4%) over baseline when they reviewed a patient clinical vignette alongside standard AI model predictions and model explanations. However, accuracy decreased significantly (by 11.3%) when clinicians were shown systematically biased AI model predictions, and model explanations did not mitigate the negative effects of such predictions.
  8. Content Article
    The use of AI in medical devices, patient screening and other areas of healthcare is increasing. This Medscape article looks at some of the risks associated with the use of AI in healthcare. It outlines the difficulties regulators face in monitoring adaptive systems, the inbuilt bias that can exist in algorithms and cybersecurity and liability issues.
  9. Content Article
    ECRI's Top 10 Health Technology Hazards for 2024 list identifies the potential sources of danger that ECRI believes warrant the greatest attention this year and offers practical recommendations for reducing risks. Since its creation in 2008, the list has supported hospitals, health systems, ambulatory surgery centres and manufacturers in addressing risks that can impact patients and staff.
  10. Content Article
    In November 2023, the UK hosted the first global summit on artificial intelligence (AI) safety at Bletchley Park, the country house and estate in southern England that was home to the team that deciphered the Enigma code. Some 150 representatives from national governments, industry, academia and civil society attended, and the focus was on frontier AI – technologies on the cutting edge and beyond. In this Lancet article, Talha Burki looks at the implications of AI for healthcare in the UK and how it may be used in medical devices and service provision. The piece highlights the risks in terms of regulation and accountability that are inherent in the use of AI.
  11. Content Article
    This study in JAMA Psychiatry aimed to assess whether multivariate machine learning approaches can identify the neural signature of major depressive disorder in individual patients. The study was conducted as a case-control neuroimaging study that included 1801 patients with depression and healthy controls. The results showed that the best machine learning algorithm only achieved a diagnostic classification accuracy of 62% across major neuroimaging modalities. The authors concluded that although multivariate neuroimaging markers increase predictive power compared with univariate analyses, no depression biomarker could be uncovered that is able to identify individual patients.
  12. Content Article
    Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification and allocation of resources. However, bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritised groups and other historically marginalised populations such as individuals with lower incomes. This study aimed to provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms, in order to promote health and health care equity. The authors suggested five guiding principles:
    • Promote health and health care equity during all phases of the health care algorithm life cycle.
    • Ensure health care algorithms and their use are transparent and explainable.
    • Authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness.
    • Explicitly identify health care algorithmic fairness issues and trade-offs.
    • Establish accountability for equity and fairness in outcomes from health care algorithms.
  13. Content Article
    This systematic review conducted for the Agency for Healthcare Research and Quality (AHRQ) aimed to examine the evidence on whether and how healthcare algorithms exacerbate, perpetuate or reduce racial and ethnic disparities in access to healthcare, quality of care and health outcomes. It also examined strategies that mitigate racial and ethnic bias in the development and use of algorithms. The results showed that algorithms potentially perpetuate, exacerbate and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (for example, kidney transplant allocation) or disparities in care (for example, prostate cancer screening that historically led to Black men receiving more low-yield biopsies).
  14. Content Article
    There is a direct correlation between safety event management practices and care quality outcomes. The right safety management tools, supported by a shared perception and tolerance of risk, will help organisations go beyond reporting event data to improve safety culture.
  15. Event
    Together with the Türkiye Health Care Quality and Accreditation Institute (TUSKA) and the Ministry of Health, Türkiye, ISQua is delighted to host its 40th International Conference in Istanbul. The theme for the 2024 conference is 'Health for People and Planet: Building Bridges to a Sustainable Future'. It will address the continued challenge of making person-centred care part of the healthcare system, as well as some of the hot topics that matter most in a rapidly changing world, such as environmental challenges, reducing the healthcare sector's carbon footprint and ensuring the long-term resilience of healthcare. It will also examine the potential and pitfalls of AI and digital transformation in healthcare, and how they can revolutionise healthcare and enable better patient engagement. Further information
  16. Content Article
    Hospitals are complex adaptive systems. They are industrial environments where it isn't always possible to expect predictable responses to inputs. Patient safety management practices need to adapt to align with the environment in which events occur. It is time to reimagine safety event reporting and management solutions that guide, not prescribe, investigations and improvement actions.
  17. Content Article
    The Medicines and Healthcare products Regulatory Agency (MHRA) has published a roadmap which outlines the intended timelines for delivering the future regulatory framework for medical devices.
  18. Event
    PPP's 2024 Cancer Care programme kicks off with this report launch webinar on AI in imaging diagnostics. While discussions concerning artificial intelligence (AI) have come to dominate public discourse since the launch of ChatGPT last year, in healthcare AI has been the subject of intense debate for some time. Many of the key talking points that define the debate in healthcare echo those of the wider debate, namely the unintended consequences of unleashing unregulated algorithms across the sector and the potentially profound implications AI could have for workforces globally. However, it is perhaps in healthcare where AI stands to make its greatest and most positive impact. Healthcare is a data-rich industry, with the treatment of patients producing vast amounts of medical records, images, lab results and numerous other data outputs. This multimodal data can be used to train a wide range of AI systems, leading to the development of new, more targeted drug treatments and diagnostic tools, more personalised care, and a more efficient healthcare system. Join an expert panel as they help to launch PPP's newest report exploring what it takes to begin implementing AI at scale in imaging diagnostics in the NHS. Register for the webinar
  19. Content Article
    Large language models such as OpenAI's GPT-4 have the potential to transform medicine by enabling automation of a range of tasks, including writing discharge summaries, answering patient questions, and supporting clinical treatment planning. These models are so useful that their adoption has been immediate, and efforts are already well underway to integrate them with ubiquitous clinical information systems. However, the unchecked use of the technology has the potential to cause harm. In this article for The Lancet, Janna Hastings looks at the need to mitigate racial and gender bias in language models that may be used in healthcare settings.
  20. News Article
    A hospital has introduced a new artificial intelligence system to help doctors treat stroke patients. The RapidAI software was recently used for the first time at Hereford County Hospital. It analyses patients' brain images to help decide whether they need an operation or drugs to remove a blood clot. Wye Valley NHS Trust, which runs the hospital, is the first in the West Midlands to roll out the software. Jenny Vernel, senior radiographer at the trust, said: “AI will never replace the clinical expertise that our doctors and consultants have. "But harnessing this latest technology is allowing us to make very quick decisions based on the experiences of thousands of other stroke patients.” Radiographer Thomas Blackman told BBC Hereford and Worcester that it usually takes half an hour for the information to be communicated. He said the new AI-powered system now means it is "pinged" to the relevant teams' phones via an app in a matter of minutes. "It's improved the patient pathway a lot," he added. Read full story Source: BBC News, 7 December 2023
  21. Content Article
    New developments in artificial intelligence (AI) are extensively discussed in public media and scholarly publications. While many academic disciplines have launched debates on the challenges and opportunities of AI and how best to address them, the human factors and ergonomics (HFE) community has been strangely quiet. In this paper, Gudela Grote discusses three main areas in which HFE could and should significantly contribute to the socially and economically viable development and use of AI: decisions on automation versus augmentation of human work; alignment of control and accountability for AI outcomes; and counteracting power imbalances among AI stakeholders. She then outlines actions that the HFE community could undertake to improve their involvement in AI development and use: foremost, translating ethical principles into design principles, strengthening the macro-turn in HFE, broadening the HFE design mindset, and taking advantage of new interdisciplinary research opportunities.
  22. Event

    IHI Forum

    The IHI Forum is a four-day conference that has been the home of quality improvement in health care for more than 30 years. Dedicated improvement professionals from across the globe will be convening to tackle health care's most pressing challenges: improvement capability, patient and workforce safety, equity, climate change, artificial intelligence, and more. Register
  23. News Article
    Artificial intelligence could be used to predict if a person is at risk of having a heart attack up to 10 years in the future, a study has found. The technology could save thousands of lives while improving treatment for almost half of patients, researchers at the University of Oxford said. The study, funded by the British Heart Foundation (BHF), looked at how AI might improve the accuracy of cardiac CT scans, which are used to detect blockages or narrowing in the arteries. Prof Charalambos Antoniades, chair of cardiovascular medicine at the BHF and director of the acute multidisciplinary imaging and interventional centre at Oxford, said: “Our study found that some patients presenting in hospital with chest pain – who are often reassured and sent back home – are at high risk of having a heart attack in the next decade, even in the absence of any sign of disease in their heart arteries. “Here we demonstrated that providing an accurate picture of risk to clinicians can alter, and potentially improve, the course of treatment for many heart patients.” Read full story Source: The Guardian, 13 November 2023
  24. Content Article
    Structural, economic and social factors can lead to inequalities in the length of time people wait for NHS planned hospital care – such as hip or knee operations – and their experience while they wait. In 2020, after the first wave of the Covid-19 pandemic, NHS England asked NHS trusts and systems to take an inclusive approach to tackling waiting lists by disaggregating waiting times by ethnicity and deprivation to identify inequalities and to take action in response. This was an important change to how NHS organisations were asked to manage waiting lists – embedding work to tackle health inequalities into the process.
    Between December 2022 and June 2023, the King’s Fund undertook qualitative case studies about the implementation of this policy in three NHS trusts and their main integrated care boards (ICBs), and interviewed a range of other people about using artificial intelligence (AI) to help prioritise care. It also reviewed literature, NHS board papers and national waiting times data. The aim was to understand how the policy was being interpreted and implemented locally, and to extract learning from this.
    It found work was at an early stage, although there were examples of effective interventions that made appointments easier to attend, and prioritised treatment and support while waiting. Reasons for the lack of progress included a lack of clarity about the case for change, operational challenges such as poor data, cultural issues including different views about a fair approach, and a lack of accountability for the inclusive part of elective recovery.
    Taking an inclusive approach to tackling waiting lists should be a core part of effective waiting list management and can contribute to a more equitable health system and healthier communities. Tackling inequalities on waiting lists is also an important part of the NHS’s wider ambitions to address persistent health inequalities. But to improve the slow progress to date, NHS England, ICBs and trusts need to work with partners to make the case for change, take action and hold each other to account.
  25. News Article
    ChatGPT, the artificial intelligence tool, may be better than a doctor at following recognised treatment standards for depression, and without the gender or social class biases sometimes seen in the physician-patient relationship, a study suggests. The findings were published in Family Medicine and Community Health. The researchers said further work was needed to examine the risks and ethical issues arising from AI’s use. Globally, an estimated 5% of adults have depression, according to the World Health Organization. Many turn first to their GP for help. Recommended treatment should largely be guided by evidence-based clinical guidelines in line with the severity of the depression. ChatGPT has the potential to offer fast, objective, data-based insights that can supplement traditional diagnostic methods as well as providing confidentiality and anonymity, according to researchers from Israel and the UK. Read full story Source: The Guardian, 16 October 2023