Showing results for tags 'AI'.


Found 274 results
  1. Content Article
    The conversation around Artificial Intelligence in healthcare is evolving. We are moving beyond the hype of algorithms and automation, and beginning to ask the more important questions: How do we ensure AI serves our people and patients? How do we adopt it in ways that strengthen the health and care system as a whole? What kind of leadership do we need to make that happen? More than that, how do we harness the reality of AI now even as we set a path for AI in the future?  It is no longer enough to explore what AI can do. We must focus on how it should be done and why. Dr Nnenna Osuji explores the shift from 'what' to 'how' in her blog. Related reading on the hub: AI in London healthcare: The reality behind the hype
  2. Content Article
    Artificial intelligence (AI) is often portrayed as having the power to transform the delivery of care, improve patient outcomes and ease pressures on an overstretched health system. While its potential is undeniable, the conversation around AI can sometimes become tangled with hype, complexity and uncertainty. The reality is that AI is not a magic bullet, but another tool in our broader mission to solve the pressing healthcare challenges we are facing and redesign a more resilient health system. London is uniquely positioned to lead the way in transforming healthcare delivery, access and experience. With world-class medical institutions, top-tier universities and a thriving tech ecosystem, the city is at the forefront of innovation. Its rich, diverse multimodal data resources could further help to advance AI development. Many London NHS providers are already pioneering AI adoption, piloting solutions in diagnostics, workflow automation and intelligent proactive care, exploring the art of the possible for AI to drive smarter, more efficient and more patient-centred services. However, every deployment of AI needs to be purposeful, evidence-based and aligned with real system needs. This report from UCL Partners Health Innovation aims to cut through the noise around AI and offer a better understanding of its current adoption within the London system. It presents insights from across different providers into where AI is making an impact, the obstacles it is facing, and what needs to change to unlock its potential.
  3. Event
    This joint webinar, hosted by the Digital Care Hub and the Homecare Association, will be an insightful session on the safe use of AI in homecare. The discussion will include:
    • Provider insights: hear from industry experts and social care providers on the latest trends and challenges.
    • Innovative AI applications: discover exciting examples of AI transforming homecare.
    • Ethical and safety considerations: learn about the key ethical and safety aspects to consider.
    • Practical tips: get top tips for implementing AI effectively in your practice.
    The session is designed for adult social care providers in England and is aimed at people who make decisions about the use of technology in care services.
    Register
  4. News Article
    California-based Stanford Health Care is piloting an internally developed, AI-backed software tool designed to revolutionize clinician interaction with the electronic health record (EHR). Nigam Shah, chief data science officer at Stanford Health Care, is leading the development team for ChatEHR, which allows clinicians to ask questions, request summaries and pull specific information from a patient’s medical record. ChatEHR is built directly into Stanford’s EHR to fit clinical workflow. The pilot is available to a small cohort of 33 physicians, nurses and physician assistants. The technology is secure and designed for information gathering, not medical advice. ChatEHR, which has been in development since 2023, facilitates a more streamlined and efficient way for clinicians to interact with patient records. “This is a unique instance of integrating [large language model] capabilities directly into clinicians’ practice and workflow,” said Michael Pfeffer, MD, chief information and digital officer at Stanford Health Care and School of Medicine, in a news release. “We’re thrilled to bring this to the workforce at Stanford Health Care.” Stanford is still working on automation to evaluate tasks, such as determining whether to transfer patients between hospitals or units. Dr Shah and his team are using MedHELM, an open-source framework for real-world large language model evaluation, to evaluate ChatEHR. His goal is to scale ChatEHR to all clinicians, and the team is working on more features to ensure accuracy. “We’re rolling this out in accordance with our responsible AI guidelines, not only ensuring accuracy and performance, but making sure we have the educational resources and technical support available to make ChatEHR usable to our workforce,” said Dr Shah in the release. Read full story Source: Becker's Health IT, 6 June 2025
  5. News Article
    NHS England has paused a major AI project after concerns were raised about how the primary care records of 57 million people were used to train it. The Joint General Practice IT Committee (JGPITC) – a collaboration between the Royal College of General Practitioners and the British Medical Association – wrote to NHSE last month to question the lawfulness of the Foresight AI project. Foresight is the result of a data sharing agreement between NHSE and a consortium of researchers brought together by the British Heart Foundation. It will be used to predict potential future outcomes for patients, which could help to identify opportunities for early intervention. In its letter, the JGPITC said it was “very surprised and extremely concerned” to learn of the project, which used the GPES Data for Pandemic Planning and Research dataset to train the AI model. The committee said it had “serious concerns about the lawfulness of the data use for this project” and the “apparent absence of strict governance arrangements”. An NHSE spokesperson said: “Maintaining patient privacy is central to this project and we are grateful to the Joint GP IT Committee for raising its concerns and meeting with us to discuss the strict governance and controls in place to ensure patients’ data remains secure.” Read full story (paywalled) Source: HSJ, 4 June 2025
  6. News Article
    Despite ongoing efforts to improve patient safety, it’s estimated that at least 1 in 20 patients still experience medical mistakes in the health care system. One of the most common categories of mistakes is medication errors, where for one reason or another, a patient is given either the wrong dose of a drug or the wrong drug altogether. In the US, these errors injure approximately 1.3 million people a year and result in one death each day, according to the World Health Organization. In response, many hospitals have introduced guardrails, ranging from colour coding schemes that make it easier to differentiate between similarly named drugs, to barcode scanners that verify that the correct medicine has been given to the correct patient. Despite these attempts, medication mistakes still occur with alarming regularity. Dr Kelly Michaelsen, an assistant professor of anaesthesiology and pain medicine at the University of Washington, wondered whether emerging technologies could help. As both a medical professional and a trained engineer, it struck her that spotting an error about to be made, and alerting the anaesthesiologists in real time, should be within the capabilities of AI. “I was like, ‘This seems like something that shouldn’t be too hard for AI to do,’” she said. “Ninety-nine percent of the medications we use are these same 10-20 drugs, and so my idea was that we could train an AI to recognize them and act as a second set of eyes.” Michaelsen focused on vial swap errors, which account for around 20% of all medication mistakes. All injectable drugs come in labelled vials, which are then transferred to a labelled syringe on a medication cart in the operating room. But in some cases, someone selects the wrong vial, or the syringe is labelled incorrectly, and the patient is injected with the wrong drug. Michaelsen thought such tragedies could be prevented through “smart eyewear” — adding an AI-powered wearable camera to the protective eyeglasses worn by all staff during operations. Working with her colleagues in the University of Washington computer science department, she designed a system that can scan the immediate environment for syringe and vial labels, read them and detect whether they match up. In a study published late last year, Michaelsen reported that the device detected vial swap errors with 99.6% accuracy. All that’s left is to decide the best way to relay warning messages, and the device could be ready for real-world use, pending Food and Drug Administration clearance. Read full story Source: NBC News, 25 May 2025
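    The final check described above (reading the vial label and the syringe label, then flagging any disagreement) reduces to a simple string comparison once the vision component has extracted the text. The Python sketch below illustrates that matching step only; the label-reading stage is omitted and the example labels are hypothetical stand-ins, since the article does not describe the system's internals.

      # Minimal sketch of the vial-swap check: compare the drug named on the
      # vial with the drug named on the syringe and raise an alert on mismatch.
      # In the real system a camera and text-recognition model would supply
      # these strings; here they are hard-coded for illustration.

      def normalise(label: str) -> str:
          """Lower-case and collapse whitespace so labels compare reliably."""
          return " ".join(label.lower().split())

      def vial_swap_alert(vial_label: str, syringe_label: str) -> bool:
          """Return True when the two labels disagree (a potential vial swap)."""
          return normalise(vial_label) != normalise(syringe_label)

      # Example: a syringe labelled for one drug drawn from a vial of another.
      if vial_swap_alert("Ondansetron 4 mg/2 mL", "Dexamethasone 4 mg/mL"):
          print("ALERT: syringe label does not match vial label")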
  7. Content Article
    With hopes for the far-reaching impact of artificial intelligence (AI) on health care remaining as strong as ever, the NIHR Rapid Service Evaluation Team have conducted a review of the literature on the role of AI in radiology diagnostics. Emma Dodsworth and Rachel Lawrence describe the three findings from the work that stood out the most.
  8. Content Article
    The NHS and social care systems in England are on a journey towards digitalisation. One particular technology that is generating high levels of both excitement and scepticism is artificial intelligence (AI). While many are excited by the opportunities offered by AI, others may be feeling more doubtful or unsure. This long read from the King's Fund provides context for how this technology is, or could be, applied.
    Further reading on the hub:
    • How do we harness technology responsibly to safeguard and improve patient care?
    • Putting patients at the heart of digital health
  9. Content Article
    Leadership Futures recently published a report, 'Harnessing technology for human progress: Advancing into Industry 5.0', which is driven by a bold ambition: to transform organisations worldwide through technological advancements. In this blog, Caroline Beardall looks at the implications of this for healthcare and suggests five actions that organisations should take to ensure we achieve the benefits of technology while keeping patient safety at the forefront of an evolving landscape.
    The report provides a valuable framework for integrating technology with human-centred leadership, which is highly applicable to advancing patient safety in health and care. Its vision of Industry 5.0 as a collaborative human-AI partnership offers a route to reduce errors, enhance clinician capacity and improve patient outcomes. Realising these benefits, however, requires ethical and inclusive implementation strategies that address the complexities and risks unique to health and care settings. It throws up three fundamental challenges:
    • How can healthcare leaders ensure AI tools are safe to use and that clinical staff can trust them?
    • Who should be responsible if an AI system makes a mistake that affects a patient?
    • How can healthcare organisations use technology to work better without losing the importance of human interaction and the skills needed for high levels of patient satisfaction and safety?
    To answer these questions, and to deepen the discussion on harnessing technology responsibly to safeguard and improve patient care, there are some actions we can take to build on the report and begin to gain evidence and experience specific to healthcare. As the landscape of healthcare shifts and evolves, we should consider applying the following five actions (with examples of how to do this) to achieve the maximum benefits from technology for patient safety.
    1. Foster collective, collaborative leadership across boundaries
    Leaders should actively promote cooperation and shared responsibility across organisational and professional boundaries, focusing on the overall patient journey rather than siloed departmental goals. This aligns with the report’s emphasis on human-machine collaboration and the need for integrative leadership cultures that support safe, seamless care delivery. By working collectively, leaders can ensure technology is implemented with broad input and oversight, reducing risks and enhancing patient safety.
    • Implement interdisciplinary collaboration practices: organise regular team meetings involving diverse healthcare professionals to discuss patient care holistically, ensuring all voices contribute to decision making.
    • Create shared goals and aligned metrics: develop common objectives focused on patient safety and quality that unify departments and reduce siloed working.
    • Lead by example: demonstrate collaborative behaviours and openness to input, encouraging a culture of trust and teamwork.
    2. Embed ethical, human-centred use of technology
    Leaders must champion ethical principles in technology adoption, ensuring AI and digital tools augment rather than replace human judgment and empathy. This includes rigorous validation of new technologies, transparency in AI decision making and ongoing monitoring to prevent harm or bias. Prioritising patient experience and human values in technology deployment safeguards safety and trust.
    • Prioritise transparency and clinician involvement: engage frontline staff early in AI and technology design and deployment to ensure tools meet clinical needs and ethical standards.
    • Establish continuous monitoring and feedback loops: use data and user feedback to identify and mitigate risks or biases in technology that could impact patient safety.
    • Promote ethical leadership training: equip leaders with skills to balance innovation with patient experience and accountability.
    3. Develop and support workforce readiness and engagement
    Preparing staff to work effectively alongside new technologies is vital. Leaders should invest in training that builds digital literacy, critical thinking and resilience, while also fostering a positive work climate where staff feel valued and supported. Engaged and confident clinicians are better able to use technology safely and maintain high standards of care.
    • Invest in targeted training and digital upskilling: provide contextual, in-app guidance and interactive training to help staff adopt new technologies confidently and efficiently.
    • Foster a culture of psychological safety and empowerment: encourage open discussion, honest feedback and staff involvement in decision making to build trust and resilience.
    • Practise empathetic leadership: focus on the emotional and professional needs of staff to reduce burnout and improve engagement.
    4. Set clear, aligned objectives focused on quality and safety
    Leadership should establish clear, challenging and aligned goals at every level that prioritise patient safety and quality improvement over mere efficiency or target-driven metrics. This clarity helps reduce staff stress and confusion, enabling teams to focus on delivering compassionate, safe care supported by technology.
    • Communicate clear expectations and priorities: use consistent, transparent communication to align teams around patient safety goals and reduce ambiguity.
    • Implement continuous feedback and learning systems: regularly review performance data and patient feedback to refine objectives and improve care quality.
    • Balance efficiency with human factors: ensure operational goals do not compromise critical human skills or patient-centred care.
    5. Champion diversity, inclusion and accountability in leadership
    Inclusive leadership practices that promote equality and diversity are essential to fostering innovation and ethical decision making in healthcare technology adoption. Leaders must also clarify accountability frameworks for technology-related decisions and errors, ensuring responsibility is shared and transparent to maintain patient safety.
    • Promote inclusive leadership practices: value diverse perspectives and foster equity to enhance innovation and ethical decision making.
    • Clarify accountability frameworks: define roles and responsibilities clearly, especially concerning technology-related decisions and errors, to maintain trust and safety.
    • Model human-centred leadership traits: practise self-awareness, compassion and mindfulness to create cultures of excellence, trust and caring.
    By integrating these strategies, human-centred leaders can effectively translate the insights from the Leadership Futures report into practical actions that improve patient safety, staff satisfaction and overall health system resilience. This approach embraces complexity and change as opportunities, not obstacles, enabling sustainable progress in better health and care delivery.
    Further reading:
    • Amelia N. 6 Effective Leadership Strategies for Healthcare in 2025. Edstellar, 31 December 2024.
    • West M, et al. Leadership in Healthcare: a Summary of the Evidence Base. King's Fund; Faculty of Medical Leadership and Management; Center for Creative Leadership, 2015.
    • LeClerc L, Kennedy K, Campis S. Human-Centered Leadership in Health Care: An Idea Whose Time Has Come. Nursing Administration Quarterly 2020;44(2):117-26.
  10. Content Article
    Large Language Models (LLMs) are transforming the way in which people interact with artificial intelligence. This study explores how safety professionals might use LLMs for a Functional Resonance Analysis Method (FRAM) analysis. The authors use interactive prompting with Google Bard/Gemini and ChatGPT to carry out a FRAM analysis on examples from healthcare and aviation. The exploratory findings suggest that LLMs afford safety analysts the opportunity to enhance the FRAM analysis by facilitating initial model generation and offering different perspectives. Responsible and effective utilisation of LLMs requires careful consideration of their limitations as well as their abilities. Human expertise is crucial, both in validating the output of the LLM and in developing meaningful interactive prompting strategies to take advantage of LLM capabilities such as self-critiquing from different perspectives. Further research is required on effective prompting strategies, and to address ethical concerns.
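    To make the idea of interactive prompting concrete, the sketch below shows one way an analyst might structure such a session: first asking the model to draft FRAM functions for a process, then asking it to critique its own draft from another professional's perspective. This is an illustrative sketch only; the ask() wrapper, the scenario and the prompt wording are assumptions, not the prompting strategy used in the study.

      # Illustrative two-step prompting loop for a FRAM analysis.
      # ask() is a hypothetical stand-in for a call to a chat model
      # (the study used Google Bard/Gemini and ChatGPT); here it simply
      # echoes the prompt so the sketch runs end to end.

      def ask(prompt: str) -> str:
          print(f"--- prompt ---\n{prompt}\n")
          return "<model response>"

      scenario = "handover of a deteriorating patient from ward to ICU"

      # Step 1: ask the model to generate an initial FRAM model.
      draft = ask(
          f"List the FRAM functions involved in: {scenario}. For each "
          "function, describe its six aspects: input, output, precondition, "
          "resource, control and time."
      )

      # Step 2: ask the model to self-critique from a different perspective.
      critique = ask(
          "Acting as an ICU nurse, review this draft FRAM model and point "
          f"out missing functions or unrealistic couplings:\n{draft}"
      )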
  11. Content Article
    Physician burnout persists in the USA. In part, this burnout is believed to be driven by the Electronic Health Record (EHR) and its fraught role in the clinical work of physicians. Artificial intelligence (AI)-enabled healthcare technologies are often promoted on the basis of their promise to reduce burnout by introducing efficiencies into clinical work, particularly related to EHR utilisation and documentation. Where documentation is perceived as the problem, AI scribes are offered as the solution. This essay looks closely at existing studies of AI scribes in clinical context and draws upon experience and understanding of healthcare delivery and the EHR to anticipate how AI may relate to provider burnout. The authors find that it is premature to assert that AI tools will reduce physician burnout, and that it is not a foregone conclusion that AI-enabled healthcare technologies, in their current state and application, will lead to improved healthcare delivery and reduced burnout. Instead, this is an open question that demands rigorous evaluation and high standards of evidence before we restructure the work of physicians and redefine the care of our patients. Considering the integration of AI scribes into Learning Health Systems thus becomes a starting point for understanding the challenges of safely adopting AI tools more generally, with attention to the healthcare workforce and patients.
  12. News Article
    Should health systems tell patients when they’re using AI? UC San Diego Health says yes. The health system uses a generative AI tool from Epic that drafts MyChart patient portal messages for providers. But UC San Diego Health notifies patients when the responses are drafted by AI with the disclosure: “Part of this message was generated automatically and was reviewed and edited by [name of physician],” according to a May 9 NEJM AI article. Members of the organisation’s AI governance committee debated whether it was necessary, as providers use other documentation shortcuts and generative AI could elicit concern from patients, but ultimately came to the same conclusion. “Transparency is necessary, as AI-assisted replies may stand out to patients — especially if they differ from clinicians’ usual communication style,” wrote the authors, UC San Diego Health Chief Medical Information Officer Marlene Millen, MD, Professor Ming Tai-Seale, MD, and Chief Clinical and Innovation Officer Christopher Longhurst, MD. Lack of transparency “could lead to patients questioning the authenticity of the replies, potentially damaging the crucial doctor-patient trust,” the authors wrote. “With tens of thousands of physicians nationwide using AI to support patient communication, now is the time to begin transparent disclosure.” Read full story Source: Becker's Health IT, 12 May 2025
  13. News Article
    New figures from the NHS reveal that 31.4 million GP appointments were delivered in March 2025, a 6.1% increase on the same period last year and nearly 20% more than before the pandemic. This increase, the NHS claims, is due to GP practices adopting digital services to help meet growing demand while ensuring patients are directed to the right care more efficiently. Starting in October, all GP surgeries will be required to offer online appointment requests throughout working hours as part of a new contract, which aims to ease phone line pressure and allow smarter triaging based on medical need. Currently, 99% of GP practices in England have already upgraded their phone systems, expanding capacity and reducing long waits for patients. Professor Bola Owolabi, NHS England’s director of healthcare inequalities, said in a statement: “GP teams are delivering over 30 million appointments a month, up nearly 20% on pre-pandemic levels. Patients can also manage repeat prescriptions and view test results through the NHS App, making care more convenient.” The NHS has also announced that AI is enabling GPs and clinicians to cut the time spent on admin and spend more time and effort on patients. Data from AI trials shows an increase in patients seen in A&E, shorter appointments and more time spent by clinicians with patients. Read full story Source: UK Authority, 30 April 2025
  14. Content Article
    The vision of the US Department of Health and Human Services (HHS) is to be a global leader in innovating and adopting responsible AI to achieve unparalleled advances in the health and well-being of all Americans. This HHS AI Strategic Plan provides a framework and roadmap to ensure that HHS fulfils its obligation to the Nation and pioneers the responsible use of AI to improve people’s lives.
  15. News Article
    A groundbreaking artificial intelligence (AI) model is being trained using NHS data from 57 million people in England in the hope it could predict disease and complications before they occur. The world-first study, spearheaded by researchers at University College London (UCL) and King’s College London (KCL), has the potential to “unlock a healthcare revolution”, officials said. The AI, known as Foresight, uses technology similar to that of ChatGPT; however, instead of predicting text, Foresight analyses a patient's medical history to forecast potential future health issues. As part of the pilot, it will be trained using eight routinely collected datasets, including hospital admissions, A&E attendances and Covid-19 vaccination rates, which have been stripped of personal information. “Foresight is a really exciting step towards being able to predict disease and complications before they happen, giving us a window to intervene and enabling a shift towards more preventative healthcare at scale,” Dr Chris Tomlinson of UCL said. Read full story Source: The Independent, 7 May 2025
  16. Content Article
    Orthopaedic surgeon Sunny Deo has spent three decades diagnosing and treating knee joint issues. In this blog, Sunny argues that the healthcare community needs to take a more nuanced approach to diagnosis and decision making so that it can provide patients with safer, more appropriate treatment options. He reflects on why medicine prefers simple answers and looks at how this affects patient care. He goes on to explore how better data collection and the use of artificial intelligence (AI) could provide a more accurate picture of complexity and allow treatment options to be better tailored to individual patients’ needs.
    "To know the patient that has the disease is more important than to know the disease that the patient has." William Osler, father of modern medicine, 1849-1919.
    Diagnosis is the process of identifying the nature of an illness or other problem by examining the symptoms and objective findings from investigations. In modern medicine, it is a key focal point of the assessment and management of all patients. A huge amount of clinical medicine training is focused on the art and science of obtaining a diagnosis, and this focus continues into medical practice. The ease of getting to a diagnosis ranges from the glaringly obvious, the so-called ‘spot diagnosis’, through to cases that are very difficult to solve. In between these extremes there is a range from delayed to missed to incorrect diagnosis. The aim of doctors over the centuries has been to work out diagnoses from patients’ symptoms, presenting features (clinical signs) and, in the past century or so, from the evidence of clinical investigations. Quite often, symptoms, signs and investigations produce consistent patterns, and it is these patterns that are taught to medical and other healthcare professionals. This is how diagnoses and outcomes are portrayed in television series or films—just think back to the last episode of Casualty or Grey’s Anatomy you watched. It's also how things often appear in internet searches and on websites and social media.
    Seeking simple answers to complex questions
    However, the reality is different. When a patient is sitting in front of me, what I hear and observe may not exactly be what the textbooks, evidence or research tells me I should be seeing. But because we are wired and trained to recognise patterns, we tend to look for diagnoses and solutions that fit within the well-worn narrative. What if the pattern doesn’t fit the actual diagnosis? There are classic presentations for nearly every condition, and these are what you tend to find at the start of a Google search or when using NHS Choices. The expectation of typical symptoms sometimes means we ignore what we might see as annoying variance, superfluous detail or the patient embellishing the truth. This discordance then causes tension with a very basic trait of humans: when we’re faced with a difficult problem, we still seek the simplest solution. This is an evolutionary feature hardwired into us to optimise survival chances. It means we often believe there is a truth to be found that will provide us with a definite answer. From this answer we will come to the best, and ideally only, ‘correct’ solution. Patients who don’t fit the set patterns of diagnosis may then run into trouble when we offer them what is considered to be the ideal treatment. This is an important problem in clinical thinking, language and practice.
    As a medical community, we tend to create oversimplified approaches based on research that looks for binary answers to complex questions. This research evidence may be based on a small, highly selective ‘typical’ patient cohort, but its findings and conclusions are then translated on to the entire population. This approach results in poor patient outcomes and experience for a small but significant proportion of patients.
    Pathways designed for ideal diagnoses can cause harm to patients
    Over my 30 years as an orthopaedic surgeon, 15 as a knee specialist, I have seen that the assessment and treatment of any given condition isn't quite as predictable as we would like it to be. While many patients fit the pattern we are expecting, some do not. I would empirically put the proportion at 60:40, but some unpublished research we did a decade ago suggested the proportion of truly ‘typical’ case presentations for a common condition is much lower. For example, we found that in the case of suspected meniscal tear, this diagnosis actually applied to only 33% of patients, with a variety of other diagnoses accounting for the rest. It gets worse when large organisations start to lump patients into a category by condition in a ‘one diagnosis fits all’ strategy. When this approach is taken, there are winners and losers. The winners are those patients whose condition very closely matches the classic presentation of a given condition in isolation. Let’s take the example of knee osteoarthritis—patients with the ‘right type’ of symptoms, physical signs and x-ray changes are generally more likely to do well. Their recovery is more likely to sit within the knowledge base of treating the condition that has evolved over the past half-century. In contrast, patients whose symptoms and test results fall outside of this category may be less likely to do well or recover in the predicted timeframe. This also applies to patients with additional diagnoses or conditions, often termed comorbidities, which may interact, usually in a bad way, with the condition at hand. Failure to consider other diagnoses, whether through over-focus on one condition, wilful ignorance or simple inattention, may lead to unexpected poor outcomes from a given treatment. It may also mean that the symptoms from the condition that the patient presents with are worse than expected. This doesn’t mean that they won't gain any benefit from a particular treatment, but the risks and potential outcomes may not be communicated adequately by the patient’s healthcare team, if at all. For example, for patients with painful knee osteoarthritis, the current diagnosis-to-treatment logic runs like this:
    • Knee osteoarthritis is a painful condition.
    • Total knee replacement surgery is a validated, safe procedure with significant improvements in quality of life.
    • Other treatment options do not produce as much positive therapeutic benefit compared to total knee replacement surgery.
    • Therefore, total knee replacement surgery is the only treatment for painful knee osteoarthritis.
    However, there are patients for whom knee replacement surgery is not a safe or practical option, and these patients may benefit from alternative treatments that are not currently offered because they are seen as providing limited benefit. This may be because participants in trials undertaken over the years had varying diagnoses, meaning that comparisons of alternative options may have been confounded by additional interacting diagnoses or by a failure to account for differing severity.
    Understanding the spectrum of complexity
    As healthcare professionals, we have a duty to diagnose patients as accurately as possible. In orthopaedics, if treatments go wrong or are poorly undertaken, it may lead to prolonged or permanent pain or disability, and we obviously want to avoid this as much as possible. Incomplete identification and documentation of all relevant symptoms and health conditions can potentially lead to an increased risk of treatment failure and complications. Our priority should be to identify these diagnoses or diagnostic clusters as accurately as possible. I think these are basic principles we need to apply to create better systems and improved care for as many patients as possible. In my view, there are grades of ‘atypical patients’ and I have devoted the past decade to trying to demonstrate this, with surprisingly stiff resistance from peer-reviewed journals and funding organisations. I have tried to move away from lumping all patients into a single category. I have done some research on seemingly straightforward soft tissue problems and osteoarthritis in the knee. My initial analysis suggests that we need to collect more detailed and accurate data, rather than simplifying data into minimum datasets. This is where AI can really come into its own, not as a diagnostic tool initially, but as a powerful aid to unlocking and interpreting some of the diagnostic interactions that create problems for patients. However, the use of AI does need to be undertaken with extreme care and consideration, and this isn’t always happening currently. To offer healthcare that is truly person-centred, we need to look beyond our well-worn simple answers and solutions. By using better data and new machine learning tools to understand the nuances of each person’s condition and how it relates to their wider health, we can offer treatment options that are safer, kinder and more cost-effective.
    Share your views
    We would love to hear your views on the issues highlighted in Sunny’s blog. Are you a clinician who would like to share your experiences? Do these challenges resonate with you? Or are you a patient who has experienced complications because of poor, missed or inadequate diagnosis? Add your comment below (you will need to be a hub member and signed in) or contact us at [email protected] and we can share your story anonymously.
    Related content on the hub:
    • Using data to improve decision making and person-centred care in surgery: An interview with Sunny Deo and Matthew Bacon
    • Diagnostic errors and delays: why quality investigations are key
  17. Content Article
    This guidance offers high-level information to assist those adopting ambient scribing products featuring generative artificial intelligence (AI) for use across health and care settings in England. These products are sometimes referred to as ambient scribes or AI scribes and include advanced ambient voice technologies (AVTs) used for clinical or patient documentation and workflow support. The guidance is intended for settings aiming to implement a specific product or function of an existing product.
  18. Content Article
    In today’s digital era, data is generated at an unprecedented scale, and healthcare is no exception: data is produced continuously as a result of our interactions with healthcare organisations, whether community, acute or tertiary. The challenge for healthcare institutions and their governance systems is to utilise this rich healthcare data effectively and efficiently to improve patient outcomes. Towards this objective, AI is emerging as a key enabling tool. Infection prevention and control (IPC) units have varied work streams (infection surveillance, patient pathway monitoring, novel pathogen intelligence, and policy and guidance directives) and are well placed to take a leading role in utilising healthcare data to increase the impact of these activities. IPC is poised to embrace the transformative potential of AI, ensuring its services evolve in step with technological advances. Achieving this will require a multi-pronged approach to support the parallel development of both AI capabilities and IPC services.
  19. Content Article
    Lord Darzi’s review into the future of the NHS calls for a “tilt towards technology” to unlock greater productivity. But there’s a hard reality: many parts of the NHS aren’t yet ready to take advantage of what tech has to offer, which means a missed opportunity for trusts to streamline operations and transform care. Only one in five NHS organisations are considered “digitally mature”, and despite lots of progress in the last decade, there are still areas of the NHS relying on paper and non-digital processes. This makes embracing new technologies, such as AI, feel like an unattainable goal – one that goes beyond moving the health service from analogue to digital. As we enter this new era for the NHS, data and digital skills across the workforce will be fundamental to improving patient care, streamlining processes and making cost savings.
  20. News Article
    Experts are urging caution over artificial intelligence in healthcare, warning that a focus on predictive accuracy over treatment efficacy could lead to patient harm. Researchers in the Netherlands warn that while AI-driven outcome prediction models (OPMs) are promising, they risk creating “self-fulfilling prophecies” due to biases in historical data. OPMs utilise patient-specific information, including health history and lifestyle factors, to assist doctors in evaluating treatment options. AI’s ability to process this data in real time offers significant advantages for clinical decision making. However, the researchers’ mathematical models demonstrate a potential downside: if trained on data reflecting historical disparities in treatment or demographics, AI could perpetuate these inequalities, leading to suboptimal patient outcomes. The study highlights the crucial role of human oversight in AI-driven healthcare. Researchers emphasise the “inherent importance” of applying “human reasoning” to AI’s decisions, ensuring that algorithmic predictions are critically evaluated and do not inadvertently reinforce existing biases. The team created mathematical scenarios to test how AI may harm patient health and suggest that these models “can lead to harm”. “Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,” researchers said. “We show, however, that using prediction models for decision making can lead to harm, even when the predictions exhibit good discrimination after deployment. “These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.” Read full story Source: The Independent, 12 April 2025
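    The mechanism the researchers describe can be made concrete with a toy simulation. The sketch below uses invented numbers (it is not the study's actual model): a prediction model flags part of a cohort as high risk, clinicians respond by withholding an effective treatment from that group, and the group's outcomes duly worsen.

      import random

      random.seed(0)

      # Toy illustration of a self-fulfilling prophecy in outcome prediction.
      # All numbers are invented for illustration only.
      BASE_RECOVERY = 0.70      # recovery probability when treated
      UNTREATED_PENALTY = 0.30  # drop in recovery probability if treatment is withheld

      def simulate(patients: int, act_on_prediction: bool) -> float:
          """Return the cohort recovery rate, with or without the model steering care."""
          recoveries = 0
          for _ in range(patients):
              flagged_high_risk = random.random() < 0.5  # model flags half the cohort
              p = BASE_RECOVERY
              if flagged_high_risk and act_on_prediction:
                  p -= UNTREATED_PENALTY  # deployment changes care, worsening outcomes
              recoveries += random.random() < p
          return recoveries / patients

      print("recovery rate, model ignored: ", simulate(100_000, False))
      print("recovery rate, model acted on:", simulate(100_000, True))

    In a run like this, the flagged group does worse only because treatment was withheld, so the model's prediction of poor outcomes for that group continues to look accurate after deployment, which is exactly the harmful feedback loop the researchers warn about.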
  21. Content Article
    The transition into Industry 5.0 is driven by a bold ambition: to transform organisations worldwide through technological advancements. As leaders navigate this era of unprecedented digital transformation, they must confront the challenges posed by emerging technologies as they begin to impact traditional work structures, employee productivity, organisational security and workforce dynamics. The rise of generative artificial intelligence (AI), digital tools and other technological innovations brings both promise and confusion. But it also presents an unparalleled opportunity: to redefine the role of human labour. To effect real change, leaders must meet this moment head on. By shifting from routine, repetitive tasks to higher-order, creative and strategic roles, organisations can unlock innovation, value and progress on a transformative scale. This report explores the importance of a human-centred approach to technological adoption. It emphasises the need for leaders to prepare their workforces for disruption, integrate AI as a force for good and embrace their ethical responsibilities in guiding their organisations forward. Additionally, it provides actionable insights into effective change management strategies, equipping leaders to navigate the complexities of AI and technological transformation with confidence and purpose.
  22. Content Article
    Live stream recording of Day 1 of the 7th Global Summit on Patient Safety, organised by the Department of Health of the Republic of the Philippines and co-sponsored by the World Health Organization (WHO). This event focuses on advancing international efforts to improve healthcare quality and safeguard patients worldwide. It brings together global leaders, experts and stakeholders to discuss and shape the future of patient safety.
    • Advancing Patient Safety Reporting and Learning Systems can be found at 2:46:57.
    • Plenary 3 on AI and health can be found at 08:05:10.
    Related reading on the hub: 15 hub top picks for the 7th Global Ministerial Summit for Patient Safety
  23. News Article
    Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. However, the current trend of assistive AI implementation could actually worsen challenges related to error prevention and physician burnout, according to a new brief published in JAMA Health Forum. The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and The University of Texas at Austin McCombs School of Business, explains that there is an increasing expectation of physicians to rely on AI to minimize medical errors. However, proper laws and regulations are not yet in place to support physicians as they make AI-guided decisions, despite the fierce adoption of these technologies among healthcare organisations. The researchers predict that medical liability will depend on whom society considers at fault when the technology fails or makes a mistake, subjecting physicians to an unrealistic expectation of knowing when to override or trust AI. The authors warn that such an expectation could increase the risk of burnout and even errors among physicians. "AI was meant to ease the burden, but instead, it’s shifting liability onto physicians - forcing them to flawlessly interpret technology even its creators can’t fully explain," said Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. "This unrealistic expectation creates hesitation and poses a direct threat to patient care." Read full story Source: Digital Health News, 26 March 2025
  24. Content Article
    The 7th Global Ministerial Summit for Patient Safety, organised by the Department of Health of the Republic of the Philippines and co-sponsored by the World Health Organization (WHO), takes place on 3-4 April 2025 in Manila. This event focuses on advancing international efforts to improve healthcare quality and safeguard patients worldwide. It brings together global leaders, experts and stakeholders to discuss and shape the future of patient safety.
    Global Ministerial Summits on Patient Safety aim to drive forward the global patient safety movement. Beginning in 2016, they have helped to keep patient safety high on policy makers’ agendas and helped build the momentum needed to create the first WHO Global Patient Safety Action Plan, published in August 2021. This year’s Summit in Manila seeks to support the implementation of the Global Patient Safety Action Plan, embracing the theme "Weaving Strengths for the Future of Patient Safety Throughout the Healthcare Continuum." The event highlights the current implementation progress, showcasing diverse approaches and strategic plans adopted by countries. The Summit will include discussions around:
    • The role of patient engagement in bridging patient safety gaps.
    • Diagnostic safety.
    • Leveraging artificial intelligence (AI) and technology for patient safety.
    • Creating psychologically safe and healthy workplaces.
    • Investing in patient safety for sustainable healthcare.
    There will be sessions across the two days looking at each of these issues, within the broader context of integrating patient safety in all aspects of healthcare delivery and at all levels of care as a foundation of resilient and sustainable healthcare systems. To support the Global Ministerial Summit, Patient Safety Learning has pulled together some key resources from the hub around the key themes being discussed at the Summit.
    Patient engagement
    1. WHO: Patient safety rights charter
    The Patient safety rights charter is a key resource intended to support the implementation of the Global Patient Safety Action Plan 2021–2030: Towards eliminating avoidable harm in health care. The Charter aims to outline patients’ rights in the context of safety and promotes the upholding of these rights, as established by international human rights standards, for everyone, everywhere, at all times.
    2. Championing the patient voice: a recent discussion with the Patient Safety Commissioner at the Patient Safety Partners Network
    The role of Patient Safety Commissioner for England was created by the UK Government after a recommendation from the Independent Medicines and Medical Devices Safety Review, chaired by Baroness Julia Cumberlege. The Patient Safety Commissioner acts as a champion for patients, leading a drive to improve the safety of medicines and medical devices. This blog provides an overview of a Patient Safety Partners Network meeting where members were joined by Professor Henrietta Hughes, Patient Safety Commissioner for England.
    3. Providing patient-safe care begins with asking and listening... really listening!
    Dan Cohen is an international consultant in patient safety and clinical risk management, and a Trustee for Patient Safety Learning. In this blog, Dan talks about how patient-safe care is all about collaborating with and listening to your patients to find out what really matters to them. He illustrates this in a case study of his own personal experience whilst working as a clinician in the USA.
    Diagnostic safety
    4. The economics of diagnostic safety
    Diagnosis is complex and iterative, and therefore liable to error in accurately and promptly identifying underlying health problems and communicating these to patients. Up to 15% of diagnoses are estimated to be inaccurate, delayed or wrong. Diagnostic errors negatively impact patient outcomes and increase use of healthcare resources. This Health Working Paper from the Organisation for Economic Co-operation and Development (OECD) defines the scope of diagnostic error and illustrates the burden of diagnostic error in commonly diagnosed conditions. It also estimates the direct costs of diagnostic error and provides policy options to improve diagnostic safety.
    5. Improving diagnostic safety in surgery: A blog by Anna Paisley
    Good outcomes for surgical patients require accurate, timely and well-communicated diagnoses. In this blog, Anna Paisley, a Consultant Upper GI Surgeon, talks about the challenges to safe surgical diagnosis and shares some of the strategies available to mitigate these challenges and aid safer, more timely diagnosis.
    6. How early diagnosis saves lives: case study on aortic dissection
    In this blog, The Aortic Dissection Charitable Trust explains why timely and accurate diagnosis of aortic dissection is critical for saving lives. By sharing Martin’s recovery story, they illustrate the positive impact of prompt testing and treatment. The blog highlights the need to improve patient safety relating to aortic dissection, calling for increased education and awareness among healthcare professionals; improved clinical guidelines and protocols; and heightened vigilance in recognising and responding to the symptoms of aortic dissection.
    Artificial intelligence (AI) and technology
    7. Patient Safety and Artificial Intelligence: Opportunities and Challenges for Care Delivery (IHI Lucian Leape Institute)
    In January 2024, the Institute for Healthcare Improvement (IHI) Lucian Leape Institute convened an expert panel to explore the promise and potential risks for patient safety from generative artificial intelligence (genAI). The report that followed summarises three user cases that highlight areas where genAI could significantly impact patient safety: in documentation support, clinical decision support and patient-facing chatbots.
    8. AI in healthcare translation: balancing risk with opportunity
    In an increasingly global healthcare environment, with patients and professionals from many different cultural and linguistic backgrounds, precision in medical document translation is key. In this blog, Melanie Cole, Translations Coordinator at EIDO Systems International, talks about the challenges, risks and opportunities of using AI in healthcare translation.
    9. Integrated human-centred AI in clinical practice: A guide for health and social care professionals
    This is a guide for designers, developers and users of AI in healthcare. It outlines general principles health and social care professionals should consider, a case study drawn from clinical practice and a directory of resources to find out more. It includes key questions that clinicians and AI developers need to answer together to ensure the best possible outcomes. It follows on from the CIEHF's White Paper, Human Factors in Healthcare AI, which sets out a human factors perspective on the use of AI applications in healthcare.
    Psychological safety
    10. Speak up for Safety: A new workshop for healthcare staff about the importance of Just Culture
    The culture of a healthcare organisation can determine how safe its staff members feel to raise concerns about patient safety. Bella Knaapen, Surgical Support Governance & Risk Management Facilitator, and Sarah Leeks, Senior Health & Wellbeing Practitioner at Norfolk and Norwich University Hospitals NHS Foundation Trust, have developed ‘Speak Up For Safety’, a Just Culture training workshop that aims to help staff, at all levels, understand the importance of creating an environment that encourages people to share concerns and feedback.
    11. Balancing care: The psychological impact of ensuring patient safety
    In this blog, Leah Bowden, a patient safety specialist, reflects on the impact her job has on her mental health and family life. She discusses why there needs to be specialised clinical supervision for staff involved in reviewing patient safety incidents and how organisations need to come together to identify ways to support our patient safety teams.
    12. Amy Edmondson: The importance of psychological safety
    As a leader, how can you foster a work environment where people feel safe to speak up, share new ideas and work in innovative ways? In this video from the King's Fund, Amy Edmondson, Novartis Professor of Leadership and Management at the Harvard Business School, talks about the importance of psychological safety in health and care and what leaders can do to create it.
    Sustainability
    13. The Royal College of Surgeons of Edinburgh: Green Theatre Checklist
    Healthcare services globally have a large carbon footprint, accounting for 4-5% of total carbon emissions. Surgery is particularly carbon intensive, with a typical single operation estimated to generate between 150-170kgCO2e, equivalent to driving 450 miles in an average petrol car. The UK and Ireland surgical colleges have recognised that it is imperative to act collectively and urgently to address this issue. The Royal College of Surgeons of Edinburgh has collated a compendium of peer-reviewed evidence, guidelines and policies that inform the interventions included in the Intercollegiate Green Theatre Checklist. This compendium should support members of the surgical team to introduce changes in their own operating departments.
    14. Communicating on climate change and health: Toolkit for health professionals
    Communicating the health risks of climate change and the health benefits of climate solutions is both necessary and helpful. Health professionals are well placed to play a unique role in helping their communities understand climate change, protect themselves, and realise the health benefits of climate solutions. This toolkit from WHO aims to help health professionals communicate effectively about climate change and health.
    15. Climate change: why it needs to be on every Trust's agenda
    The NHS has declared climate change a health emergency, but are trust leaders and healthcare staff talking about and acting on this? Angela Hayes, Clinical Lead Sustainability at the Christie Foundation Trust and a hub Topic Leader, discusses climate change and the impact it has on all of our lives and health. She believes healthcare professionals have a moral duty to act, to protect and improve public health, and should demand stronger action in tackling climate change.
    If you would like to write a blog or have a resource to share on any of the themes highlighted in this blog, please get in touch.
Contact the hub team at [email protected] to discuss further.
  25. News Article
    Ministers have cut millions of pounds of funding for potentially life-saving AI cancer technology in England, which cancer experts warn will increase waiting times and could cause more patients to die. Contouring is used in radiotherapy to ensure treatment is as effective and safe as possible. The tumour and normal tissue are “mapped”, or contoured, on to medical scans to ensure the radiation targets the cancer while minimising damage to healthy tissues and organs. Normally, this is a slow, manual process that can take doctors between 20 and 150 minutes to complete. AI auto-contouring takes less than five minutes and costs around £10-£15 per patient. Research shows that AI contouring can cut waits for radiotherapy by more than five days for breast cancer patients, up to nine days for prostate cancer patients and three days for lung cancer patients. In May 2024, the Conservative government announced £15.5m over three years to fund AI auto-contouring for all hospitals providing radiotherapy. Work continued on the scheme after the general election, with online webinars and follow-up calls for radiotherapy departments held in September. The 51 trusts offering radiotherapy continued to work on installing the cloud-based technology, with a number using it early, in the belief the funding was secured. But in February, in an email seen by the Guardian, Nicola McCulloch, the deputy director of specialised commissioning at NHS England, said the funding had been cancelled “due to a need to further prioritise limited investment”. There would no longer be a centrally funded programme to support implementation of the technology, she said. The decision means many radiotherapy departments face a return to manual contouring, prompting accusations that the government is ditching digital and going back to analogue cancer care. Analysis by Radiotherapy UK has calculated that removing funding for AI contouring in England will add up to 500,000 extra days to waiting lists for breast, prostate and lung cancer alone and leave each of the 51 trusts with a £300,000 shortfall. The chair of Radiotherapy UK, Prof Pat Price, said: “The government cannot laud the advent of AI in one breath, and allow this to happen. Far from moving from an analogue to digital NHS, when it comes to radiotherapy it feels like the opposite is happening. This wrong-footed decision will exacerbate the impact of severe staff shortages.” The leading oncologist urged ministers to intervene. “Some departments are so short-handed that they’re shutting machines down because no one is there to operate them and nationally, radiotherapy vacancy rates are running at 8%. This investment in AI could have alleviated some of these pressures. Without it, cancer patients will wait longer than necessary for treatment, potentially costing their lives.” Read full story Source: The Guardian, 31 March 2025