Showing results for tags 'AI'.


Found 158 results
  1. Content Article
    Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification and allocation of resources. However, bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritised groups and other historically marginalised populations, such as individuals with lower incomes. This study aimed to provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms, in order to promote health and health care equity. The authors suggested five guiding principles:
      • Promote health and health care equity during all phases of the health care algorithm life cycle.
      • Ensure health care algorithms and their use are transparent and explainable.
      • Authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness.
      • Explicitly identify health care algorithmic fairness issues and trade-offs.
      • Establish accountability for equity and fairness in outcomes from health care algorithms.
  2. Content Article
    This systematic review conducted for the Agency for Healthcare Research and Quality (AHRQ) aimed to examine the evidence on whether and how healthcare algorithms exacerbate, perpetuate or reduce racial and ethnic disparities in access to healthcare, quality of care and health outcomes. It also examined strategies that mitigate racial and ethnic bias in the development and use of algorithms. The results showed that algorithms potentially perpetuate, exacerbate and sometimes reduce racial and ethnic disparities. Disparities were reduced when race and ethnicity were incorporated into an algorithm to intentionally tackle known racial and ethnic disparities in resource allocation (for example, kidney transplant allocation) or disparities in care (for example, prostate cancer screening that historically led to Black men receiving more low-yield biopsies).
  3. Content Article
    There is a direct correlation between safety event management practices and care quality outcomes. The right safety management tools, supported by a shared perception and tolerance of risk, will help organisations go beyond reporting event data to improve safety culture.
  4. Event
    Together with the Türkiye Health Care Quality and Accreditation Institute (TUSKA) and the Ministry of Health, Türkiye, ISQua is delighted to host its 40th International Conference in Istanbul. The theme for the 2024 conference is 'Health for People and Planet: Building Bridges to a Sustainable Future'. It will address the continued challenges of making person-centred care part of the healthcare system, as well as some of the hot topics that matter most in a rapidly changing world. Issues such as environmental challenges, reducing the healthcare sector's carbon footprint, and ensuring the long-term resilience of healthcare will be addressed at the conference. It will also examine the potential and pitfalls of AI and digital transformation in healthcare, and how they can revolutionise healthcare and enable better patient engagement. Further information
  5. Content Article
    The Medicines and Healthcare products Regulatory Agency (MHRA) has published a roadmap which outlines the intended timelines for delivering the future regulatory framework for medical devices.
  6. Event
    PPP's 2024 Cancer Care programme kicks off with this report launch webinar on AI in Imaging Diagnostics. While discussions concerning artificial intelligence (AI) have come to dominate public discourse since the launch of ChatGPT last year, in healthcare AI has been the subject of intense debate for some time. Many of the key talking points that define the debate in healthcare echo those of the wider debate, namely the unintended consequences of unleashing unregulated algorithms across the sector and the potentially profound implications AI could have upon workforces globally. However, it is perhaps in healthcare where AI stands to make its greatest and most positive impact. Healthcare is a data-rich industry, with the treatment of patients leading to the production of vast amounts of medical records, images, lab results, and numerous other data outputs. This multimodal data can be used to train a wide range of AI systems, leading to the development of new, more targeted drug treatments and diagnostic tools, more personalised care, and a more efficient healthcare system. Join an expert panel as they help to launch PPP's newest report exploring what it takes to begin implementing AI at scale in imaging diagnostics in the NHS. Register for the webinar
  7. Content Article
    Large language models such as OpenAI's GPT-4 have the potential to transform medicine by enabling automation of a range of tasks, including writing discharge summaries, answering patient questions, and supporting clinical treatment planning. These models are so useful that their adoption has been immediate, and efforts are already well underway to integrate them with ubiquitous clinical information systems. However, the unchecked use of the technology has the potential to cause harm. In this article for The Lancet, Janna Hastings looks at the need to mitigate racial and gender bias in language models that may be used in healthcare settings.
  8. Content Article
    Hospitals are complex adaptive systems. They are industrial environments where it isn't always possible to expect predictable responses to inputs. Patient safety management practices need to adapt to align with the environment in which events occur. It is time to reimagine safety event reporting and management solutions that guide, not prescribe, investigations and improvement actions.
  9. News Article
    A hospital has introduced a new artificial intelligence system to help doctors treat stroke patients. The RapidAI software was recently used for the first time at Hereford County Hospital. It analyses patients' brain images to help decide whether they need an operation or drugs to remove a blood clot. Wye Valley NHS Trust, which runs the hospital, is the first in the West Midlands to roll out the software. Jenny Vernel, senior radiographer at the trust, said: “AI will never replace the clinical expertise that our doctors and consultants have. "But harnessing this latest technology is allowing us to make very quick decisions based on the experiences of thousands of other stroke patients.” Radiographer Thomas Blackman told BBC Hereford and Worcester that it usually takes half an hour for the information to be communicated. He said the new AI-powered system now means it is "pinged" to the relevant teams' phones via an app in a matter of minutes. "It's improved the patient pathway a lot," he added. Read full story Source: BBC News, 7 December 2023
  10. Content Article
    New developments in artificial intelligence (AI) are extensively discussed in public media and scholarly publications. While in many academic disciplines debates on the challenges and opportunities of AI and how to best address them have been launched, the human factors and ergonomics (HFE) community has been strangely quiet. In this paper, Gudela Grote discusses three main areas in which HFE could and should significantly contribute to the socially and economically viable development and use of AI: decisions on automation versus augmentation of human work; alignment of control and accountability for AI outcomes; counteracting power imbalances among AI stakeholders. She then outlines actions that the HFE community could undertake to improve their involvement in AI development and use, foremost translating ethical into design principles, strengthening the macro-turn in HFE, broadening the HFE design mindset, and taking advantage of new interdisciplinary research opportunities.
  11. Event

    IHI Forum

    The IHI Forum is a four-day conference that has been the home of quality improvement in health care for more than 30 years. Dedicated improvement professionals from across the globe will be convening to tackle health care's most pressing challenges: improvement capability, patient and workforce safety, equity, climate change, artificial intelligence, and more. Register
  12. News Article
    Artificial intelligence could be used to predict if a person is at risk of having a heart attack up to 10 years in the future, a study has found. The technology could save thousands of lives while improving treatment for almost half of patients, researchers at the University of Oxford said. The study, funded by the British Heart Foundation (BHF), looked at how AI might improve the accuracy of cardiac CT scans, which are used to detect blockages or narrowing in the arteries. Prof Charalambos Antoniades, chair of cardiovascular medicine at the BHF and director of the acute multidisciplinary imaging and interventional centre at Oxford, said: “Our study found that some patients presenting in hospital with chest pain – who are often reassured and sent back home – are at high risk of having a heart attack in the next decade, even in the absence of any sign of disease in their heart arteries. “Here we demonstrated that providing an accurate picture of risk to clinicians can alter, and potentially improve, the course of treatment for many heart patients.” Read full story Source: The Guardian, 13 November 2023
  13. Content Article
    Structural, economic and social factors can lead to inequalities in the length of time people wait for NHS planned hospital care – such as hip or knee operations – and their experience while they wait. In 2020, after the first wave of the Covid-19 pandemic, NHS England asked NHS trusts and systems to take an inclusive approach to tackling waiting lists by disaggregating waiting times by ethnicity and deprivation to identify inequalities and to take action in response. This was an important change to how NHS organisations were asked to manage waiting lists – embedding work to tackle health inequalities into the process.

    Between December 2022 and June 2023, the King’s Fund undertook qualitative case studies about the implementation of this policy in three NHS trusts and their main integrated care boards (ICBs), and interviewed a range of other people about using artificial intelligence (AI) to help prioritise care. It also reviewed literature, NHS board papers and national waiting times data. The aim was to understand how the policy was being interpreted and implemented locally, and to extract learning from this.

    It found work was at an early stage, although there were examples of effective interventions that made appointments easier to attend, and prioritised treatment and support while waiting. Reasons for the lack of progress included a lack of clarity about the case for change, operational challenges such as poor data, cultural issues including different views about a fair approach, and a lack of accountability for the inclusive part of elective recovery.

    Taking an inclusive approach to tackling waiting lists should be a core part of effective waiting list management and can contribute to a more equitable health system and healthier communities. Tackling inequalities on waiting lists is also an important part of the NHS’s wider ambitions to address persistent health inequalities. But to improve the slow progress to date, NHS England, ICBs and trusts need to work with partners to make the case for change, take action and hold each other to account.
  14. Content Article
    This document from the Patient Experience Library aims to map the evidence base for patient experience in digital healthcare. We shine a spotlight on areas of saturation, we expose the gaps and we make suggestions for how research funders and national NHS bodies could steer the research to get better value and better learning.
  15. Content Article
    Artificial intelligence, as a nonhuman entity, is increasingly used to inform, direct, or supplant nursing care and clinical decision-making. The boundaries between human- and nonhuman-driven nursing care are blurring with the advent of sensors, wearables, camera devices, and humanoid robots at such an accelerated pace that their influence on patient safety has not yet been critically evaluated. Since the pivotal release of To Err is Human, patient safety has been challenged by the dynamic healthcare environment like never before, with nursing at a critical juncture to steer the course of artificial intelligence integration in clinical decision-making. This paper presents an overview of artificial intelligence and its application in healthcare, and highlights the implications for nursing as a profession, including perspectives on nursing education and training recommendations. It also discusses the legal and policy challenges that emerge when artificial intelligence influences the risk of clinical errors and safety issues.
  16. News Article
    ChatGPT, the artificial intelligence tool, may be better than a doctor at following recognised treatment standards for depression, and without the gender or social class biases sometimes seen in the physician-patient relationship, a study suggests. The findings were published in Family Medicine and Community Health. The researchers said further work was needed to examine the risks and ethical issues arising from AI’s use. Globally, an estimated 5% of adults have depression, according to the World Health Organization. Many turn first to their GP for help. Recommended treatment should largely be guided by evidence-based clinical guidelines in line with the severity of the depression. ChatGPT has the potential to offer fast, objective, data-based insights that can supplement traditional diagnostic methods as well as providing confidentiality and anonymity, according to researchers from Israel and the UK. Read full story Source: The Guardian, 16 October 2023
  17. Content Article
    This is part of our series of Patient Safety Spotlight interviews, where we talk to people working for patient safety about their role and what motivates them. Ashley talks to us about the need to professionalise patient safety roles while also upskilling frontline healthcare staff to improve patient safety, describing the role that professional coaching can play. He also discusses the challenges we face in understanding how AI affects decision making in healthcare and how it could contribute to patient safety incidents.
  18. News Article
    Brain surgery using artificial intelligence could be possible within two years, making it safer and more effective, a leading neurosurgeon says. Trainee surgeons are working with the new AI technology, to learn more precise keyhole brain surgery. Developed at University College London, it highlights small tumours and critical structures such as blood vessels at the centre of the brain. The government says it could be "a real game-changer" for healthcare in the UK. Brain surgery is precise and painstaking - straying a millimetre the wrong way could kill a patient instantly. Avoiding damaging the pituitary gland, the size of a grape, at the centre of the brain, is critical. It controls all the body's hormones - and any problems with it can cause blindness. Read full story Source: BBC News, 28 September 2023
  19. News Article
    ChatGPT could be used to diagnose patients in a bid to reduce waiting times in emergency departments, researchers have suggested. It comes after a study found the language model, powered by artificial intelligence (AI), “performed well” in generating a list of diagnoses for patients and suggesting the most likely option. Researchers in the Netherlands entered the records of 30 patients who visited an emergency department in 2022, as well as anonymous doctors’ notes, into ChatGPT versions 3.5 and 4.0. The AI analysis was compared to two clinicians who made a diagnosis based on the same information, both with and without laboratory data. When lab data was included, doctors had the correct answer in their top five differential diagnoses in 87% of cases, compared with 97% for ChatGPT 3.5 and 87% for ChatGPT 4.0. There was a 60% overlap between the differential diagnoses by clinicians and ChatGPT. The team said that while ChatGPT was “able to suggest medical diagnoses much like a human doctor would”, more work is needed before it is applied in the real world. Read full story Source: The Independent, 13 September 2023
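    To make the reported measures concrete, here is a minimal, hypothetical sketch of the two comparisons described above: whether the correct diagnosis appears in a top-five differential list, and the overlap between a clinician's and the model's top-five lists. The diagnoses and numbers below are invented for illustration and are not data from the study.

        # Hypothetical illustration of the evaluation measures described above.
        # The example diagnoses are invented; they are not from the study.

        def correct_in_top5(top5, correct):
            """Return True if the correct diagnosis appears in a top-5 differential list."""
            return correct.lower() in (d.lower() for d in top5)

        def top5_overlap(list_a, list_b):
            """Fraction of diagnoses shared between two top-5 lists."""
            shared = {d.lower() for d in list_a} & {d.lower() for d in list_b}
            return len(shared) / 5

        clinician = ["pulmonary embolism", "pneumonia", "heart failure",
                     "COPD exacerbation", "pericarditis"]
        chatgpt = ["pneumonia", "pulmonary embolism", "pleural effusion",
                   "heart failure", "lung cancer"]

        print(correct_in_top5(chatgpt, "pulmonary embolism"))  # True
        print(top5_overlap(clinician, chatgpt))                # 0.6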
  20. Content Article
    The NHS.uk website averaged over 2,000 visitors per minute in 2022 and, while websites are hardly considered cutting edge, this technology is important to help make trusted and reliable health and care knowledge easily accessible to patients and the public. Web-based information, alongside access to medical records and personalised care initiatives, means people are potentially more informed to make decisions and be actively involved in their own care. However, simply having access to information doesn’t necessarily make it usable.
  21. Event
    Developing trust when it comes to the employment of AI-driven healthcare is a complex challenge, and one that’s easy to get wrong. Daniel Morris, Partner at Bevan Brittan, Mahesh Hariharan, Founder and CEO of Zupervise, and Surabhi Srivastava, Commercial VP of Qure.ai, will together explore the importance of trust in AI-driven healthcare, and how effective governance can help build trust between patients & providers. They will discuss topics such as: data provenance; algorithmic transparency; and the role of human oversight in ensuring patient safety and data security. Register
  22. Content Article
    Delirium is a common but underdiagnosed state of disturbed attention and cognition that afflicts one in four older hospital inpatients. It is independently associated with a longer length of hospital stay, mortality, accelerated cognitive decline and new-onset dementia. Risk stratification models enable clinicians to identify patients at high risk of an adverse event and intervene where appropriate. The advent of wearables, genomics, and dynamic datasets within electronic health records (EHRs) provides big data to which machine learning (ML) can be applied to individualise clinical risk prediction. ML is a subset of artificial intelligence that uses advanced computer programmes to learn patterns and associations within large datasets and develop models (or algorithms), which can then be applied to new data to rapidly produce predictions or classifications, including diagnoses. The objectives of this review from Strating et al. were to: (1) provide a more contemporary overview of research on all ML delirium prediction models designed for use in the inpatient setting; (2) characterise them according to their stage of development, validation and deployment; and (3) assess the extent to which their performance and utility in clinical practice have been evaluated.
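    As a purely illustrative sketch of the kind of ML risk model the review describes, the Python below fits a logistic regression to synthetic "EHR-style" features and produces risk probabilities for held-out patients. The feature names, effect sizes and data are invented assumptions, not a model from any study in the review.

        # Illustrative only: a toy risk-stratification model trained on synthetic data.
        # Feature names and effect sizes are invented, not taken from the review.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n = 1000

        # Hypothetical EHR-style features for older inpatients
        age = rng.normal(75, 8, n)            # years
        cognition = rng.normal(25, 4, n)      # baseline cognitive test score
        polypharmacy = rng.poisson(6, n)      # number of regular medications
        sodium = rng.normal(138, 4, n)        # serum sodium, mmol/L
        X = np.column_stack([age, cognition, polypharmacy, sodium])

        # Synthetic outcome: delirium risk rises with age and polypharmacy,
        # falls with better baseline cognition (assumed relationships).
        logit = 0.06 * (age - 75) - 0.15 * (cognition - 25) + 0.12 * (polypharmacy - 6) - 2.0
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        # Learn patterns from "historical" data, then predict risk for new patients
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        risk = model.predict_proba(X_test)[:, 1]
        print("AUROC on held-out patients:", round(roc_auc_score(y_test, risk), 2))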
  23. Content Article
    Healthcare is where the "most exciting" opportunities for artificial intelligence (AI) lie, an influential MP has said, but is also an area where the technology's major risks are illustrated. Greg Clark, chairman of the Commons Science, Innovation and Technology Committee (SITC), said the wider adoption of AI in healthcare would have a "positive impact", but urged policy makers to "consider the risks to safety". He said: "If we're to gain all the advantages, we have to anticipate the risks and put in place measures to safeguard against that." An interim report published by the Science, Innovation and Technology Committee sets out the Committee’s findings from its inquiry so far, and the twelve essential challenges that AI governance must meet if public safety and confidence in AI are to be secured.
  24. Content Article
    While there is much potential and promise for the use of artificial intelligence in improving the safety and efficiency of health systems, this can at times be weakened by a narrow technology focus and by a lack of independent real-world evaluation. It should be expected that when AI is integrated into health systems, challenges to safety will emerge, some old and some novel. In this chapter of the book Safety in the Digital Age: Sociotechnical Perspectives on Algorithms and Machine Learning, Mark Sujan argues that to address these issues, a systems approach is needed for the design of AI from the outset. He draws on two examples to help illustrate these issues: the design of an autonomous infusion pump, and the implementation of AI in an ambulance service call centre to detect out-of-hospital cardiac arrest.
  25. News Article
    The use of artificial intelligence in breast cancer screening is safe and can almost halve the workload of radiologists, according to the world’s most comprehensive trial of its kind. Breast cancer is the most prevalent cancer globally, according to the World Health Organization, with more than 2.3 million women developing the disease every year. Screening can improve prognosis and reduce mortality by spotting breast cancer at an earlier, more treatable stage. Preliminary results from a large study suggest AI screening is as good as two radiologists working together, does not increase false positives and almost halves the workload. The interim safety analysis results of the first randomised controlled trial of its kind involving more than 80,000 women were published in the Lancet Oncology journal. Read full story Source: The Guardian, 2 August 2023