Showing results for tags 'AI'.


Found 157 results
  1. Content Article
    One of the areas where human factors is gaining traction is the healthcare sector. Progress is still slow, with much work to be done, and it is becoming more urgent as new technologies become available to improve procedures and processes and potentially support more effective patient outcomes. Dr Mark Sujan has taken this challenge head on by launching the Artificial Intelligence and Digital Health Special Interest Group with the CIEHF. In this podcast, we find out more about Mark and his motivations, as well as his intentions for the Special Interest Group.
  2. Content Article
    Researchers have developed an artificial intelligence (AI) tool for rapidly detecting COVID-19 in people arriving at a hospital’s emergency department. The tool can accurately rule out infection within an hour of a patient arriving at hospital, significantly faster than the PCR (polymerase chain reaction) test, which typically has a turnaround time of 24 hours.
  3. Content Article
    Healthcare is becoming both increasingly data-driven and automated. The authors of this blog, published by the London School of Economics, found that opportunities for patients to influence and inform these future technologies are often lacking, which in turn may heighten disillusionment and distrust. They therefore propose four priorities for new data-driven technologies to ensure they are ethical, effective and equitable for diverse patient groups: public voice; individuals’ diversity; participatory co-design; and open knowledge development and exchange. Read the blog in full via the link below.
  4. Content Article
    Artificial intelligence tools and deep learning models are powerful tools in cancer treatment. They can be used to analyse digital images of tumour biopsy samples, helping physicians quickly classify the type of cancer, predict prognosis and guide a course of treatment for the patient. However, unless these algorithms are properly calibrated, they can sometimes make inaccurate or biased predictions, as Howard et al. demonstrate in this study.
  5. Content Article
    Although most current medication error prevention systems are rule-based, these systems may result in alert fatigue because of poor accuracy. The authors had previously developed a machine learning (ML) model based on Taiwan’s local databases (TLD) to address this issue; however, the international transferability of that model was unclear. This study examines the international transferability of an ML model for detecting medication errors, and whether a federated learning approach could further improve the model's accuracy. It found that the ML model transferred well to US hospital data, and that using federated learning with local hospital data further improved the model's accuracy.
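    The federated learning approach mentioned in this study can be illustrated with a minimal FedAvg-style sketch: each hospital trains locally and shares only model weights, and a coordinator averages them, weighted by sample counts. The function name and weighting scheme here are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Combine per-hospital model weights into one global model,
    weighting each site by how many samples it trained on (FedAvg-style).
    Illustrative sketch only -- not the study's actual method."""
    coeffs = np.array(sample_counts, dtype=float) / float(sum(sample_counts))
    # One row of weights per hospital.
    stacked = np.stack([np.asarray(w, dtype=float) for w in local_weights])
    # Weighted sum over the site axis gives the global weights.
    return np.tensordot(coeffs, stacked, axes=1)
```

    The key privacy property is that only model parameters cross institutional boundaries; patient records stay at each hospital.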
  6. Content Article
    Reports from the G7 working groups on AI governance and interoperability setting out how the G7 are implementing their commitments on digital health.
  7. Content Article
    The theme for the 4th Learning from Excellence Community Event was “Being better, together”, reflecting LfE's aspiration to grow as individuals, and as part of a community, through focussing on what works. For this event, LfE partnered with the Civility Saves Lives (CSL) team, who promote the importance of kindness and civility at work and seek to help us to address the times this is lacking in a thoughtful and compassionate way, through their Calling it out with Compassion programme.
  8. Event
    As the adoption of artificial intelligence (AI) in health and care continues to progress rapidly, it's essential that clinicians ensure this technology is used for the benefit of patients and to assist us in providing equitable and high-quality care both now and in the future. However, it's also crucial that we are aware of the potential risks and unintended consequences of using AI. This month, the RSM will delve into the development of machine learning (ML) and AI and their applications to healthcare. It will also debate the need for ethical guidelines and regulation in this field. By attending this event, you will understand:
      • What machine learning and artificial intelligence are.
      • How AI is currently being applied to healthcare, and potential future uses.
      • How data drives AI, and the potential bias within that data.
      • The ways ML and AI can lead to errors and harm.
      • The ethical issues surrounding the use of AI in healthcare.
      • The need for regulation and governance, both in healthcare and in broader society.
  9. Community Post
    NHS hospital staff spend countless hours capturing data in electronic prescribing and medicines administration systems. Yet that data remains difficult to access and use to support patient care. This is a tremendous opportunity to improve patient safety, drive efficiencies and save time for frontline staff. I have just published a post about this challenge and Triscribe's solution. I would love to hear any comments or feedback on the topic... How could we use this information better? What are hospitals already doing? Where are the gaps? Thanks
  10. Community Post
    Subject: Looking for Clinical Champions (Patient Safety Managers, Risk Managers, Nurses, Frontline clinical staff) to join AI startup Hello colleagues, I am Yesh. I am the founder and CEO of Scalpel. <www.scalpel.ai> We are on a mission to make surgery safer and more efficient, with ZERO preventable incidents across the globe. We are building an AI (artificially intelligent) assistant for surgical teams so that they can perform safer and more efficient operations. (I know AI is used vaguely everywhere these days; to be very specific, we use a sensor fusion approach and deploy Computer Vision, Natural Language Processing and Data Analytics in the operating room to address preventable patient safety incidents in surgery.) We have been working with multiple NHS trusts, including Leeds, Birmingham and Glasgow, for the past two years. For successful adoption of our technology into the wider healthcare ecosystem, we are looking for champion clinicians who have a deep understanding of the pitfalls in current surgical safety protocols and of the innovation process in healthcare, and who would like to make a true difference with cutting-edge technology. You will be part of a collaborative and growing team of engineers and data scientists based in our central London office. This role is an opportunity for you to collaborate in making a difference in billions of lives that lack access to safe surgery. Please contact me for further details. Thank you Yesh yesh@scalpel.ai
  11. Content Article
    Healthcare is where the "most exciting" opportunities for artificial intelligence (AI) lie, an influential MP has said, but is also an area where the technology's major risks are illustrated. Greg Clark, chairman of the Commons Science, Innovation and Technology Committee (SITC), said the wider adoption of AI in healthcare would have a "positive impact", but urged policy makers to "consider the risks to safety". He said: "If we're to gain all the advantages, we have to anticipate the risks and put in place measures to safeguard against that." An interim report published by the Science, Innovation and Technology Committee sets out the Committee’s findings from its inquiry so far, and the twelve essential challenges that AI governance must meet if public safety and confidence in AI are to be secured.
  12. Content Article
    While there is much potential and promise for the use of artificial intelligence in improving the safety and efficiency of health systems, this can at times be weakened by a narrow technology focus and by a lack of independent real-world evaluation. It should be expected that when AI is integrated into health systems, challenges to safety will emerge, some old, and some novel. In this chapter of the book Safety in the Digital Age: Sociotechnical Perspectives on Algorithms and Machine Learning, Mark Sujan argues that to address these issues, a systems approach is needed for the design of AI from the outset. He draws on two examples to help illustrate these issues: Design of an autonomous infusion pump and Implementation of AI in an ambulance service call centre to detect out-of-hospital cardiac arrest.
  13. Content Article
    Patient satisfaction surveys rely largely on numerical ratings, but applying artificial intelligence (AI) to analyse respondents’ free-text comments can yield deeper insights. AI presents the ability to reveal insights from large sets of this type of unstructured data. The authors’ analysis here presents AI-enabled insights into what different racial and ethnic groups of patients say about physicians’ courtesy and respect. This analysis illustrates one method of leveraging AI to improve the quality and value of care.
  14. Community Post
    Artificial Intelligence is creating a lot of buzz in the US and around the world. This perspective from the US site AHRQ Patient Safety Net explores a range of issues that could affect the uptake of artificial intelligence systems in health care. What do hub members think? Are we destined to encounter HAL (from 2001: A Space Odyssey) or Samantha (from Her)? Emerging safety issues in artificial intelligence
  15. Content Article
    Generative AI is being heralded in the medical field for its potential to ease the burden of medical documentation by generating visit notes, treatment codes and medical summaries. Doctors and patients might also turn to generative AI to answer medical questions about symptoms, treatment recommendations or potential diagnoses. This article in JAMA Network looks at the liability implications of using AI to generate health information, highlighting that no court in the US has yet considered the question of liability for medical injuries caused by relying on AI-generated information.
  16. Content Article
    NHS hospital staff spend countless hours capturing data in electronic prescribing and medicines administration systems. Yet that data remains difficult to access and use to support patient care. This is a tremendous opportunity to improve patient safety, drive efficiencies and save time for frontline staff. In this blog, Kenny Fraser, CEO of Triscribe, explains why we need to deliver quick, low-cost improvement using modern, open source software tools and techniques. We don’t need schemes and standards or metrics and quality control. The most important thing is to build software for the needs and priorities of frontline pharmacists, doctors and nurses.
  17. Content Article
    What exactly is machine learning and how is it being used in healthcare? Are machines always better than a person? How do we know? In this interview, Patient Safety managing editor, Caitlyn Allen asks these questions of artificial intelligence healthcare researcher Dr Avishek Choudhury.
  18. Content Article
    When people don't feel their actions will make a difference because of the vast scale of a problem, they are less likely to act, and this has implications for attempts to improve patient safety and reduce avoidable harm. In this article, Brian Resnick, science and health editor at Vox, interviews psychologist Paul Slovic, who has been researching human responses to risk and compassion since the 1970s. They discuss the psychological impact of large numbers of people on our ability and willingness to respond compassionately and to act on that compassion. They look at Slovic's research into the concepts of psychic numbing and the prominence effect, focusing on the global refugee crisis and why individuals and governments fail to act in the face of immense suffering.
  19. Content Article
    The widespread adoption of effective hybrid closed loop systems would benefit people living with type 1 diabetes by improving the amount of time spent within target blood glucose range. Hybrid closed loop systems (also known as an 'artificial pancreas') typically utilise simple control algorithms to select the best insulin dose for maintaining blood glucose levels within a healthy range. Online reinforcement learning has been utilised as a method for further enhancing glucose control in these devices. Previous approaches have been shown to reduce patient risk and improve time spent in the target range when compared to classical control algorithms, but are prone to instability in the learning process, often resulting in the selection of unsafe actions. This study in the Journal of Biomedical Informatics presents an evaluation of offline reinforcement learning for developing effective dosing policies without the need for potentially dangerous patient interaction during training.
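    The core idea of offline reinforcement learning, learning a policy purely from a fixed log of past transitions with no live interaction, can be shown with a toy tabular Q-learning loop. The states, actions and transitions below are invented for illustration; the study's actual algorithms and safety machinery are far more sophisticated.

```python
import numpy as np

def offline_q_learning(transitions, n_states, n_actions,
                       gamma=0.9, alpha=0.1, epochs=50):
    """Tabular Q-learning over a fixed logged dataset of
    (state, action, reward, next_state) tuples. The agent never
    interacts with a live environment -- it only replays the log.
    Toy sketch; not the study's actual method."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        for s, a, r, s_next in transitions:
            # Bootstrapped target from the logged next state.
            target = r + gamma * Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])
    return Q
```

    In a dosing setting the appeal is exactly this separation: the risky exploration happened historically (in the logged data), so training itself exposes no patient to new, untested actions.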
  20. Content Article
    Healthcare systems rely on self-advocacy from service users to maintain the safety and quality of care. Systemic bias, service pressures and workforce issues often deny agency to patients at times when they need to have most control over the representation of their story. This drives diagnostic error, treatment delay or failure to treat important conditions. In maternal care, perinatal mental health and thrombosis are significant challenges. With funding from SBRI Healthcare, Ulster University and the Southern Health and Social Care Trust are developing an NLP-powered platform that will empower mothers to be more active agents in their perinatal care. Download the poster below.
  21. Content Article
    This article looks at the experience of Tammy Dobbs, who has cerebral palsy and requires extensive support from home carers to carry out daily tasks. In 2016, Tammy's care needs were reassessed by the state of Arkansas where she lives, and the hours of support she was eligible to receive were cut in half. The change in eligibility was due to a new state-approved algorithm that had calculated her support needs in a new way, in spite of the fact that there was no change to her level of need.  The situation caused Tammy much distress and resulted in drastic life changes. The article highlights the issues associated with the use of algorithms to determine need and allocate resources in health and social care. It also raises questions about what transparency means in an automated age and highlights concerns about people’s ability to contest decisions made by machines.
  22. Content Article
    Many AI models are being developed and applied to understand opioid use. However, authors of this paper, published in BMJ Innovations, found there is a need for these AI technologies to be externally validated and robustly evaluated to determine whether they can improve the use and safety of opioids.
  23. News Article
    A cervical cancer patient has been treated with the aid of artificial intelligence (AI) for the first time in the UK. Emma McCormick, 44, was treated at the St Luke's Cancer Centre in Guildford, Surrey. The Royal Surrey NHS Foundation Trust treated Ms McCormick, who is from West Sussex, using adaptive radiotherapy. The AI technology uses daily CT scans to target the specific areas that need radiotherapy. This helps to avoid damage to healthy tissue and limit side-effects, the hospital said. Patients are given treatments lasting between 20 and 25 minutes, although Ms McCormick's was slightly longer as she was the first patient, a hospital spokesman said. Ms McCormick received five AI-guided treatments per week for five weeks before having a further two weeks of brachytherapy. She said: "If it works for me, and they get information from me, it can help somebody else. It definitely worked and did what it was meant to do and so hopefully that helps others." Dr Alex Stewart, who treated Ms McCormick, said one of the benefits of the treatment was that it allowed for more precision, meaning there were fewer side-effects for the patients. Read full story Source: BBC News, 21 January 2022
  24. News Article
    Researchers are to use artificial intelligence (AI) in the hope of reducing risk to pregnant black women. Loughborough University experts are to work with the Healthcare Safety Investigation Branch (HSIB) to identify patterns in its recent investigations. Research has suggested black women are more than four times more likely to die in pregnancy or childbirth than white women in the UK. The researchers plan to look at more than 600 of HSIB's recent investigations into adverse outcomes during pregnancy and birth. The research team will develop a machine learning system capable of identifying factors, based on a set of codes, that contribute to harm during pregnancy and birth experienced by black families. These include biological factors, such as obesity or birth history; social and economic factors such as language barriers and unemployment; and the quality of care and communication with the mother. It will look at how these elements interact with and influence each other, and help researchers design ways to improve the care of black mothers and babies. Dr Patrick Waterson, from the university, who is helping to lead the project, said: "Ultimately, we believe the outcomes from our research have the potential to transform the NHS's ability to reduce maternal harm amongst mothers from black ethnic groups." He added that in the longer term, the research could improve patient safety for all mothers. Read full story Source: BBC News, 17 November 2021
  25. News Article
    Artificial intelligence (AI) systems being developed to diagnose skin cancer run the risk of being less accurate for people with dark skin, research suggests. The potential of AI has led to developments in healthcare, with some studies suggesting image recognition technology based on machine learning algorithms can classify skin cancers as successfully as human experts. NHS trusts have begun exploring AI to help dermatologists triage patients with skin lesions. But researchers say more needs to be done to ensure the technology benefits all patients, after finding that few freely available image databases that could be used to develop or “train” AI systems for skin cancer diagnosis contain information on ethnicity or skin type. Those that do have very few images of people with dark skin. Dr David Wen, first author of the study from the University of Oxford, said: “You could have a situation where the regulatory authorities say that because this algorithm has only been trained on images in fair-skinned people, you’re only allowed to use it for fair-skinned individuals, and therefore that could lead to certain populations being excluded from algorithms that are approved for clinical use." “Alternatively, if the regulators are a bit more relaxed and say: ‘OK, you can use it [on all patients]’, the algorithms may not perform as accurately on populations who don’t have that many images involved in training.” That could bring other problems including risking avoidable surgery, missing treatable cancers and causing unnecessary anxiety, the team said. Read full story Source: The Guardian, 9 November 2021