Showing results for tags 'AI'.


Found 23 results
  1. News Article
    Technology and healthcare companies are racing to roll out new tools to test for and eventually treat the coronavirus epidemic spreading around the world. But one sector is holding back: the makers of artificial-intelligence-enabled diagnostic tools, increasingly championed by companies, healthcare systems and governments as a substitute for routine doctor-office visits. In theory, such tools, sometimes called “symptom checkers” or healthcare bots, sound like an obvious short-term fix: they could be used to help assess whether someone has Covid-19, the illness caused by the novel coronavirus, while keeping infected people away from crowded doctor’s offices or emergency rooms where they might spread it.

    These tools vary in sophistication. Some use a relatively simple process, like a decision tree, to provide online advice for basic health issues. Other services say they use more advanced technology, like algorithms based on machine learning, that can diagnose problems more precisely. But some digital-health companies that make such tools say they are wary of updating their algorithms to incorporate questions about the new coronavirus strain. Their hesitancy highlights both how little is known about the spread of Covid-19 and the broader limitations of healthcare technologies marketed as AI in the face of novel, fast-spreading illnesses.

    Some companies say they don’t have enough data about the new coronavirus to plug into their existing products. London-based symptom-checking app Your.MD Ltd. recently added a “coronavirus checker” button that leads to a series of questions about symptoms, but it is based on a simple decision tree. The company said it won’t update the more sophisticated technology underpinning its main system, which is based on machine learning. “We made a decision not to do it through the AI because we haven’t got the underlying science,” said Maureen Baker, Chief Medical Officer for Your.MD. She said it could take 6 to 12 months before sufficient peer-reviewed scientific literature becomes available to help inform the redesign of algorithms used in today’s more advanced symptom checkers.

    Read full story
    Source: The Wall Street Journal, 29 February 2020
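    To make the decision-tree approach mentioned above concrete, here is a minimal sketch of how such a rule-based checker works. The symptoms, thresholds and advice strings are invented for illustration and do not reflect Your.MD’s, or any other product’s, actual logic.

    ```python
    # Minimal sketch of a rule-based "symptom checker" decision tree.
    # Questions and advice are hypothetical examples, not any real product's logic.

    def triage(fever: bool, continuous_cough: bool, breathless: bool) -> str:
        """Walk a fixed decision tree and return triage advice."""
        if breathless:
            return "Seek urgent medical advice."
        if fever and continuous_cough:
            return "Self-isolate and use the online assessment service."
        if fever or continuous_cough:
            return "Stay at home and monitor your symptoms."
        return "No action needed; check again if symptoms develop."

    print(triage(fever=True, continuous_cough=True, breathless=False))
    ```

    Unlike a machine-learning model, every path through a tree like this can be reviewed by a clinician, which is why it can be updated quickly even when, as the article notes, the underlying science is not yet settled.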
  2. News Article
    In his latest blog post, Matthew Gould, CEO of NHSX, has reiterated the potential AI has to reduce the burden on the NHS by improving patient outcomes and increasing productivity. However, he said there are gaps in the rules that govern the use of AI, and a lack of clarity on both standards and roles. These gaps mean there is a risk of using AI that is unsafe, and that NHS organisations will delay employing AI until all the regulatory gaps have been filled. Gould says “the benefits will be huge if we can find the sweet spot” that allows trust to be maintained whilst creating the freedom to innovate, but warns that we are not in that position yet.

    At the end of January, the CEOs and heads of 12 regulators and associated organisations met to work through these issues and discuss what was required to ensure innovation-friendly processes and regulations are put in place. They agreed there needs to be clarity of roles for these organisations, including the MHRA being responsible for regulating the safety of AI systems; the Health Research Authority (HRA) for overseeing the research to generate evidence; NICE for assessing whether new AI solutions should be deployed; and the CQC for ensuring providers are following best practice.

    Read the full blog
    Source: Techradar, 13 February 2020
  3. News Article
    London doctors are using artificial intelligence to predict which patients with chest pains are at greatest risk of death. A trial at Barts Heart Centre, in Smithfield, and the Royal Free Hospital, in Hampstead, found that poor blood flow was a “strong predictor” of heart attack, stroke and heart failure. Doctors used computer programmes to analyse images of the heart from more than 1,000 patients and cross-referenced the scans with their health over the next two years. The computers were “taught” to search for indicators of future “adverse cardiovascular outcomes” and are now used in real time to help doctors identify who is most at risk.

    Read full story
    Source: Evening Standard, 15 February 2020
  4. News Article
    Artificial intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests. An international team, including researchers from Google Health and Imperial College London, designed and trained a computer model on X-ray images from nearly 29,000 women. The algorithm outperformed six radiologists in reading mammograms and was as good as two doctors working together. Unlike humans, AI is tireless. Experts say it could improve detection.

    Sara Hiom, director of cancer intelligence and early diagnosis at Cancer Research UK, told the BBC: “This is promising early research which suggests that in future it may be possible to make screening more accurate and efficient, which means less waiting and worrying for patients, and better outcomes.”

    Read full story
    Source: BBC News, 2 January 2020
  5. News Article
    MedAware, a developer of AI-based patient safety solutions, has announced the publication of a study in The Joint Commission Journal on Quality and Patient Safety validating both the significant clinical impact and the anticipated ROI of MedAware’s machine-learning-enabled clinical decision support platform, designed to prevent medication-related errors and risks. The study analysed MedAware’s clinical relevance and accuracy, and estimated the platform’s direct cost savings for adverse events potentially prevented in Massachusetts General and Brigham and Women’s Hospitals’ outpatient clinics. If the system had been operational, the estimated direct cost savings from the avoidable adverse events would have been more than $1.3 million when extrapolating the study’s findings to the full patient population.

    Dr David Bates, study co-author, Professor at Harvard Medical School, and Director of the Center for Patient Safety Research & Practice at Brigham and Women’s Hospital, said: “Because it is not rule-based, MedAware represents a paradigm shift in medication-related risk mitigation and an innovative approach to improving patient safety.”

    Read full story
    Source: CISION PR Newswire, 16 December 2019
  6. News Article
    Royal Cornwall Hospital has deployed an artificial intelligence (AI) tool that allows clinicians to view case videos safely and securely. Touch Surgery Enterprise enables automatic processing and viewing of surgical videos for clinicians and their teams without compromising sensitive patient data. These videos can be accessed via mobile app or web shortly after the operation to encourage self-reflection and peer review and to improve preoperative preparation.

    James Clark, consultant upper gastrointestinal and bariatric surgeon at the trust, said: “Having seamless access to my surgical videos has had an immense impact on my practice, both in terms of promoting patient safety and for educating the next generation of surgeons.”

    Read full story
    Source: Digital Health, 28 November 2019
  7. News Article
    East Kent Hospitals University NHS Foundation Trust has adopted artificial intelligence (AI) to test the health of patients’ eyes. In collaboration with doctors at the trust, the University of Kent has developed AI computer software able to detect signs of eye disease. Patients will benefit from a machine-based method that compares new images of the eye with previous patient images to monitor clinical signs and notify the doctor if their condition has worsened.

    Nishal Patel, an ophthalmology consultant at the trust and teacher at the university, said: “We are seeing more and more people with retinal disease, and machines can help with some of the capacity issues faced by our department and others across the country. We are not taking the job of a doctor away, but we are making it more efficient and at the same time helping determine how artificial intelligence will shape the future of medicine. By automating some of the decisions, so that stable patients can be monitored and unstable patients treated earlier, we can offer better outcomes for our patients.”

    Read full story
    Source: National Health Executive, 22 November 2019
  8. Content Article
    The use of artificial intelligence (AI) in patient care can offer significant benefits. However, there is a lack of independent evaluation of AI systems in use. This paper from Sujan et al., published in BMJ Health & Care Informatics, argues that consideration should be given to how AI will be incorporated into clinical processes and services. Human factors challenges that are likely to arise at this level include cognitive aspects (automation bias and human performance), handover and communication between clinicians and AI systems, situation awareness, and the impact on the interaction with patients. Human factors research should accompany the development of AI from the outset.
  9. Content Article
    This report covers research that has been conducted by NHSX and a great number of partners across the digital health ecosystem into:
      • What AI is and where it is being used.
      • How to govern AI.
      • How to protect patient safety.
      • How to support the workforce.
      • How to encourage adoption and spread.
    The results of this research ultimately lead us to the conclusion that the creation of the Lab will be essential if we are to capitalise on the opportunities identified, whilst mitigating the risks.
  10. News Article
    Artificial intelligence can diagnose brain tumours more accurately than a pathologist and in a tenth of the time, a study has shown. The machine-learning technology was marginally more accurate than a traditional diagnosis made by a pathologist, by just 1%, but the results were available in less than 2 minutes and 30 seconds, compared with 20 to 30 minutes for a pathologist. The study, published in Nature Medicine, demonstrates the speed and accuracy of AI diagnosis for brain surgery, allowing surgeons to detect and remove otherwise undetectable tumour tissue.

    Daniel Orringer, an Associate Professor of Neurosurgery at New York University’s Grossman School of Medicine and a senior author, said: “As surgeons, we’re limited to acting on what we can see; this technology allows us to see what would otherwise be invisible, to improve speed and accuracy in the [operating theatre] and reduce the risk of misdiagnosis. With this imaging technology, cancer operations are safer and more effective than ever before.”

    Read full story
    Source: The Independent, 6 January 2020
  11. Content Article
    CARS estimates the risk of death following emergency admission to medical wards using routinely collected vital signs and blood test data. The aim of the study was to elicit:
      • The views of healthcare practitioners (staff), service users and carers on the potential value, unintended consequences and concerns associated with CARS.
      • Practitioner views on the issues to consider before embedding CARS into routine practice.
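    For readers unfamiliar with risk scores of this kind, the sketch below shows the general shape of a model that maps routine observations to a probability of death. It is a logistic-regression-style illustration only; the features and coefficients are invented and bear no relation to the published CARS model.

    ```python
    import math

    # Illustrative sketch only: a logistic-regression-style risk score over
    # routinely collected observations. Features and coefficients are invented
    # for illustration and are NOT those of the actual CARS model.
    COEFFS = {
        "intercept": -6.0,
        "age_years": 0.04,
        "respiratory_rate": 0.08,
        "systolic_bp": -0.01,
        "urea_mmol_l": 0.10,
    }

    def risk_of_death(age_years, respiratory_rate, systolic_bp, urea_mmol_l):
        """Return an estimated probability of death in (0, 1)."""
        z = (COEFFS["intercept"]
             + COEFFS["age_years"] * age_years
             + COEFFS["respiratory_rate"] * respiratory_rate
             + COEFFS["systolic_bp"] * systolic_bp
             + COEFFS["urea_mmol_l"] * urea_mmol_l)
        return 1.0 / (1.0 + math.exp(-z))

    # Example: a 78-year-old with raised respiratory rate, low blood pressure
    # and raised urea gets a substantially elevated score.
    print(f"{risk_of_death(78, 24, 105, 12.5):.1%}")
    ```

    A score like this is only the starting point for the study above; the harder questions it explores are how staff, patients and carers interpret and act on the number.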
  12. Community Post
    Artificial intelligence is creating a lot of buzz in the US and around the world. This perspective from the US site AHRQ Patient Safety Net explores a range of issues that could affect the uptake of artificial intelligence systems in health care. What do hub members think? Are we destined to encounter HAL (from 2001: A Space Odyssey) or Samantha (from Her)?

    Emerging safety issues in artificial intelligence
  13. News Article
    In a keynote speech at the Healthtech Alliance on Tuesday, Secretary of State for Health and Social Care, Matt Hancock, stressed how important adopting technology in healthcare is and why he believes it is vital for the NHS to move into the digital era. “Today I want to set out the future for technology in the NHS and why the techno-pessimists are wrong. Because for any organisation to be the best it possibly can be, rejecting the best possible technology is a mistake.”

    Listing examples from endless paperwork to old systems resulting in wasted blood samples, Hancock highlighted why, in order to retain staff and see a thriving health service, embracing technology must be a priority. He also announced a £140m Artificial Intelligence (AI) competition to speed up testing and delivery of potential NHS tools. The competition will cover all stages of the product cycle, from proof of concept to real-world testing to initial adoption in the NHS. Examples of AI use currently being trialled were set out in the speech, including using AI to read mammograms, predict and prevent the risk of missed appointments, and provide AI-assisted pathways for same-day chest X-ray triage.

    Tackling the issue of scalability, Hancock said: “Too many good ideas in the NHS never make it past the pilot stage. We need a culture that rewards and incentivises adoption as well as invention.”

    Read full speech
  14. Content Article
    Speaking on 2 October at the Healthcare Excellence Through Technology conference, Heather Caudle and Ijeoma Azodo, both members of the Shuri Network, stressed the importance of diversity when developing new technologies like artificial intelligence (AI).
  15. News Article
    Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots. IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete. Yet many health industry experts fear AI-based products won’t be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk, and that regulators aren’t doing enough to keep consumers safe.

    Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford’s Center for Biomedical Ethics. Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

    “It’s only a matter of time before something like this leads to a serious health problem,” said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

    Read full story
    Source: Scientific American, 24 December 2019
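    The failure mode Cho describes, a model latching onto an incidental feature such as the scanner brand rather than the disease itself, can be reproduced in a few lines. The sketch below uses entirely synthetic data and the scikit-learn library; it illustrates the general phenomenon and is not a recreation of any study mentioned in the article.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic illustration of "shortcut learning": in the training hospital,
    # sicker patients happen to be scanned on machine brand B, so the model
    # learns the scanner, not the disease. All data here is invented.
    rng = np.random.default_rng(0)
    n = 2000
    disease = rng.integers(0, 2, n)
    signal = disease + rng.normal(0, 1.5, n)   # weak genuine disease signal
    brand_b = disease.astype(float)            # scanner brand perfectly confounded
    X_train = np.column_stack([signal, brand_b])
    model = LogisticRegression().fit(X_train, disease)

    # A new hospital scans every patient on brand A (brand_b = 0 for all),
    # so the shortcut disappears and performance collapses toward chance.
    signal_new = disease + rng.normal(0, 1.5, n)
    X_new = np.column_stack([signal_new, np.zeros(n)])
    print("training-hospital accuracy:", model.score(X_train, disease))
    print("new-hospital accuracy:     ", model.score(X_new, disease))
    ```

    The model scores close to 100% where the confound holds and falls back toward chance at the new site, which is exactly the pattern behind systems that “flop when deployed in a different facility”.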