Showing results for tags 'AI'.


Found 156 results
  1. Content Article
    This article looks at how Sheba Medical Center in Tel Aviv, one of the largest health systems in the region, has used artificial intelligence to turn around its patient safety statistics. In 2016, the Accelerate Redesign Collaborate Innovation Center at Sheba launched an AI solution called Aidoc to read CT scans. It is being used to predict stroke and pulmonary embolism more accurately, allowing healthcare professionals to offer preventative treatment more quickly than when CT scans are read purely manually.
  2. News Article
    Voices offer lots of information. Turns out, they can even help diagnose an illness — and researchers in the USA are working on an app for that. The National Institutes of Health is funding a massive research project to collect voice data and develop an AI that could diagnose people based on their speech. Everything from your vocal cord vibrations to breathing patterns when you speak offers potential information about your health, says laryngologist Dr. Yael Bensoussan, the director of the University of South Florida's Health Voice Center and a leader on the study. "We asked experts: Well, if you close your eyes when a patient comes in, just by listening to their voice, can you have an idea of the diagnosis they have?" Bensoussan says. "And that's where we got all our information." Someone who speaks low and slowly might have Parkinson's disease. Slurring is a sign of a stroke. Scientists could even diagnose depression or cancer. The team will start by collecting the voices of people with conditions in five areas: neurological disorders, voice disorders, mood disorders, respiratory disorders and pediatric disorders like autism and speech delays. This isn't the first time researchers have used AI to study human voices, but it's the first time data will be collected on this level — the project is a collaboration between USF, Cornell and 10 other institutions. The ultimate goal is an app that could help bridge gaps in access for rural and underserved communities, by helping general practitioners refer patients to specialists. Long term, iPhones or Alexa could detect changes in your voice, such as a cough, and advise you to seek medical attention. Read full story Source: NPR, 10 October 2022
  3. News Article
    Technology and healthcare companies are racing to roll out new tools to test for and eventually treat the coronavirus epidemic spreading around the world. But one sector that is holding back is the makers of artificial-intelligence-enabled diagnostic tools, increasingly championed by companies, healthcare systems and governments as a substitute for routine doctor-office visits. In theory, such tools, sometimes called “symptom checkers” or healthcare bots, sound like an obvious short-term fix: they could be used to help assess whether someone has Covid-19, the illness caused by the novel coronavirus, while keeping infected people away from crowded doctor’s offices or emergency rooms where they might spread it. These tools vary in sophistication. Some use a relatively simple process, like a decision tree, to provide online advice for basic health issues. Other services say they use more advanced technology, like algorithms based on machine learning, that can diagnose problems more precisely. But some digital-health companies that make such tools say they are wary of updating their algorithms to incorporate questions about the new coronavirus strain. Their hesitancy highlights both how little is known about the spread of Covid-19 and the broader limitations of healthcare technologies marketed as AI in the face of novel, fast-spreading illnesses. Some companies say they don’t have enough data about the new coronavirus to plug into their existing products. London-based symptom-checking app Your.MD Ltd. recently added a “coronavirus checker” button that leads to a series of questions about symptoms. But it is based on a simple decision tree. The company said it won’t update the more sophisticated technology underpinning its main system, which is based on machine learning. “We made a decision not to do it through the AI because we haven’t got the underlying science,” said Maureen Baker, Chief Medical Officer for Your.MD.
She said it could take 6 to 12 months before sufficient peer-reviewed scientific literature becomes available to help inform the redesign of algorithms used in today’s more advanced symptom checkers. Read full story Source: The Wall Street Journal, 29 February 2020
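The “decision tree” approach described above can be illustrated with a short sketch. This is a purely hypothetical triage tree — the symptoms, questions and advice strings are all assumptions for illustration, not Your.MD's actual logic.

```python
# A minimal sketch of a rule-based "decision tree" symptom checker of
# the kind described above. Everything here is illustrative: the
# symptom names, ordering and advice are hypothetical.

def triage(symptoms: dict) -> str:
    """Walk a fixed decision tree over yes/no symptom answers."""
    if symptoms.get("difficulty_breathing"):
        return "Seek urgent medical attention."
    if symptoms.get("fever") and symptoms.get("persistent_cough"):
        return "Self-isolate and contact a health service by phone."
    if symptoms.get("fever") or symptoms.get("persistent_cough"):
        return "Stay at home and monitor your symptoms."
    return "No action needed; check again if symptoms develop."

print(triage({"fever": True, "persistent_cough": True}))
# prints "Self-isolate and contact a health service by phone."
```

Because every path through such a tree is an explicit, auditable rule, it can be updated by hand without new training data — which helps explain why a company might add a coronavirus checker as a decision tree while declining to retrain its machine-learning system.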
  4. News Article
    In his latest blog post, Matthew Gould, CEO of NHSX, has reiterated the potential AI has to reduce the burden on the NHS by improving patient outcomes and increasing productivity. However, he said there are gaps in the rules that govern the use of AI and a lack of clarity on both standards and roles. These gaps mean there is a risk of using AI that is unsafe and that NHS organisations will delay employing AI until all the regulatory gaps have been filled. Gould says, “The benefits will be huge if we can find the sweet spot” that allows trust to be maintained whilst creating the freedom for innovation, but warns that we are not in that position yet. At the end of January, the CEOs and heads of 12 regulators and associated organisations met to work through these issues and discuss what was required to ensure innovation-friendly processes and regulations are put in place. They agreed there needs to be clarity of role for these organisations, including the MHRA being responsible for regulating the safety of AI systems; the Health Research Authority (HRA) for overseeing the research to generate evidence; NICE for assessing whether new AI solutions should be deployed; and the CQC for ensuring providers are following best practice. Read the full blog Source: Techradar, 13 February 2020
  5. News Article
    London doctors are using artificial intelligence to predict which patients with chest pains are at greatest risk of death. A trial at Barts Heart Centre, in Smithfield, and the Royal Free Hospital, in Hampstead, found that poor blood flow was a “strong predictor” of heart attack, stroke and heart failure. Doctors used computer programmes to analyse images of the heart from more than 1,000 patients and cross-referenced the scans with their health over the next two years. The computers were “taught” to search for indicators of future “adverse cardiovascular outcomes” and are now used in real time to help doctors identify who is most at risk. Read full story Source: Evening Standard, 15 February 2020
  6. News Article
    In a keynote speech at the Healthtech Alliance on Tuesday, Secretary of State for Health and Social Care, Matt Hancock, stressed how important adopting technology in healthcare is and why he believes it is vital for the NHS to move into the digital era. “Today I want to set out the future for technology in the NHS and why the techno-pessimists are wrong. Because for any organisation to be the best it possibly can be, rejecting the best possible technology is a mistake.” Listing examples from endless paperwork to old systems resulting in wasted blood samples, Hancock highlighted why, in order to retain staff and see healthcare thrive, embracing technology must be a priority. He also announced a £140m Artificial Intelligence (AI) competition to speed up testing and delivery of potential NHS tools. The competition will cover all stages of the product cycle, from proof of concept to real-world testing to initial adoption in the NHS. Examples of AI use currently being trialled were set out in the speech, including using AI to read mammograms, predict and prevent the risk of missed appointments and AI-assisted pathways for same-day chest X-ray triage. Tackling the issue of scalability, Hancock said, “Too many good ideas in the NHS never make it past the pilot stage. We need a culture that rewards and incentivises adoption as well as invention.” Read full speech
  7. News Article
    Artificial intelligence can diagnose brain tumours more accurately than a pathologist in a tenth of the time, a study has shown. The machine-learning technology was marginally more accurate than a traditional diagnosis made by a pathologist, by just 1%, but the results were available in less than 2 minutes and 30 seconds, compared with 20 to 30 minutes by a pathologist. The study, published in Nature Medicine, demonstrates the speed and accuracy of AI diagnosis for brain surgery, allowing surgeons to detect and remove otherwise undetectable tumour tissue. Daniel Orringer, an Associate Professor of Neurosurgery at New York University's Grossman School of Medicine and a senior author, said: “As surgeons, we’re limited to acting on what we can see; this technology allows us to see what would otherwise be invisible to improve speed and accuracy in the [operating theatre] and reduce the risk of misdiagnosis." “With this imaging technology, cancer operations are safer and more effective than ever before.” Read full story Source: The Independent, 6 January 2020
  8. News Article
    Artificial intelligence is more accurate than doctors in diagnosing breast cancer from mammograms, a study in the journal Nature suggests. An international team, including researchers from Google Health and Imperial College London, designed and trained a computer model on X-ray images from nearly 29,000 women. The algorithm outperformed six radiologists in reading mammograms. AI was still as good as two doctors working together. Unlike humans, AI is tireless. Experts say it could improve detection. Sara Hiom, director of cancer intelligence and early diagnosis at Cancer Research UK, told the BBC: "This is promising early research which suggests that in future it may be possible to make screening more accurate and efficient, which means less waiting and worrying for patients, and better outcomes." Read full story Source: BBC News, 2 January 2020
  9. News Article
    Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots. IBM boasted that its AI could “outthink cancer.” Others say computer systems that read X-rays will make radiologists obsolete. Yet many health industry experts fear AI-based products won’t be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra “fail fast and fix it later,” is putting patients at risk and that regulators aren’t doing enough to keep consumers safe. Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford’s Center for Biomedical Ethics. Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need. “It’s only a matter of time before something like this leads to a serious health problem,” said Steven Nissen, chairman of cardiology at the Cleveland Clinic. Read full story Source: Scientific American, 24 December 2019
  10. News Article
    MedAware, a developer of AI-based patient safety solutions, has announced the publication of a study by The Joint Commission Journal on Quality and Patient Safety, validating both the significant clinical impact and anticipated ROI of MedAware's machine learning-enabled clinical decision support platform designed to prevent medication-related errors and risks. The study analysed MedAware's clinical relevance and accuracy and estimated the platform's direct cost savings for adverse events potentially prevented in Massachusetts General and Brigham and Women's Hospitals' outpatient clinics. If the system had been operational, the estimated direct cost savings of the avoidable adverse events would have been more than $1.3 million when extrapolating the study's findings to the full patient population. Dr David Bates, study co-author, Professor at Harvard Medical School, and Director of the Center for Patient Safety Research & Practice at Brigham and Women's Hospital, said: "Because it is not rule-based, MedAware represents a paradigm shift in medication-related risk mitigation and an innovative approach to improving patient safety." Read full story Source: CISION PR Newswire, 16 December 2019
  11. News Article
    Royal Cornwall Hospital has deployed an artificial intelligence (AI) tool that allows clinicians to view case videos safely and securely. Touch Surgery Enterprise enables automatic processing and viewing of surgical videos for clinicians and their teams without compromising sensitive patient data. These videos can be accessed via mobile app or web shortly after the operation to encourage self-reflection, peer review and improve preoperative preparation. James Clark, consultant upper gastrointestinal and bariatric surgeon at the trust, said: “Having seamless access to my surgical videos has had an immense impact on my practice both in terms of promoting patient safety and for educating the next generation of surgeons." Read full story Source: Digital Health, 28 November 2019
  12. News Article
    East Kent Hospitals University NHS Foundation Trust has adopted artificial intelligence (AI) to test the health of patients’ eyes. In collaboration with doctors at the trust, the University of Kent has developed AI computer software able to detect signs of eye disease. Patients will benefit from a machine-based method that compares new images of the eye with previous patient images to monitor clinical signs and notify the doctor if their condition has worsened. Nishal Patel, an Ophthalmology Consultant at the Trust and teacher at the University said: “We are seeing more and more people with retinal disease and machines can help with some of the capacity issues faced by our department and others across the country." “We are not taking the job of a doctor away, but we are making it more efficient and at the same time helping determine how artificial intelligence will shape the future of medicine. By automating some of the decisions, so that stable patients can be monitored and unstable patients treated earlier, we can offer better outcomes for our patients.” Read full story Source: National Health Executive, 22 November 2019
  13. Content Article
    Huge numbers of patients suffer avoidable harm in US hospitals each year as a result of unsafe care. In this blog, published in the Harvard Business Review, the authors argue that these numbers could be greatly reduced by taking four actions: make patient safety a top priority in hospitals’ practices and cultures, establish a National Patient Safety Board, create a national patient and staff reporting mechanism, and turn on EHRs’ machine learning systems that can alert staff to risky conditions.
  14. Content Article
    Artificial intelligence systems for healthcare, like any other medical device, have the potential to fail. In this article, published in The Lancet: Digital Health, the authors recommend a medical algorithmic audit framework as a tool that can be used to better understand the weaknesses of an artificial intelligence system and put in place mechanisms to mitigate their impact. They propose that this framework should be the joint responsibility of users and developers who can collaborate to ensure patient safety and correct performance of the system in question.
  15. Content Article
    Gender bias in healthcare is a well-recognised issue. From diagnosis to drug development and treatment, the modern healthcare system has been shown to advantage men over women. Responsibly designed artificial intelligence (AI) and machine learning algorithms have the potential to overcome gender bias in medicine. However, if machine learning methods are implemented without careful thought and consideration they can lead to the perpetuation and even accentuation of existing biases. How can we develop technology in a way that prevents rather than perpetuates bias? This blog from Babylon highlights 4 key principles that can help.
  16. Content Article
    Medication errors can occur at any point in the system for prescribing, dispensing and administering drugs in the NHS – and can often be the result of human errors creeping in as burned out staff misread or miscalculate the amount needed. This article in the Health Services Journal examines how closed loop medication management systems can improve patient safety by ensuring patients are prescribed the right dosage of the right medications. The author speaks to Islam Elkonaissi, former lead pharmacist for cancer services in Cambridge, about the importance of well-planned implementation and bridging the gap between IT specialists and healthcare workers to make sure that the potential for communication errors is minimised. They also discuss the value of the huge amounts of data AI systems can collect, which in turn makes the systems more precise and accurate.
  17. Content Article
    A key part of healthcare digital transformation is the development and adoption of artificial intelligence technologies. This article, published in BMJ Health & Care Informatics, considers how human factors and ergonomic principles can be applied to the use of artificial intelligence in healthcare.
  18. Content Article
    Skin cancer is one of the most common cancers worldwide, with one in five people in the US expected to receive a skin cancer diagnosis during their lifetime. Detecting and treating skin cancers early is key to improving survival rates. This blog for The Medical Futurist looks at the emergence of skin-checking algorithms and how they will assist dermatologists in swift diagnosis. It reviews research into the effectiveness of algorithms in detecting cancer, and examines the issues of regulation, accessibility and the accuracy of smartphone apps.
  19. Content Article
    Failure to attend scheduled hospital appointments disrupts clinical management and consumes resource estimated at £1 billion annually in the UK NHS alone. Accurate stratification of absence risk can maximise the yield of preventative interventions. The wide multiplicity of potential causes, and the poor performance of systems based on simple, linear, low-dimensional models, suggests complex predictive models of attendance are needed. In this paper, Nelson et al. quantify the effect of using complex, non-linear, high-dimensional models enabled by machine learning.
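The risk stratification described above can be sketched in miniature. Everything below is an assumption for illustration — the features, the synthetic data and the model are invented — and the simple, linear logistic model shown here is precisely the kind of low-dimensional baseline that Nelson et al. argue is outperformed by complex, non-linear machine-learning models.

```python
import math
import random

random.seed(0)

# Hypothetical features per appointment: (lead_time_days, prior_no_shows).
# The synthetic outcome makes non-attendance more likely with long lead
# times and a history of missed appointments -- an illustrative assumption.
def make_example():
    lead, prior = random.randint(1, 60), random.randint(0, 4)
    p_true = 1 / (1 + math.exp(-(0.05 * lead + 0.9 * prior - 3.5)))
    return (lead, prior), random.random() < p_true

data = [make_example() for _ in range(2000)]

# Fit a logistic model by stochastic gradient descent -- the simple,
# linear, low-dimensional baseline the paper compares against.
w0 = w1 = b = 0.0
lr = 0.001
for _ in range(50):
    for (lead, prior), missed in data:
        p = 1 / (1 + math.exp(-(w0 * lead + w1 * prior + b)))
        err = p - missed
        w0 -= lr * err * lead
        w1 -= lr * err * prior
        b -= lr * err

def risk(lead, prior):
    """Predicted probability that the appointment will be missed."""
    return 1 / (1 + math.exp(-(w0 * lead + w1 * prior + b)))

# Rank upcoming appointments by predicted risk so that preventative
# interventions target the patients most likely not to attend.
print(f"long-notice repeat non-attender: {risk(55, 3):.2f}")
print(f"short-notice first appointment:  {risk(5, 0):.2f}")
```

Ranking appointments by predicted risk lets preventative interventions (reminders, follow-up calls) be targeted where the expected yield is highest — which is the point of accurate stratification the paper makes.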
  20. Content Article
    Artificial intelligence (AI) is increasingly being used in medicine to help with the diagnosis of diseases such as skin cancer. To be able to assist with this, AI needs to be ‘trained’ by looking at data and images from a large number of patients where the diagnosis has already been established, so an AI programme depends heavily upon the information it is trained on. This review, published in The Lancet Digital Health, looked at all freely accessible sets of data on skin lesions around the world.
  21. Content Article
    The Healthcare Safety Investigation Branch (HSIB) identified a patient safety risk caused by delays in diagnosing lung cancer. Lung cancer is the third most common cancer diagnosed in England, but accounts for the most deaths. Two-thirds of patients with lung cancer are diagnosed at an advanced stage of the disease when curative treatment is no longer possible, a fact which is reflected in some of the lowest five-year survival rates in Europe. Chest X-ray is the first test used to assess for lung cancer, but about 20% of lung cancers will be missed on X-rays. This results in delayed diagnosis that will potentially affect a patient’s prognosis. The HSIB investigation reviewed the experience of a patient who saw their GP multiple times and had three chest X-rays where the possible cancer was not identified. This resulted in an eight-month delay in diagnosis and potentially limited the patient’s treatment options.
  22. Content Article
    This new book by Professor Harold Thimbleby of Swansea University tells stories of widespread problems with digital healthcare and explores how they can be overcome. "The stories and their resolutions will empower patients, clinical staff and digital developers to help transform digital healthcare to make it safer and more effective."
  23. Content Article
    The Chartered Institute of Ergonomics & Human Factors (CIEHF) have published a new white paper intended to promote systems thinking among those who develop, regulate, procure, and use AI applications in healthcare, and to raise awareness of the role of people using or affected by AI.
  24. Content Article
    Patient safety and digital experts have given their views on immediate digital priorities that could make a significant difference in the NHS.
  25. Content Article
    In his newsletter today (The Top 10 Dangers of Digital Health), the medical futurist, Bertalan Meskó, raises some very topical questions about the dangers of digital health. As a huge advocate of the benefits of digital health, I am aware of most of these but tend to downplay the negative aspects as I generally believe that in this domain the good outweighs the bad. However, as I was reading his article, I realised that it was written very much from the perspective of a clinician and, to some extent, a healthcare organisation too. The patient perspective was included but not from a patient safety angle. Many of the issues that he raises do have significant patient safety issues associated with them which I’d like to share in this blog.