Clive Flashman

Administrators

Posts posted by Clive Flashman

  1. I do often wonder, amid the hype to join the AI bandwagon, where every new solution seems to have 'AI inside', how much of it really is AI?
    In the early days of AI (5-7 years ago), new solutions required expert clinicians to spend 6-9 months training them, which of course reduced the amount of time they could then give to their patients. With ML, is this really changing? Can an AI that improves through ML be as good as one that is 'taught' by clinicians? Is it as safe?

  2. Below is an interesting blog from the mum of a cancer patient detailing the horrendous time they had dealing with the fragmented information held by various clinicians about their child's health. The 'Cancer mum' makes a very good case for a single longitudinal record, and explains how this would contribute to patient safety.

    https://cancermumblog.wordpress.com/2019/04/19/cancer-mum-so-why-do-patients-and-carers-want-health-records/ 

    We often hear about the safety issues with the use of electronic health records, so this is an interesting opposing perspective.

    What do you think?

  3. I’ve just been listening to the 10 o’clock news tonight, which has been covering the report into Paterson, the breast surgeon who may have needlessly operated on thousands of women.

    One of the recommendations is that patient safety should be a ‘top priority’ across the NHS (again!!). 

    Another interesting recommendation is that the NHS (and private healthcare providers) need to be better at sharing information about medical staff. Currently, medical staff can be investigated in one hospital and then move to another without any of that history following them. Maybe we need some sort of central system, like Doctify for employers?

    What do you think?

  4. Thanks for drawing my attention to this @HelenH. It's interesting that among the aims of the new AI Lab are two that are loosely linked to safety of some kind, albeit not patient safety as we would normally view it. Safety is mentioned in the Exec Summary as being of relevance to the section on Governance (Chapter 3).

    In Chapter 2, the limitations of the current governance framework, identified through a survey, are said to be "perhaps limiting innovation and potentially risking patient safety". I'm not clear where the evidence comes from to support that latter point, or whether it is being loosely associated with the perceived issues around the collection, management and use of patient data. That is just one aspect of patient safety - there is a lot more that is not being considered here.

    In Chapter 3, from the very beginning, one of the reasons given for the need for ethics and regulation is patient safety. The algorithmic considerations also mention safety, which in my view is much more fundamental to the whole discussion around patient safety and AI. There should be an awful lot more written about this in Chapter 3; the omission is significant.

    Perhaps one of the most telling statements is in the appended case study on Genomics England: "...safety is crucial and it is vital that the system is able to guarantee the integrity in the diagnoses and treatment plans which are delivered to patients, whether these are facilitated by AI, or more traditional methods."
