Richard Jones


Posts posted by Richard Jones

  1. The assurance part is very complex indeed.

    The difference between deterministic and non-deterministic AI is fascinating.  The non-deterministic kind is the greater challenge for regulation.  I don't envy those trying to come up with effective solutions.

    A simple Google Bard search about me suggests my MBA is from three different places in three different drafts.  None are correct.

  2. Projections indicate that there could be as much as 2,314 exabytes of new data generated in 2020.

    That’s 2,314 billion gigabytes of data.

    With a population of nearly 8 billion globally, that’s around 300 gigabytes of data per person per year.

    Is this realistic?

    How much of this data is being stored on phones and smartwatches, Fitbits etc.?

    So who has this data, and how useful is it when it sits in a commercial company’s silo and does not complement a health system’s own data?

    One simple truth: that volume of data requires collation, curation and contemplation (sorry, on an alliterative roll here)... but it really needs smart systems to convert it from data to wisdom.  Are we on the right path or are we drowning in the data?
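
    The back-of-the-envelope sums above can be checked in a few lines; the 2,314 exabyte and 8 billion figures are the ones quoted in the post, not independent data:

```python
# Sanity-check of the figures quoted above.
exabytes = 2_314                      # projected new data in 2020
gigabytes = exabytes * 1_000_000_000  # 1 exabyte = 1 billion gigabytes
population = 8_000_000_000            # roughly 8 billion people globally

per_person = gigabytes / population   # gigabytes per person per year
print(f"~{per_person:.0f} GB of new data per person per year")  # → ~289 GB
```

    So "around 300 gigabytes per person" checks out.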

  3. I think many of us in the industry are still wondering about access to data and who should have it.  NHS Digital do a great job of protecting access to health records, and speaking for one of the companies that has earned the right to access the national records, I can say it is a very rigorous process to maintain that privilege.

    More broadly, we are seeing companies get into trouble for using data from individual hospitals in the wrong way, non-anonymised records being shipped (by mistake) to a company, and other things that citizens in some countries (I'm looking at friends in Sweden) would find unacceptable.

    So who owns your data?  Are you happy for it to be sold or simply passed on to companies, or do you want the opposite end of the spectrum, where you have control over it and sharing beyond your direct health providers requires your consent (and maybe.. whisper it quietly.. payment)?

    There is no one right answer, but I'm fascinated by how we deal with data, and a swing-the-door-open policy that lets favoured companies access it willy-nilly doesn't seem like the smartest idea.  But maybe I'm wrong...

  4. I think there is potential to develop scenarios far quicker and far more tailored to particular situations.  So for example, you can create AI-based images in bulk to show a clinician far more cases than they would normally see, and build systems to keep people up to date and up to scratch.  You can build subtler cases in bulk to help discrimination between different presentations of an illness or disease.  You can create synthetic data sets to test medical software and build in whatever bias you need to truly test something, by packing the data with suitable case profiles where actual anonymised data may have only a handful.  So there's lots of potential.
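
    As a toy sketch of that "pack the data with suitable case profiles" idea — every name, label and prevalence here is invented for illustration, not any real tool or dataset:

```python
import random

def synthetic_cases(n, prevalence, rng):
    """Generate n labelled cases with a deliberately chosen disease prevalence."""
    return [{"id": i, "positive": rng.random() < prevalence} for i in range(n)]

# Real anonymised data might hold only a handful of positive cases;
# here we deliberately pack the synthetic test set with 40% positives.
rng = random.Random(42)
cases = synthetic_cases(1_000, prevalence=0.40, rng=rng)
positives = sum(c["positive"] for c in cases)
print(f"{positives} positive cases out of {len(cases)}")
```

    The point is control: the tester chooses the case-mix, rather than inheriting whatever bias the historical data happens to contain.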

    But true AI software can modify itself and will not always give the same answer... so we need to be careful about the application, and also remember the basics.  A surgeon told me a story about a relative who had died, and their x-ray.  He asked the doctor who had looked after his mother to comment on the x-ray, which they did: lots of comments on thumbprint marks and so on, but they completely failed to notice that the name on the x-ray was not the relative's name.  It wasn't her x-ray.  So as smart as we get, we still need discipline, and people like PSL helping staff set the right standards, do the right thing and be able to point out poor practice in a safe way.

  5. The wonderful team here at Patient Safety Learning think we need to talk about AI and the impact it can have on healthcare.

    So I'll be putting up a few topic starters in here but feel free to use this space and start your own conversations.

    At the limit, AI means two things: the software can change without instruction, and the answer to the same question can sometimes change between a 'yes' and a 'no'.
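
    A toy illustration of that second point — a made-up stochastic "model", not any real product — where the identical question can come back with a different answer on different runs:

```python
import random

def stochastic_answer(question, rng, p_yes=0.55):
    """A pretend model that samples its yes/no answer rather than computing it."""
    return "yes" if rng.random() < p_yes else "no"

rng = random.Random(1)
answers = {stochastic_answer("Is this scan abnormal?", rng) for _ in range(200)}
print(sorted(answers))  # both answers appear for the identical question
```

    Conventional software testing assumes the same input gives the same output; that assumption is exactly what breaks here.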

    So how do we build safe, dependable applications that incorporate AI?

    How do we test them?

    How do we approve them?

    In the pandemic there has been a rush to deploy solutions; the change of pace is commendable, but at what cost?

    We should have authentic conversations here and I'm looking forward to discussing the topics above and many more with you.

    Richard Jones

