
Welcome - AI - The challenge of adopting safe solutions




The wonderful team here at Patient Safety Learning think we need to talk about AI and the impact it can have on healthcare.

So I'll be putting up a few topic starters in here but feel free to use this space and start your own conversations.

At the limit, AI means two things. It means software that can change its behaviour without explicit instruction, and it means the answer can sometimes flip between a 'yes' and a 'no' for the same question.
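To make that second point concrete, here is a toy sketch in Python. It is purely illustrative and not based on any real clinical system; the model, its score, and the noise term are all invented to show how a stochastic component near a decision threshold produces inconsistent answers:

```python
# Toy illustration: a probabilistic model whose score sits near the
# decision threshold can answer the same question differently on
# repeat runs. Everything here is invented for illustration.
import random

def toy_model_score(features):
    """Stand-in for a trained model; returns a probability near 0.5."""
    return 0.5 + 0.02 * sum(features)

def predict(features, threshold=0.5):
    # Many deployed models include stochastic components (sampling,
    # dropout at inference, retraining on new data), so the score can
    # cross the threshold on one run and not on another.
    score = toy_model_score(features) + random.gauss(0.0, 0.05)
    return "yes" if score >= threshold else "no"

print([predict([1]) for _ in range(10)])
# e.g. ['yes', 'no', 'yes', 'yes', 'no', 'yes', 'no', 'yes', 'yes', 'no']
```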

So how do we build safe, dependable applications that incorporate AI?

How do we test them?

How do we approve them?

In the pandemic there is a rush to deploy solutions. That change of pace is commendable, but at what cost?

We should have authentic conversations here and I'm looking forward to discussing the topics above and many more with you.

Richard Jones


Really good questions! Looking forward to sharing insights. Very excited about our collaboration @Richard Jones @Clive Flashman

I'm really keen to explore how we know that AI is safe, and also how AI can make us safer. We know that diagnostic error is a huge issue, and frankly not one that is getting enough attention in the patient safety community. Too big and scary an issue?

This is a useful introduction and refers to the monumentally good (and scary) 2015 IOM report on diagnostic errors: https://psnet.ahrq.gov/primer/diagnostic-errors

Definitely worth finding out what's going on in this area and what more is needed.

Brilliant to have you on board. Let's get the LinkedIn groups engaged too.

Helen


I do often wonder, amid the hype to join the AI bandwagon, where every new solution seems to have 'AI inside', how much of it really is AI?
In the early days of AI (5-7 years ago), new solutions required expert clinicians to spend 6-9 months training them, which of course impacted the amount of time they could then give to their patients. With machine learning (ML), is this really changing? Can an AI that improves through ML be as good as one that is 'taught' by clinicians? Is it as safe?


I think there is potential to develop scenarios far quicker and far more tailored to particular situations. For example, you can create AI-based images in bulk to show a clinician far more cases than they would normally see, and build systems that keep people up to date and up to scratch. You can build subtler cases in bulk to help discrimination between different presentations of an illness or disease. And you can create synthetic datasets to test medical software, building in whatever bias you need to truly test something by packing the data with suitable case profiles, where actual anonymised data may contain only a handful. So there's lots of potential.
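To sketch what 'packing the data with suitable case profiles' might look like in practice, here is a hypothetical Python example. The profile names, fields and proportions are all invented for illustration; a real project would define them clinically:

```python
# Hypothetical sketch: build a synthetic test set whose case mix is
# deliberately skewed, so subtle and rare presentations that anonymised
# real data might contain only a handful of times appear in bulk.
import random

# Invented case profiles mapping to a (low, high) severity range.
CASE_PROFILES = {
    "typical_presentation": (2.0, 5.0),
    "subtle_presentation":  (0.0, 1.0),   # easy to miss
    "rare_variant":         (3.0, 6.0),   # scarce in real data
}

def make_case(profile, rng):
    lo, hi = CASE_PROFILES[profile]
    return {"profile": profile,
            "severity": rng.uniform(lo, hi),
            "age": rng.randint(18, 95)}

def synthetic_dataset(mix, n, seed=0):
    """mix maps profile name -> desired proportion of the test set."""
    rng = random.Random(seed)
    profiles, weights = zip(*mix.items())
    return [make_case(rng.choices(profiles, weights)[0], rng)
            for _ in range(n)]

# Pack the set with subtle and rare cases, unlike real-world prevalence.
test_set = synthetic_dataset({"typical_presentation": 0.2,
                              "subtle_presentation": 0.4,
                              "rare_variant": 0.4}, n=1000)
```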

But true AI software can modify itself and will not always give the same answer, so we need to be careful about the application and also remember the basics. A surgeon once told me a story about a relative who had died, and their x-ray. He asked the doctor who had looked after his mother to comment on the x-ray, which they did: lots of comments on thumbprint marks and so on, yet they completely failed to notice that the name on the x-ray was not the relative's name. It wasn't their x-ray. So as smart as we get, we still need discipline, and people like PSL helping staff set the right standards, do the right thing, and be able to point out poor practice in a safe way.


Thanks for the positive endorsement. Of course, that leads to the ethics/governance question: if an AI makes an incorrect diagnosis, who takes ownership of that mistake? The clinician, the Trust where they work, or the developer/implementer of the AI?

Still lots of questions to be answered, but as you say, HUGE potential for improvement.


Safety is always a systems issue! We need to design safety into the development of new ways of working and be clear how to assess variance: to learn how to refine the product/process, to ensure effective and safe implementation, and to report and learn transparently, both when things go wrong and from good practice.

Are there patient safety design standards for software development?


I think it's important to consider how biases (research, implicit, etc.) can affect the success of AI, both in the results and in how they are interpreted.

Absolutely. There is also the current rush to apply things, which perhaps erodes some of the safety processes.

Your point is why I was involved in a project to deliver synthetic data, so that software could be tested against a dataset that would highlight the efficacy, or otherwise, of its results.
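For what it's worth, here is a rough sketch of how a synthetic set with known ground truth lets you measure efficacy directly. The `software_under_test` callable is an invented stand-in for whatever diagnostic component is being evaluated, not a real API:

```python
# Sketch: because we generated the cases, every one has a known ground
# truth, so efficacy (sensitivity/specificity) can be measured directly.
def evaluate(software_under_test, cases):
    tp = fp = tn = fn = 0
    for case in cases:
        predicted = bool(software_under_test(case["inputs"]))
        actual = case["ground_truth"]       # known by construction
        if predicted and actual:       tp += 1
        elif predicted and not actual: fp += 1
        elif actual:                   fn += 1
        else:                          tn += 1
    return {"sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan")}

# Minimal usage with a trivial stand-in rule for the software under test.
cases = [{"inputs": {"severity": 2.0}, "ground_truth": True},
         {"inputs": {"severity": 0.2}, "ground_truth": False}]
print(evaluate(lambda x: x["severity"] > 1.0, cases))
# {'sensitivity': 1.0, 'specificity': 1.0}
```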

