AI standards need to be created to protect patient safety, experts warn

Appropriate methods and standards around artificial intelligence (AI) need to be created to protect patient safety, experts have said. Responding to the Government’s pledge of £250 million for a National AI Lab, Matthew Honeyman, researcher at The King’s Fund, said the NHS workforce needs to be equipped with digital skills if the benefits of new technologies are to be realised.

“AI applications are in development for many different use cases – from screening, to treatment, to admin work – there needs to be appropriate methods and standards developed for safe deployment and evaluation of these solutions as they enter the health system,” he told Digital Health.

Adam Steventon, Director of Data Analytics at the Health Foundation, said the commitment was a “positive step” but that technology needs to be driven by patient need and “not just for technology’s sake”. “Robust evaluation therefore needs to be at the heart of any drive towards greater use of technology in the NHS, so that technologies that are shown to be effective can be spread further, and patients protected from any potential harm,” he said.

Read full story

Source: Digital Health, 9 August 2019
