Summary
Healthcare is where the "most exciting" opportunities for artificial intelligence (AI) lie, an influential MP has said, but it is also an area that illustrates the technology's major risks.
Greg Clark, chairman of the Commons Science, Innovation and Technology Committee (SITC), said the wider adoption of AI in healthcare would have a "positive impact", but urged policy makers to "consider the risks to safety". He said: "If we're to gain all the advantages, we have to anticipate the risks and put in place measures to safeguard against that."
An interim report published by the SITC sets out the Committee's findings from its inquiry so far, along with the twelve essential challenges that AI governance must meet if public safety and confidence in AI are to be secured.
Content
The twelve challenges of AI governance that must be addressed by policymakers:
- The Bias challenge: AI can introduce or perpetuate biases that society finds unacceptable.
- The Privacy challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
- The Misrepresentation challenge: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
- The Access to Data challenge: The most powerful AI needs very large datasets, which are held by few organisations.
- The Access to Compute challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
- The Black Box challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
- The Open-Source challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
- The Intellectual Property and Copyright challenge: Some AI models and tools make use of other people's content: policy must establish the rights of the originators of this content, and these rights must be enforced.
- The Liability challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
- The Employment challenge: AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption.
- The International Coordination challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
- The Existential challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.
