1. Regulating adaptive AI algorithms
Where an AI tool quickly adapts to reflect its environment and the context in which it operates, it may "reinforce those harmful biases such as discriminating based on one's ethnicity and/or gender". Such biases would further exacerbate existing health inequalities and place certain patients at a disadvantage. It is therefore important that the ground rules for these AI tools include firm parameters that prioritise patient safety, much as Asimov's Zeroth Law provides that "a robot may not harm humanity, or, by inaction, allow humanity to come to harm".
During the initial impact of the COVID-19 pandemic, the Government recognised that a key enabler of the response would be increasing capacity within the NHS, ensuring that enough acute beds were available to cope with the rising tide of patients. An important policy priority has therefore been the safe discharge of patients back into their homes or, where appropriate, into placements with community providers. While pathways were already in place to accelerate this process, responding to the pandemic required a significant acceleration of hospital discharges.
Hospital discharges are complex. To en