1. Regulating adaptive AI algorithms

Where an AI tool quickly adapts to reflect its environment and the context in which it operates, the AI may "reinforce those harmful biases such as discriminating based on one's ethnicity and/or gender". These will further exacerbate existing health inequalities and place certain patients at a disadvantage. It is important that the ground rules for these AI tools include firm parameters that seek to prioritise patient safety, a bit like Asimov's Zeroth Law: "a robot may not harm humanity, or, by inaction, allow humanity to come to harm".

2. Hacking medical devices remotely

The idea that hackers might target people's implantable cardiac devices was popularised in a 2012 episode of the US television drama 'Homeland', in which terrorists hacked a fictional vice president's pacemaker and killed him. It is not just VIPs (or VPs) who need to worry about this: potentially anyone with an implanted device could have it hacked and be held to ransom. Medical device manufacturers should take far more care over the security that they build into their devices to protect patients from unwarranted attacks. Frankly, when large healthcare organisations are procuring these types of devices, this is one of the key areas on which they should be interrogating their potential suppliers.

3. Privacy breaches by and on direct-to-consumer devices and services

This is a difficult one, because if we want digital systems to really understand us and provide advice or treatment personalised to us, then those digital tools must have access to our confidential medical data. However, privacy is still very much a high priority for most patients, and they (rightly) want to know what is happening to their data: who is using it, how long is it being held, and is it being passed on to third parties without the patient's explicit consent?

People often forget who they have granted access to their data and for what purpose, and sometimes stop using a digital tool without realising that all of their data is still being held (and possibly collected via an active API) by the tool's supplier. It would be helpful if our mobile phones and PCs could highlight:

a. when we shared sensitive data, with whom, and what data was shared;
b. a list of active APIs that are still sharing our data.

Data that is used for purposes other than those intended by the patient is potentially a safety risk to that patient and should be treated as such.

4. Ransomware attacks on hospitals

Yes, this is awful for the hospital, and yes, it may cost them money; however, let's not forget whose data has been stolen: the patients'! Are they sufficiently alerted to this, told what is happening, and given ways to mitigate any issues to them personally? In an ideal world they are, but in reality the hospital is probably in panic mode, and communicating transparently with patients is low down its priority list. As the Medical Futurist says: "The average patient should demand more security over their data" – but how do they do this? What can a single patient do to ensure that the hospitals that have stewardship over their data (not ownership, in my opinion) make it as secure as possible?

This brings me back to an idea that my sadly departed friend, Michael Seres, had many years ago. On each hospital exec team (not Board) there should be a Chief Patient Officer, whose job it is to push for patient interests in operational matters (which is why they shouldn't be a non-exec member of the Board). That is the person whose job it should be to hold their organisation to account over the security of their patients' data.
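Points 3 and 4 both come down to patients being able to see who holds their data and which feeds are still live. A minimal sketch of the kind of personal data-sharing register a phone or PC could keep (all class and field names here are hypothetical illustrations, not any real API):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SharingGrant:
    """One record of sensitive data shared with a third party."""
    recipient: str          # who received the data
    data_shared: list[str]  # which items were shared
    granted_on: date        # when consent was given
    active: bool = True     # is an API still pulling this data?

class SharingRegister:
    """A patient's personal audit log of who holds their data."""

    def __init__(self):
        self.grants: list[SharingGrant] = []

    def record(self, grant: SharingGrant) -> None:
        self.grants.append(grant)

    def active_feeds(self) -> list[SharingGrant]:
        """Grants where an API is still sharing our data (point b above)."""
        return [g for g in self.grants if g.active]

    def revoke(self, recipient: str) -> None:
        """Mark every grant to a recipient as inactive."""
        for g in self.grants:
            if g.recipient == recipient:
                g.active = False
```

For example, a patient who stops using a sleep-tracking app could revoke its grant and immediately see which feeds remain active, rather than the data quietly continuing to flow to the supplier.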
5. Technologies supporting self-diagnosis

Dr Google has been an issue for some years, and people's off-the-shelf devices that monitor their vital signs are not necessarily medical grade, nor do their users generally have the skill to interpret the outputs from them. However, doctors should embrace patients who are keen to manage their own chronic conditions and support them in doing so. This 'shared accountability' has to be the model for improved population health, and doctors not willing to work with their patients shouldn't have any.

6. Bioterrorism through digital health technologies

This one is a bit exotic, and certainly not a near-term risk compared with the other things described in the newsletter. However, in a world that is still dealing with a pandemic, and reliant on vaccines to bring some normality back into our everyday lives, the security of (for example) the vaccine supply chain is critical. What if a batch were intentionally sabotaged or its efficacy reduced in some way? In exactly the same way that medical products (especially implants) should be made as safe and secure as possible, the same is true for the medicines that we rely on.

7. AI not tested in a real-life clinical setting

The newsletter makes the case for issues related to how staff use the AI, but please: test this with patients first! Safety in use is critical, and only feedback involving patients will help developers optimise these digital tools to be as safe as possible.

8. Electronic medical records not being able to accommodate patient-obtained digital health data

This is a very personal issue for me. Why should my doctor have to send me for tests when I can give him/her perfectly reasonable data that I have gathered myself from a device that has been CE marked and approved by the FDA/MHRA etc.? Electronic medical record vendors are incredibly reluctant to allow anyone other than the authorised doctor to enter anything into a patient's record, and there are some good reasons for this.

However, I've long thought that there could be an annexe to the record that is patient-controlled, where they could enter a new address, add data from their own blood pressure device, or list the over-the-counter drugs or remedies that they are taking. That way, doctors would have an up-to-date and (hopefully) reliable set of data for a more informed discussion with their patient, and it could shorten the time between consultation and referral or treatment.

9. Face recognition cameras in hospitals

I'm less worried by this in principle; however, I am interested to know how the data generated will be used and what security surrounds it. If it is only used by the hospital to optimise patient flow, or to remotely detect symptoms that are then used to help patients either directly or indirectly, then fine. If it is shared with others for more sinister purposes, then I would be concerned.

10. Health insurance: Dr Big Brother

This is less relevant to the UK, where only 11% of us have private health insurance. Again, this boils down to who collects data on patients, for what purposes, whether explicit consent is gained from the patient to share their data, and how those third parties may use it. There are both negative and positive connotations to the gathering of a person's health data by their health insurance company, but given that insurers already ask for access to all GP and secondary care records, having access to health wearable data (as Vitality Health already does) is not a big step.

Conclusion

I still believe that the benefits of digital health outweigh the risks, but the risks outlined above are not inconsequential. Many of the negative aspects are predicated on poor management and control of patient data. One way this could be mitigated is to have one or more patient representatives at an exec (not non-exec) level who hold healthcare organisations to account over this important aspect of care provision.
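To make the patient-controlled annexe from point 8 concrete, here is a minimal sketch of a record split into a clinician-only core and a patient-editable annexe. This is an illustration under stated assumptions, not any real EMR vendor's data model; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AnnexeEntry:
    """A patient-supplied item: a home BP reading, an OTC remedy, a new address."""
    kind: str                            # e.g. "blood_pressure", "otc_medication", "address"
    value: str                           # e.g. "128/82"
    recorded_at: datetime
    source_device: Optional[str] = None  # e.g. a CE-marked home monitor

@dataclass
class PatientRecord:
    """Clinician-controlled record with a patient-controlled annexe."""
    patient_id: str
    clinical_notes: list[str] = field(default_factory=list)   # written by clinicians only
    annexe: list[AnnexeEntry] = field(default_factory=list)   # patient-editable section

    def add_annexe_entry(self, entry: AnnexeEntry) -> None:
        # Patients may write only to the annexe, never to clinical_notes,
        # which preserves the vendors' (reasonable) restriction on the core record.
        self.annexe.append(entry)
```

The design choice is simply that the two tiers share one record, so a doctor reviewing the core notes sees the patient's own readings alongside them without those readings ever being able to alter the authorised clinical content.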