
New report finds healthcare staff need support to use AI safely

19 May 2022

Frontline healthcare staff will need bespoke and specialised support before they will confidently use artificial intelligence (AI) in their clinical practice, a new report published today has found.

The NHS aims to be a world leader in the use of emerging technologies like AI that could support trusts to address the backlog in elective procedures and make a difference in helping to detect and manage conditions, such as cancer and cardiovascular disease, earlier.

A new report published today by Health Education England and the NHS AI Lab has found that if patients across the country are to benefit from AI, healthcare workers will need specialised support to use AI safely and effectively as part of clinical reasoning and decision-making.

The vast majority of clinicians are unfamiliar with AI technologies, and there is a risk that, without appropriate training and support, patients will not share equally in the benefits offered by AI.

The report calls for clinicians to be supported through training and education to manage potential conflicts between their own intuition or views about a patient’s condition and the information or recommendations provided by an AI system.

For instance, a clinician may accept an AI recommendation uncritically, perhaps because of time pressure or under-confidence in the clinical task, a tendency known as automation bias.

Deploying AI in a health and care setting will require changes in the ways that the workforce operates and interacts with technology. A second report, to be published later this year, will further clarify the educational pathways and materials needed to equip the workforce, across all roles and levels of experience, to confidently evaluate and use AI. 

Dr Hatim Abdulhussein, National Clinical Lead for AI and Digital Medical Workforce at Health Education England, said: “Understanding clinician confidence in AI is a vital step on the road to the introduction of technological systems that can benefit the delivery of healthcare in the future. 

“Clinicians need to be assured that they can rely on these systems to perform to the levels expected, so that they can make safe, ethical and effective clinical decisions in the best interests of their patients.”

Brhmie Balaram, Head of AI Research and Ethics at the NHS AI Lab, said: “AI has the potential to relieve pressures on the NHS and its workforce; yet we must also be mindful that AI could exacerbate cognitive biases when clinicians are making decisions about diagnosis or treatment. It is imperative that the health and care workforce is adequately supported to safely and effectively use these technologies through training and education.

“However, the onus isn’t only on clinicians to upskill; it’s important that the NHS can reassure the workforce that these systems can be trusted by ensuring that we have a culture that supports staff to adopt innovative technologies, as well as appropriate regulation in place.”

The report argues that how AI is governed and rolled out in healthcare settings can affect the trustworthiness of these technologies and confidence in their use. It outlines the many factors that can affect the workforce’s confidence in using AI, including the leadership and culture within their organisations, as well as clear nationally driven regulation and standards.

Today’s report and partnership with HEE is part of the NHS AI Lab’s AI Ethics Initiative, which was introduced to support research and practical interventions that can strengthen the ethical adoption of AI in health and care.