AI in healthcare: learning the lessons of the past

Policy

Following the Government’s recent announcement of the AI Life Sciences Accelerator Mission and ahead of this week’s AI Safety Summit, Elizabeth Beck, Director, Incisive Health, explores the lessons we can learn from our past, as policymakers grapple with what AI means for the NHS’s future.

AI is often pitched as the new frontier for healthcare – full of ground-breaking possibility, risk, and uncertainty. However, a closer examination reveals that this new technology faces many old challenges (along with some new ones of course). To bridge the gap between rhetoric and reality, the system will need to learn the lessons of recent history on overcoming barriers to the adoption of breakthrough technologies… or be doomed to repeat it.

With the AI Safety Summit kicking off this week and the Prime Minister’s recent announcement of the AI Life Sciences Accelerator Mission, here are three areas where we can learn from the past to inform how we go forward with AI in the future:

  1. Regulate to avoid unintended bias

    It may be a new technology, but AI faces the very old challenge of unintended bias. Just as clinical trials have historically built in bias by underrepresenting women, the data AI systems are trained on determines how effective the AI is. Put simply, training a system to spot early cancer symptoms using data from an all-male sample will create a tool that is more accurate for men than for women.

    At a global level, policymakers are working to address this challenge, with UNESCO putting gender equality at the heart of its Recommendation on the Ethics of Artificial Intelligence in 2021 and launching its Women4Ethical AI expert platform in May this year.

    Here in the UK, the MHRA’s Roadmap committed to clarifying and expanding the 2021 basic principles of good machine learning practice to provide a high-level framework to ‘identify, measure, manage, and mitigate bias’. Likewise, the MHRA will be considering statistical and machine learning methods to detect, measure, and correct for bias in datasets – but we have yet to see what this will look like in practice.

    It is promising that regulators have identified this challenge relatively early in the journey towards widespread use of AI in health. However, as persistent challenges with clinical trials show us, knowing there is a problem and resolving it are two very different things. As such, there is an opportunity for the MHRA to take a position of global leadership here, building on the Prime Minister’s commitment to work with international regulators to provide clear signals to innovators on standards.
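    To make the idea of ‘measuring’ dataset bias concrete, the sketch below shows the simplest version of what such a statistical check might look like: comparing a model’s accuracy across subgroups. The data, group labels and figures are entirely hypothetical, invented for illustration – this is not a description of any MHRA method.

```python
# Hypothetical sketch: measuring a subgroup accuracy gap.
# All data below is invented for illustration only.

def subgroup_accuracy(predictions, labels, groups):
    """Return the accuracy of a model separately for each subgroup."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Invented model outputs on a small labelled test set,
# tagged by subgroup ("m" = male, "f" = female).
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
labels      = [1, 0, 1, 0, 1, 1, 0, 1]
groups      = ["m", "m", "m", "m", "f", "f", "f", "f"]

acc = subgroup_accuracy(predictions, labels, groups)
gap = abs(acc["m"] - acc["f"])  # a non-zero gap signals possible bias
```

    A check like this is only the starting point – a real regulatory framework would also need to ask why the gap exists and how to correct for it – but it shows that detecting the problem is tractable once data is tagged by subgroup.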

  2. Bring the workforce with you

    A Health Foundation survey of 1,000 NHS staff, conducted just three years ago, found that three-quarters of respondents had not heard, seen or read ‘very much’ or ‘anything at all’ about automation and AI.

    The much-anticipated Long Term Workforce Plan acknowledges that harnessing the opportunities of AI requires staff to build their capability. But beyond committing the Government and NHS England to convening “an expert working group to work through in more detail…what steps need to be taken so that it supports NHS staff in coming years”, it does not provide much further information on what that support will be or how it will be offered.

    In answering these questions, it would be wise to look at how the NHS has approached similar challenges in the past. The task of building clinical confidence and trust in innovations is not new. Although AI presents unique challenges, many of the barriers to adoption, including clinical attitudes, workflow integration and the need for multi-disciplinary effort, are the same. To ensure buy-in, policymakers must focus on showcasing how AI can support staff to add clinical value rather than replace them.

  3. Prioritise infrastructure

    The level of digital and data infrastructure across the NHS varies significantly. Without a mature digital infrastructure that enables data sharing at scale, the NHS will never be able to leverage the potential benefits AI could bring to the health system.

    Most AI models rely on cleansed, well-structured, tagged data sources to be effective. Achieving this at the scale of the NHS will require significant investment and coordination in infrastructure and the other enabling data management and integration capabilities.

    Again, infrastructure investment is not a unique challenge for AI or a new problem for the NHS.

    The recently announced £100 million fund to capitalise on AI in healthcare (much like the £250 million fund announced in 2019 as part of the National Artificial Intelligence Lab) promises to bring together government, industry, the NHS, and academia to work at speed to solve some of our greatest health challenges. While the headlines from these announcements focus on the innovative technologies that will help us achieve these ambitions, the investment will need to be targeted on the arguably less exciting, but equally important, data infrastructure that sits behind them.

The most common challenge

There is then, of course, arguably the most common challenge of all for any new activity in the NHS – securing sustained, focused political commitment, clinical will and long-term funding.

There is agreement across political parties that AI has the potential to be a much-needed force for good in the NHS. But realising this potential will require not only sustained funding, but also expertise in how to mitigate risks, bring the NHS workforce, as well as the general public, on a journey of change, and plan ahead.

This week’s AI Safety Summit may touch on some of these challenges, but given the scale of the changes required for the AI Mission to achieve its ambitious aim of tackling ‘the biggest health challenges of our generation’, more will certainly need to be done.