AI in the NHS: From potential to practice

The NHS has the ambition, the data, and the investment. So why is AI adoption still so slow? CHASE explores how to close the gap between technology and patient benefit.

March 11, 2026

The NHS 10-year health plan commits to making the health service the world's most AI-enabled health system. Government investment and policy intent are there, in principle. And the technology is impressive, with algorithms that can read imaging, flag deteriorating patients before clinicians spot the signs, accelerate drug discovery, and cut documentation time for overstretched GPs. But a closer look at the evidence reveals that the limiting factor is rarely the technology itself. There is a gap between what AI can do and what it is actually doing in the NHS, a gap that won't close on its own.

In this article, we discuss the assets the NHS holds, the systemic barriers that slow adoption, and what it will take to move from promising pilots to sustained, system-wide impact.

The data advantage and its limitations

The NHS holds one of the most valuable health datasets in the world: decades of primary care records, prescribing data, diagnostics, and secondary care episodes covering millions of patients across a universal health system. That is a real strategic asset for AI development. Other countries, including those with larger AI markets, don't have a comparable foundation.

But the data was designed for care delivery, not for training algorithms. It is fragmented across systems that rarely talk to each other. It contains inconsistencies: birth dates that haven't happened yet, patients who appear to live indefinitely in perfect health (most likely visitors, or people who have moved and never deregistered), and records that reflect inequalities in who accesses the system. Those who face barriers to healthcare are under-represented; those who attend frequently are over-represented. If AI tools are trained on these skewed datasets without careful handling, they will entrench inequity rather than reduce it.

The structures that govern data access are also, by common consensus, not fit for purpose at the pace the field requires. When it can take 12 months to access the data needed to develop and test a model, innovation does not scale. The Health Data Research Service, a £600 million investment from government and the Wellcome Trust, is intended to act as a front door to data access for the research community and beyond. Federated learning approaches, which allow models to be trained across sites without data ever leaving the NHS Trust that holds it, are also gaining traction. These are meaningful steps forward. But the infrastructure is still maturing, and the workforce being trained on current platforms may find that those platforms look quite different in two years.
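To make the federated idea concrete, here is a minimal sketch of federated averaging, the simplest form of the technique: each site trains on its own data, and only model weights travel to a central aggregator. The sites, data, and linear model below are invented for illustration and do not represent any specific NHS platform.

```python
# Minimal federated averaging (FedAvg) sketch in NumPy.
# Illustrative only: the sites, data, and linear model are invented
# for this example, not taken from any NHS system.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally; the raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three hypothetical Trusts, each holding its own (X, y) locally.
sites = [(rng.normal(size=(200, 4)), rng.normal(size=200)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):
    # Each site returns updated weights only; no patient records move.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)  # central server averages the weights

print("global model weights:", global_w)
```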

There is also a less-discussed opportunity: AI can help improve the very data it then depends on. Identifying implausible records, reconciling inconsistencies, and standardising coded information at scale are all tasks well suited to AI, and doing them consistently could free up significant human resource for the judgement and care that machines cannot replicate.
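A few of these checks are simple enough to sketch. The rules and field names below are hypothetical, but they illustrate the kind of automated plausibility screening described above.

```python
# Illustrative rule-based data-quality checks.
# Column names (nhs_number, date_of_birth, last_contact) are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "nhs_number":    ["001", "002", "003"],
    "date_of_birth": ["1954-03-02", "2031-01-15", "1890-07-09"],
    "last_contact":  ["2025-11-01", "2024-06-30", "1962-02-14"],
})
records["date_of_birth"] = pd.to_datetime(records["date_of_birth"])
records["last_contact"] = pd.to_datetime(records["last_contact"])

today = pd.Timestamp("2026-03-11")

# Flag birth dates that haven't happened yet.
future_births = records["date_of_birth"] > today

# Flag implausibly old "active" patients: likely moved away and never
# deregistered, rather than genuinely alive on the list.
age_years = (today - records["date_of_birth"]).dt.days / 365.25
ghost_patients = age_years > 110

flagged = records[future_births | ghost_patients]
print(flagged[["nhs_number", "date_of_birth", "last_contact"]])
```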

Regulation: Building the right framework

For a long time, the regulatory environment around AI in healthcare has been described as uncertain, but this is starting to change. The MHRA launched the AI Airlock in April 2024, the UK's first regulatory sandbox for AI as a medical device, designed to test real products against the current framework and identify where regulation needs to evolve. Its second phase, running to March 2027, expands the programme across seven additional technologies spanning clinical note-taking, cancer diagnostics, and eye disease detection.

In parallel, a National Commission on the Regulation of AI in Healthcare is working to advise on a new regulatory framework, with recommendations due in summer 2026. The commission explicitly includes patient advocates, technology firms, clinicians, and health service managers: a deliberate signal that regulation in this space cannot be the preserve of regulators alone.

The UK's regulatory philosophy is worth noting. The current medical device framework is largely 40 years old and not designed for adaptive AI systems that learn over time. The aspiration is to find what one commission chair described as the "right place for the bar": not so high that it deters innovation or drives companies to more permissive markets, not so low that patient safety is compromised. A regulatory approach that allows technologies to enter an early-adoption phase with additional safeguards, gathering real-world evidence while remaining in use, could make the UK an attractive destination for AI developers rather than a deterrent.

One consistent theme across the regulatory conversation is the need to address demographic bias. Manufacturers are now expected to demonstrate that their tools are safe for the diverse population they serve, not just on average. Skin cancer diagnostics are frequently cited as a cautionary example: several tools on the market perform well for lighter skin tones but have not been adequately tested in darker skin types. Good regulation makes that gap visible and requires it to be closed, rather than accepted.

Safety, trust, and the automation question

Patient trust in AI is often framed as a communications problem. In practice, it is an accountability problem. Patients are willing to have their data used when the purpose is clear, the safeguards are demonstrated, and there is meaningful oversight of who accesses what and why. Restricting data use does not build trust; demonstrating benefit and fairness does.

For clinicians, the picture is more complicated. Automation bias, the tendency to accept AI-generated output without sufficient scrutiny, is a real risk, particularly in high-pressure environments. A GP with eight-minute appointments who accepts an AI-generated clinical note without reviewing it is not reckless; they are human. The question is whether the systems around them are designed to make careful review feasible, and whether accountability is clearly assigned when errors occur.

Liability is still an evolving area. The law around AI is in active development, and multi-party litigation involving the clinician, the organisation, the software developer, the data provider, and the systems integrator is likely to be lengthy and complex. In the meantime, the practical guidance is clear: good governance, documented decision-making, vendor due diligence, and audit trails are the most meaningful protections currently available to healthcare organisations.

There is also a less-discussed risk on the other side. Excessive caution, requiring AI to meet a standard of perfection that human clinical decision-making never achieves, is not a neutral position. It delays the use of tools that could prevent harm. Patients are missed, misdiagnosed, and undertreated every day by a system operating without the benefit of AI. The question is not whether AI is perfect; it is whether it is better than the alternative, and by how much. A model that is 98% accurate in a task where the human benchmark is substantially lower should prompt a different conversation about the 2% failure mode than continued debate about whether to deploy it at all.
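A rough worked example makes the point. The 98% figure comes from the text above; the human benchmark is a hypothetical number chosen purely for illustration.

```python
# Hypothetical comparison of miss rates, per 1,000 cases.
# The 98% figure is from the text; the human benchmark is invented.
cases = 1000
ai_accuracy = 0.98      # from the example above
human_accuracy = 0.90   # hypothetical benchmark, for illustration only

ai_missed = cases * (1 - ai_accuracy)        # 20 cases
human_missed = cases * (1 - human_accuracy)  # 100 cases

print(f"AI misses {ai_missed:.0f} vs human {human_missed:.0f} per {cases} cases")
```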

Training and the workforce gap

One of the more striking examples to emerge from discussions about AI adoption concerns not a patient outcome, but a training moment. In a hospital in Germany, a senior clinician asked a junior trainee why they had not recorded a tumour measurement in a patient's notes. The trainee's response: they did not know what the measurement was because the AI had not generated it, and they had never learned to take it themselves.

This is not an argument against AI in clinical training. It is an argument for thinking carefully about what trainees need to understand, and be able to do, before and alongside AI tools, not instead of them. When AI tools go down (and they do, as any Trust that has deployed one will attest), clinicians need to be able to function without them. When an AI flags something unexpected, a clinician needs to know whether to trust it. When an algorithm drifts after a software update and starts missing cases it previously caught, someone needs to notice.
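That last scenario, noticing post-update drift, is essentially a monitoring problem. The sketch below, with invented rates and an arbitrary threshold, shows one simple way a Trust might watch for it: track the tool's flag rate and alert when it falls well below its historical baseline.

```python
# Minimal drift check: compare an AI tool's recent positive-flag rate
# against its historical baseline. All rates and the threshold are
# illustrative, not drawn from any deployed system.
from statistics import mean

baseline_flag_rate = 0.12                            # historical share of scans flagged
weekly_flag_rates = [0.11, 0.12, 0.07, 0.06, 0.05]   # observed after an update

recent = mean(weekly_flag_rates[-3:])
relative_drop = (baseline_flag_rate - recent) / baseline_flag_rate

# A sustained fall in flags can mean the model is missing cases,
# not that patients suddenly got healthier; escalate to a human.
if relative_drop > 0.25:
    print(f"ALERT: flag rate down {relative_drop:.0%} vs baseline; review the model")
```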

Building digital literacy into clinical training, not as a standalone module but as a thread through every level of education and continuing development, should be viewed as foundational for safe adoption.

The scaling problem: Moving beyond pilots

The NHS is not short of AI pilots. What it lacks is a reliable mechanism for turning a successful pilot in one Trust into standard practice across the system. The pattern is familiar: a tool is deployed, it works, local clinicians are enthusiastic, findings are published, and then a neighbouring Trust starts the same journey from scratch, running its own procurement process, its own governance review, and its own implementation study.

The consequence is variation that mirrors existing inequalities. Teaching hospitals with strong academic links tend to be early adopters. District general hospitals, where the majority of people receive care across their lives, often lag behind. The capacity released by AI in one part of the system is not automatically redirected to the parts that need it most.

Multiple voices in this space are calling for AI adoption to be treated as a national service transformation programme, unified in direction, locally delivered, rather than a series of isolated innovations. That framing has precedent: the Covid vaccination rollout demonstrated what is possible when national infrastructure, local delivery, and clear accountability work together. The challenge is to apply that logic to a technology landscape that is more varied and moves considerably faster.

Integrated Care Boards (ICBs) have a role here, as do Health Innovation Networks. But the coordination mechanisms are still developing, and the tension between national standardisation and local adaptation remains unresolved. A fracture detection tool that has been validated and deployed safely across 100 GP practices should not need to go through the same full governance process for the 101st. Yet in many cases, it does.

What this means for life sciences and the NHS

The opportunity for the NHS is real, but so is the risk of losing it through fragmented deployment, under-resourced implementation, and governance structures that were not designed for the pace of change.

For pharmaceutical and medtech organisations, the implications are practical. Technologies that work in a lab or a clinical trial will only reach patients if the implementation pathway is designed as carefully as the technology itself. Workflow integration, electronic health record connectivity, clinician training, and governance infrastructure make all the difference between a product that changes practice and one that gathers dust in a pilot report.

CHASE works at exactly this interface, connecting life science organisations and the NHS with the people, insight, and operational expertise needed to design and deliver programmes that make a genuine difference. If you are working through how to navigate NHS AI adoption, access the right stakeholders, or build a compliant and effective NHS-industry partnership, speak to the CHASE team.
