November 27, 2024
Approx. 3 min read

AI and Digital Health: Navigating Regulatory Requirements

Written by
The Naq Team

Artificial intelligence (AI) is reshaping the landscape of healthcare, creating incredible opportunities to improve patient outcomes, streamline workflows, and tackle some of the sector’s most pressing challenges. From AI-driven diagnostic tools that can detect diseases earlier and more accurately to algorithms that personalise treatment plans based on individual patient data, AI is making healthcare smarter and more efficient. For example, AI is revolutionising radiology by identifying anomalies in medical imaging with unprecedented precision and is helping reduce administrative burdens by automating routine tasks like appointment scheduling and patient record management.

But as innovation accelerates, so does the complexity of navigating the regulatory frameworks designed to ensure these groundbreaking tools are safe, effective, and compliant - an essential step for any digital health innovator.

In the latest instalment of our Compliance Therapy Webinar Series, Naq’s expert panel, comprising Global Health Advisor Sam Shah; CSO and Digital Health Safety Expert Yasmin Karsan; Rob Wright, Senior Business Development Manager at Oxford Dynamics; Senior AI Engineer Nicolas Belissent; and Naq Co-Founder Chris Clinton, explored the current state of AI compliance in the UK, shared actionable best practices for innovators, and discussed the unique challenges of bringing AI into the NHS. Here’s a closer look at the key insights from the session.

The Current State of AI Compliance in the UK

A key takeaway from the webinar was the absence of a current, standalone regulatory framework for AI in the UK. As Chris Clinton, CTO and Co-founder of Naq, explained, “There are no UK AI regulations, which makes the topic incredibly confusing. However, that will change very soon.”

Currently, the UK relies on a principles-based approach as outlined in the AI Regulation White Paper, published in 2023. This approach delegates responsibility to sector-specific regulators, such as NICE, NHS Digital, and MHRA, to interpret and implement AI guidelines tailored to healthcare.

Chris also highlighted the UK’s approach to leveraging EU regulations, noting that frameworks like the EU AI Act heavily influence UK standards. Early adoption of frameworks like ISO 42001 - an AI-specific management standard - can position innovators ahead of the curve, particularly when considering cross-border operations.

However, the rapid evolution of AI poses challenges for regulation. As Yasmin Karsan, CSO, Digital Clinical Safety Expert and AI Engineer, noted, “Technology grows so fast and changes so quickly… Regulators don’t necessarily keep up with the pace of innovation.”

Best Practices for AI Innovators

Our panel emphasised the importance of embedding compliance into every stage of AI development. Here are some best practices shared during the session:

Prioritise Data Quality and Security

“Understanding the quality, origin, and security of your data is foundational,” explained Nicolas Belissent, Senior AI Engineer at Naq. “Poor data quality or unclear data origins can lead to compliance violations and biased AI outputs.” Innovators should work closely with engineering teams to ensure datasets are robust, secure, and reflective of diverse populations.

Build Trustworthy AI

Transparency and explainability are non-negotiable when developing AI tools. Closed-loop systems that provide traceable and repeatable outputs are essential, particularly in healthcare. Rob Wright from Oxford Dynamics likened the process to “installing guardrails” that ensure the safe and secure use of AI tools.

Rob also highlighted the importance of balancing innovation with practicality: “AI tools evolve rapidly, but implementing systems engineering tailored to your organisation can ensure safe and compliant use of these technologies. Whether you’re using AI to streamline workflows or improve decision-making, building robust processes is key to long-term success.”

Embed Compliance Early

“Don’t leave compliance as a bolt-on at the end,” Yasmin advised. Instead, innovators should build compliance into the product development lifecycle. This includes conducting hazard workshops early, incorporating standards like ISO 42001 and ISO 27001, and collaborating with clinicians and engineers to align development with regulatory expectations.

Align with Relevant Standards

While compliance with frameworks like DSPT, DTAC and DCB 0129 is essential, standards such as ISO 42001 and ISO 27001 are becoming increasingly relevant. Chris Clinton pointed out the significant overlap between these standards, making it easier for innovators to achieve compliance with multiple frameworks simultaneously.

Challenges Faced By Innovators

Despite the progress in AI development, innovators face several challenges in navigating compliance:

  • Rapid Evolution of Technology: AI evolves so quickly that products can become outdated by the time they are certified. Yasmin highlighted the need for frameworks to evolve alongside technology to avoid stifling innovation.
  • Cross-Border Regulations: Operating internationally means aligning with different frameworks, such as the EU AI Act, which can add layers of complexity.
  • Knowledge Gaps Among Buyers: As Chris pointed out, many healthcare buyers lack the expertise to evaluate AI tools effectively, leading to potential risks and inefficiencies.

The Role of Sandboxes and Airlocks in Regulation

To address these challenges, regulatory sandboxes have emerged as a vital tool. These controlled environments allow innovators to test AI solutions under relaxed regulatory conditions while working closely with regulators to refine frameworks.

Yasmin highlighted the MHRA AI Airlock as a promising initiative that expands the sandbox model by incorporating collaboration between NHS England, DHSC, and other stakeholders. “It’s a feedback process,” Yasmin explained. “Innovators and regulators work together to create evidence-based standards that support innovation without compromising safety.”

Rob added that regulatory sandboxes offer a unique opportunity to experiment with cutting-edge technologies while maintaining compliance. “They allow organisations to safely explore the boundaries of innovation, ensuring AI tools meet both current and emerging standards.”

Continuous Compliance Is Key

One of the critical messages from the webinar was that compliance isn’t a one-time exercise. “The purpose of compliance isn’t just to pass an audit,” Chris stressed. “It’s to ensure your solution operates securely, safely, and effectively throughout its lifecycle.”

Rob echoed this sentiment, emphasising the need for continuous improvement: “Compliance frameworks shouldn’t be seen as barriers - they’re opportunities to refine processes, build trust, and future-proof your innovation.”

This means compliance must extend from product development to testing, deployment, and even end-of-life processes, such as data disposal. Continuous compliance not only ensures regulatory adherence but also saves time and resources in the long run.

Found these insights helpful?

Sign up for our newsletter to be the first to know about our upcoming Compliance Therapy webinars, browse our blog for the latest in digital health compliance, or speak to our team if you’re an innovator looking for guidance on navigating your compliance requirements.