Articles | IT & Data | 10th Jan 2024

UK NCSC publishes guidelines for secure AI system development.

The guidelines, which were created to put cyber security “at the heart of AI development at every stage”, were published at the end of November 2023. In this article, Commercial & Technology Associate, Josh Day, provides an outline of the key features of the guidelines and what this means for AI systems providers.

Overview

On 27 November 2023, the UK National Cyber Security Centre (NCSC) in cooperation with 21 agencies across the globe, including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), published guidelines aimed at encouraging the secure creation of Artificial Intelligence (AI) systems.

The guidelines, described by the Director of CISA as a “key milestone in collective commitments” towards AI systems development, are aimed at providers using AI models that are: (i) hosted by an organisation, or (ii) accessed through external application programming interfaces (APIs).

Following a ‘secure by design’ approach (tackling a security problem at its root cause rather than treating its ‘symptoms’), the guidelines are closely aligned with existing practices implemented by the NCSC’s international collaborating partners. The guidelines are therefore structured to prioritise:

  • Taking ownership of security outcomes for customers,
  • Embracing transparency and accountability, and
  • Building and structuring leadership to ensure that ‘secure by design’ is a top business priority.

Do the guidelines apply to all forms of AI?

As a preface, the guidelines make clear that, for their purposes, ‘AI’ should be understood to mean machine learning (ML) applications. These are applications that:

  • Involve software components that allow computers to recognise and bring context to patterns in data without the rules having to be explicitly programmed by a human, and/or
  • Generate predictions, recommendations, or decisions based on statistical reasoning.

The guidelines therefore currently do not apply to non-ML AI applications, including rule-based systems.
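To make that distinction concrete, the contrast between a rule-based system (out of scope) and an ML application (in scope) can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and data are invented for the example and do not come from the guidelines.

```python
# Rule-based system: the decision rule is explicitly programmed by a
# human, so it falls outside the guidelines' working definition of 'AI'.
def rule_based_flag(message_length: int) -> str:
    return "long" if message_length > 100 else "short"

# Minimal ML-style application: the threshold is *learned* from labelled
# data rather than hard-coded, matching the guidelines' definition of
# software that recognises patterns in data without the rules being
# explicitly programmed.
def train_threshold(samples: list[tuple[int, str]]) -> float:
    longs = [x for x, label in samples if label == "long"]
    shorts = [x for x, label in samples if label == "short"]
    # Learned decision boundary: midpoint between the two class means.
    return (sum(longs) / len(longs) + sum(shorts) / len(shorts)) / 2

def learned_flag(message_length: int, threshold: float) -> str:
    return "long" if message_length > threshold else "short"

training_data = [(20, "short"), (35, "short"), (150, "long"), (200, "long")]
threshold = train_threshold(training_data)
```

Both functions produce the same kind of output, but only the second derives its behaviour statistically from data, which is the feature that brings a system within the guidelines' scope.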

What do the guidelines look like?

The guidelines are broken down into four areas that are recognised as being ‘key’ within the AI system development lifecycle. These are:

  1. Secure design
  2. Secure development
  3. Secure deployment
  4. Secure operation and maintenance

In each area, the guidelines offer various considerations (and mitigations) which are aimed to reduce the overall risk to the AI system development process.

Secure design

The first section of the guidelines sets out considerations that apply at the design phase of the AI system development lifecycle. It offers an overview of risk assessment and threat modelling, particularly the importance of system owners and senior leaders raising staff awareness of threats and risks.

The latter part of this section offers guidance on the security benefits and trade-offs that providers should consider when selecting an AI model, including:

  • The complexity of the model,
  • The appropriateness of the model for its intended use case, and
  • The ability to explain the model’s outputs.

Secure development

During the development stage of the AI system development lifecycle, AI systems providers are advised to note the following key considerations:

  • Securing the supply chain: requiring suppliers to adhere to the same standards that an organisation applies to its other software.
  • Identifying, tracking and protecting assets: having an understanding of where assets reside, treating logs, prompts, assessments and documentation as sensitive data, and having processes to manage what data AI systems can access.
  • Documenting data, models and prompts: maintaining full documentation relating to the creation, operation and lifecycle management of any models, datasets and metadata.
  • Managing technical debt: identifying, tracking and managing all ‘technical debt’ (where engineering decisions fall short of best practices) in order to assess, acknowledge and mitigate future risk.

Secure deployment

The guidelines make several recommendations that providers should consider when deploying AI systems. Whilst these mainly focus on maintaining good security and protection in relation to the system (including incident management procedures), the guidelines also propose considerations in respect of end-users.

A key recommendation concerns releasing AI in a responsible manner. The guidelines highlight the importance of not releasing AI models, applications or systems until they have been subjected to appropriate (and effective) security evaluation, and of ensuring that users are made aware of the limitations and potential failures of the AI system being released.

Secure operation and maintenance

Looking ahead to the use of AI systems post-deployment, AI systems providers are reminded of actions that should be taken to ensure their secure operation and maintenance.

Examples covered by the guidelines in this section include: (i) monitoring a system’s behaviour, (ii) monitoring a system’s inputs (including queries and prompts), (iii) collecting and sharing lessons learned, and (iv) following a ‘secure by design’ approach.
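As a minimal sketch of point (ii), input monitoring might record metadata about each prompt and flag anomalous queries for review. The function names, the length-based threshold, and the log structure below are illustrative assumptions for this example, not requirements drawn from the guidelines.

```python
import time

def monitor_prompt(prompt: str, log: list, max_length: int = 500) -> bool:
    """Record metadata about a prompt and return True if it looks anomalous.

    Only the length and timestamp are stored here; if raw prompt text
    were logged, it would itself need handling as sensitive data, as the
    guidelines note in the secure-development section.
    """
    entry = {
        "timestamp": time.time(),
        "length": len(prompt),
        "flagged": len(prompt) > max_length,
    }
    log.append(entry)
    return entry["flagged"]

audit_log: list = []
monitor_prompt("What is our refund policy?", audit_log)
monitor_prompt("x" * 1000, audit_log)  # anomalously long input is flagged
```

Flagged entries would then feed into the incident management procedures that the deployment section of the guidelines recommends.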

Key Takeaways

Whilst AI systems are becoming widely recognised as having the potential to bring many benefits to society, they remain subject to many novel vulnerabilities.

The publication of the guidelines is therefore timely. It not only increases awareness of the many risks surrounding AI security, but also offers recommendations that will be key for AI systems providers and organisations to consider as the development and implementation of AI systems into business operations proliferates in the months ahead.


If you have any queries or would like further information about the development or regulation of AI, please get in touch with Joshua Day or another member of our Data Protection team.


The content of this page is a summary of the law in force at the date of publication and is not exhaustive, nor does it contain definitive advice. Specialist legal advice should be sought in relation to any queries that may arise.
