In March 2023, the European Union Agency for Cybersecurity (ENISA) released a report, “Cybersecurity of AI and Standardisation,” which details the landscape of existing, planned, and considered standards pertaining to the cybersecurity of artificial intelligence (AI). The report identifies several gaps in existing approaches to the protection of digital infrastructure and provides actionable recommendations to mitigate them, including the adoption of technical standards to be applied when building cybersecurity infrastructure for AI deployments.

The report identifies several problem areas in the existing technical standards landscape surrounding cybersecurity and AI. Particular focus is drawn to standards associated with the traditional confidentiality, integrity, and availability (CIA) security paradigm, as well as those aimed at complementing the proposals included in the European Union’s (EU) draft AI Act. The CIA-related gaps concern the data and methodologies of AI, including the traceability of data and AI components throughout their life cycles, the limited understanding of features native to machine learning (ML) in areas such as metrics and testing, and the difficulty of adapting existing standards to emerging technologies. The report also notes gaps in the draft AI Act with respect to the intersection of cybersecurity and the testing of AI systems.

Developing a standardized approach to the cybersecurity of AI

The report discusses the importance of standardization in relation to the cybersecurity of AI, with a focus on ML as the driving force behind AI technologies. It begins by outlining the specificities of ML, noting that AI systems will occasionally make wrong predictions. The report goes on to discuss the importance of “explainability” in AI systems, so that decisions made by algorithms can be understood by humans.

A key theme of the report is the relationship between AI and cybersecurity, which it breaks down into three dimensions: the cybersecurity of AI, the use of AI to support cybersecurity, and the malicious use of AI.

The report focuses on the first dimension, the cybersecurity of AI, and discusses both “narrow” and “broad” interpretations of the concept. The “narrow” interpretation concerns the CIA of AI components, associated data, and processes, while the “broad” interpretation encompasses AI trustworthiness features, such as explainability, robustness, accuracy, and transparency.

The report then discusses the work of various standardization organizations, such as CEN-CENELEC, ETSI, and ISO/IEC, in developing standards and guidance related to AI and cybersecurity. It notes that many of these organizations are developing standards that address both the narrow and broad interpretations of cybersecurity, assesses the extent to which existing standards address those concerns, and identifies key gaps.

The report goes on to examine the role of cybersecurity within the draft AI Act, identifying gaps in standardization that need to be addressed. It also references several relevant ISO/IEC standards and outlines requirements for various aspects of AI, such as data quality, risk management, and transparency.

The report concludes with recommendations for organizations generally, for standards-developing organizations, and for those preparing for the implementation of the draft AI Act. The recommendations are:

  1. Use a standardized and harmonized AI terminology for cybersecurity, including trustworthiness characteristics and a taxonomy of different types of attacks specific to AI systems.
  2. Develop specific/technical guidance on how existing standards related to the cybersecurity of software should be applied to AI.
  3. Reflect the inherent features of ML in standards.
  4. Ensure that liaisons are established between cybersecurity technical committees and AI technical committees so that AI standards on trustworthiness characteristics and data quality include potential cybersecurity concerns.
  5. Ensure that the identification of cybersecurity risks and the determination of appropriate security requirements rely on a system-specific analysis and, where needed, on sector-specific standards.
  6. Encourage R&D in areas where standardization is limited by technological development.
  7. Support the development of standards for the tools and competences of the actors performing conformity assessment.
  8. Ensure coherence between the draft AI Act and other legislative initiatives on cybersecurity, notably Regulation (EU) 2019/881 (the Cybersecurity Act) and proposal COM(2022) 454 for a regulation on horizontal cybersecurity requirements for products with digital elements (the Cyber Resilience Act).

Key takeaways

The ENISA report discusses the importance of standardization in relation to the cybersecurity of AI.

In particular, it covers:

  • The extent to which general-purpose standards can be adapted to AI
  • The need for clarification of AI terms and concepts
  • The importance of guidance on how standards related to cybersecurity should be applied to AI
  • The need for further research and development to address gaps in knowledge and technology
  • The importance of traceability and lineage of data and AI components
  • The need for standards that reflect the inherent features of machine learning
  • The need for a unified approach to trustworthiness and the importance of developing guidance and standards to support cybersecurity in AI systems
  • The need for regulatory coherence between the draft AI Act and other legislation on cybersecurity

Find out more

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

DLA Piper continues to monitor developments in AI and their impact on industries across the world. For further information, or if you have any questions, please contact your usual DLA Piper contact.