Abstracts Track 2024


Area 1 - Security, Privacy and Trust

Nr: 63
Title:

An Analysis of the Coverage of AI Cybersecurity in the AI Act, CRA and Standards

Authors:

Stephanie von Maltzan

Abstract: The proliferation of AI marks a paradigm shift in software development. While undoubtedly beneficial, AI also poses new risks and challenges, may open new avenues for manipulation and attack, and creates new challenges for privacy. Commonly discussed are data protection, copyright and ethical issues. However, cybersecurity is an emerging field that should aim to explore and address AI-specific vulnerabilities, including data poisoning, backdoors and, most importantly, adversarial attacks, given the wide variety of AI models and techniques at different levels of maturity. This is all the more important as legislation has been passed at European level requiring the system as a whole to be cyber-secure as it is being developed. The proposed Cyber Resilience Act (CRA), expected to enter into force in 2024, introduces mandatory cybersecurity requirements for hardware and software products that must be met before they are made available on the market and maintained throughout their lifecycle. In addition, compliance with the cybersecurity requirements of the CRA and the forthcoming AI Act (as well as the already applicable Digital Services Act) will necessarily require a security risk assessment that takes into account the internal architecture of the AI system and the intended application context, following the principles of security in depth and security by design in a holistic approach. This poses a number of challenges that need to be addressed. These challenges can be grouped into organisational challenges related to processes, such as harmonising terminologies, threat taxonomies and definitions across domains and standards, and research and development challenges related to techniques, such as assessing attacks on machine learning models, defining metrics and measures for AI cybersecurity and for the adversarial robustness of AI models, and evaluating trade-offs with other requirements, for example between accuracy and cybersecurity. Based on Article 15 and Recitals 51 and 61 of the AI Act, and given the importance of standards in ensuring the effectiveness of the AI Act, the European Commission has already started the process of adopting a standardisation request, which will provide a formal mandate to European standardisation organisations to develop the necessary standards. This will cover technical areas related to the requirements of the AI Act, as well as conformity assessment of AI systems and quality management systems. In addition to its central reference to the AI Act, the standardisation request also refers to the CRA with respect to cybersecurity. AI-specific cybersecurity standards are beginning to be developed at the international level, notably ISO/IEC 27090 and ISO/IEC 27563, but are not yet available. As two levels of risk assessment should be considered, this presentation will first provide a regulatory overview and, second, map the cybersecurity requirements against the above-mentioned regulatory and policy documents.