Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27608
Title: High-Risk Artificial Intelligence Systems under the European Union’s Artificial Intelligence Act: Systemic Flaws and Practical Challenges
Authors: Gikay, AA
Lau, PL
Sengul, C
Malin, B
Miron, A
Keywords: EU;AI Act;artificial intelligence;machine learning;high-risk;general principle;low-risk;reasonably foreseeable misuse
Issue Date: 2023
Citation: Gikay, A.A. et al. (2023) 'High-Risk Artificial Intelligence Systems under the European Union’s Artificial Intelligence Act: Systemic Flaws and Practical Challenges' (November 2, 2023), pp. 1-22. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4621605 (Accessed: 2 November 2023).
Abstract: The European Union’s (EU) Artificial Intelligence Act (EU AI Act) has adopted a risk-based approach to artificial intelligence (AI) regulation, under which AI systems are subjected to different regulatory standards depending on the seriousness of the risk they pose to the public interest. High-risk AI systems, the largest category, are subject to strict regulatory requirements imposed throughout their life cycle, ranging from comprehensive conformity assessment to human rights impact assessment and risk management systems. However, the EU AI Act’s high-risk classification system has two fundamental, systemic flaws that undermine its ability to strike a fair balance between the risks of various uses of AI technologies and their societal benefits. First, it defines high-risk AI systems through hyper-technical enumeration, potentially excluding certain AI systems from the high-risk category even if they pose significant risks to the public interest. The Act grants the European Commission the power to revise the high-risk category by adding new AI use cases to the list if they pose risks similar to or greater than the existing ones. But the Commission’s power to revise the list does not adequately address the potential loopholes created by this restrictive method of defining high-risk AI systems. Second, because it fails to consider the specific contexts in which AI technologies are used, the EU AI Act could impose disproportionate regulatory burdens on providers and deployers by improperly classifying their AI use cases as high-risk. Using practical examples drawn from an assessment of several real-world use cases of AI technologies conducted in July 2023 during the St. Gallen University First Grand Challenge on the EU AI Act, this paper argues that the EU AI Act requires revision to adequately regulate AI technologies.
The paper proposes a solution to address the EU AI Act’s shortcomings, based on the way the law defines high risk in the context of data protection impact assessment.
URI: https://bura.brunel.ac.uk/handle/2438/27608
Other Identifiers: ORCID iD: Asress A. Gikay https://orcid.org/0000-0002-0778-2821
ORCID iD: Pin Lean Lau https://orcid.org/0000-0002-2447-9293
ORCID iD: Alina Miron https://orcid.org/0000-0002-0068-4495
Appears in Collections: Brunel Law School Research Papers

Files in This Item:
File: Preprint.pdf
Size: 563.71 kB
Format: Adobe PDF (View/Open)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.