Following its approval, the EU's AI Act will be the world's first comprehensive regulation on Artificial Intelligence, aiming to increase research and industrial capacity while protecting safety and fundamental rights by making AI trustworthy and human-centric. Two years after the legislative proposal, the Parliament agreed on a draft of its position on April 27, 2023, following intense discussions about new AI applications on the market, such as ChatGPT, which delayed the process.
On May 11, 2023, the leading committees voted on the draft, and the largely consolidated file is now subject to a final vote in mid-June, with the possibility that the Act will enter into force in 2023.
In this article, we provide an overview of this landmark EU proposal, covering what the AI Act regulates and who is affected by it.
What does the AI Act regulate?
The AI Act covers many AI applications, including but not limited to: biometric identification and categorization systems; critical infrastructure, such as transport and energy systems; educational and vocational training systems; employment, worker, and human resources management systems; law enforcement and judicial systems; marketing and advertising systems; recruitment systems; and social scoring systems.
The AI Act also establishes a European Artificial Intelligence Board responsible for monitoring the application and enforcement of the regulation across the EU.
AI systems are classified by risk level; minimum requirements for ethical principles and certification are set; and appropriate monitoring and oversight authorities are established to ensure compliance with the regulation's requirements and standards.
In order to achieve these goals, the Commission proposed a risk-based approach by defining unacceptable risk, high risk, limited risk, and minimal or no risk.
Unacceptable Risk: This category covers AI systems that constitute a clear threat to the safety and rights of people, ranging from social scoring by governments to toys using voice assistance to encourage dangerous behavior. Such systems are prohibited.
High Risk: Examples include:
- Critical infrastructure that could put the life and health of citizens at risk, such as transport,
- Educational or vocational training that may determine access to education and a person's professional course, such as the scoring of exams,
- Safety components of products (e.g. the use of AI in robot-assisted surgery),
- Use of AI in recruitment processes, such as CV-sorting software, for employment, the management of workers, and access to self-employment,
- Essential private and public services, such as credit scoring denying citizens the opportunity to obtain a loan,
- Law enforcement that might interfere with people's fundamental rights, such as the assessment of the reliability of evidence,
- Migration and border control management, such as verification of the authenticity of travel documents,
- Administration of justice and democratic processes, such as applying the law to a concrete set of facts.
The regulation also imposes strict obligations on high-risk AI systems before they can be put on the market, as specified below:
- Adequate risk assessment and mitigation systems,
- High-quality datasets to reduce risks and discriminatory outcomes,
- Logging of activity to ensure the traceability of results,
- Detailed documentation providing all information on the system and its purpose necessary for authorities to assess its compliance,
- Clear and adequate information for the user,
- Appropriate human oversight measures to minimize risks,
- A high level of robustness, security, and accuracy.
All remote biometric identification systems are deemed high risk and are accordingly subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.
However, there are narrow, strictly defined exceptions, such as where necessary to search for a missing child, to prevent a specific terrorist threat, or to detect, locate, identify, or prosecute a perpetrator or suspect of a serious criminal offense.
Limited Risk: This category covers AI systems subject to specific transparency obligations. For example, users interacting with chatbots should be made aware that they are talking to a machine so they can make an informed decision whether to continue.
Minimal or No Risk: The vast majority of AI systems currently used in the EU fall into this category, such as AI-enabled video games or spam filters.
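The four-tier scheme above can be illustrated with a minimal sketch. This is a purely hypothetical toy, not anything defined by the Act itself; the tier names, example mappings, and obligation summaries below are assumptions drawn from the examples in this article.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited (e.g. government social scoring)
    HIGH = "high"                  # allowed, subject to strict obligations
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no additional obligations (e.g. spam filters)

# Hypothetical mapping of example use cases from this article to tiers.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv-sorting recruitment software": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of the obligations attached to each tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited from the EU market",
        RiskTier.HIGH: "risk management, data quality, logging, "
                       "documentation, human oversight",
        RiskTier.LIMITED: "transparency: users must know they are "
                          "interacting with an AI system",
        RiskTier.MINIMAL: "no additional obligations",
    }[tier]

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

The point of the sketch is simply that obligations attach to the tier, not to the individual application: classifying a use case is the step that determines which rules apply.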
Once AI systems are released on the market, authorities are responsible for market surveillance, users ensure human oversight and monitoring, and providers must have a post-market monitoring system in place. Both providers and users are required to report serious incidents and malfunctioning.
Who is affected by the AI Act?
A wide range of stakeholders, including developers and providers, users and operators, regulators and supervisory authorities, consumers, and citizens, will be involved in processes related to AI.
In this regard, the developers, providers, users, and operators of AI systems are subject to the requirements and obligations set out in the regulation. National authorities in the EU will be obliged to enforce the regulation, while the European Artificial Intelligence Board provides guidance and support to those authorities. Meanwhile, the rights and interests of consumers and citizens in the EU are to be protected through transparent AI systems and by keeping users informed about the AI products they use.
With its adoption and development, the EU AI Act will show how industry and regulators can combine forces to achieve the best outcomes: increasing research and industrial capacity while safeguarding the rights and interests of consumers and citizens.
See also: https://jurcom.nl/en/the-path-from-enigma-to-chatgpt-artificial-intelligence-and-data-protection-law/