In the previous article, we explained what generally constitutes an artificial intelligence system under the Artificial Intelligence Regulation (hereinafter also referred to as the “AI Act”), as well as the basic categories of entities and the typology of systems distinguished by this regulation. However, many practical questions center on the core of the entire regulation: which artificial intelligence systems will be most affected by the obligations arising from the AI Act, and whether the specific system with which an entity works falls into this category. In this article, we therefore focus on those AI systems that require the introduction of common European rules, with the aim of ensuring a uniform and high level of protection of public interests, particularly in the areas of health, safety, and fundamental rights.
This primarily concerns so-called high-risk AI systems, which may be placed on the EU market or put into use only if they meet a set of mandatory requirements under the AI Act. These requirements are intended to ensure that such systems do not pose unacceptable risks to public interests recognized and protected by EU law, particularly with regard to the Charter of Fundamental Rights of the European Union, including the following rights:
- the right to human dignity,
- respect for private and family life,
- protection of personal data,
- the right to information,
- the right to education,
- consumer protection.
Which AI systems are actually considered high-risk?
The AI Act classifies high-risk AI systems into two main groups, namely:
High-risk systems under Article 6(1) of the AI Act
As stated in Article 6(1) of the AI Act, an AI system is considered high-risk if the following conditions are cumulatively met:
a) the AI system is intended for use as a safety component of a product covered by the Union harmonization legislation listed in Annex I, or the AI system is itself such a product; and
b) the product of which the AI system is a safety component pursuant to point (a), or the AI system itself as a product, is required to undergo a third-party conformity assessment in order to be placed on the market or put into service in accordance with the Union harmonization legislation listed in Annex I.
In other words, AI systems that are safety components of products, or that are themselves products, falling within the scope of the Union harmonization legislation listed in Annex I are classified as high-risk if the product in question must undergo a conformity assessment procedure carried out by a third-party conformity assessment body under the relevant Union harmonization legislation.
Such products include, in particular:
- Machinery – Directive 2006/42/EC;
- Toys – Directive 2009/48/EC;
- Recreational craft and personal watercraft – Directive 2013/53/EU;
- Lifts – Directive 2014/33/EU;
- Equipment for use in potentially explosive atmospheres – Directive 2014/34/EU;
- Radio equipment – Directive 2014/53/EU;
- Pressure equipment – Directive 2014/68/EU;
- Cableway installations – Regulation (EU) 2016/424;
- Personal protective equipment – Regulation (EU) 2016/425;
- Appliances burning gaseous fuels – Regulation (EU) 2016/426;
- Medical devices – Regulation (EU) 2017/745;
- In vitro diagnostic medical devices – Regulation (EU) 2017/746;
- Civil aviation – Regulation (EC) No 300/2008; Regulation (EU) 2018/1139;
- Two- or three-wheeled vehicles and quadricycles – Regulation (EU) No 168/2013;
- Agricultural and forestry vehicles – Regulation (EU) No 167/2013;
- Marine equipment – Directive 2014/90/EU;
- Railway systems – Directive (EU) 2016/797;
- Motor vehicles and their components – Regulation (EU) 2018/858; Regulation (EU) 2019/2144.
Conversely, the fact that an AI system is classified as high-risk under the AI Act does not automatically mean that the entire product in which the system is implemented is considered high-risk under other regulations. This applies in particular to Regulations (EU) 2017/745 and (EU) 2017/746, where third-party conformity assessment is required for medium- and high-risk products.
High-risk systems listed in Annex III of the AI Act
According to Article 6(2) of the AI Act, not only systems that are safety components of products or products themselves are considered high-risk. Other artificial intelligence systems may also be high-risk if they are listed in Annex III of the AI Act.
These are primarily standalone AI systems whose use in the relevant field can significantly affect the health, safety, or fundamental rights of individuals. The risk assessment takes into account the severity of the potential harm, the likelihood of its occurrence, and whether the system is used in one of the specifically defined areas that the AI Act describes in detail.
Classifying a system as high-risk is the result of a methodology and evaluation criteria that the European Commission plans to apply in future updates to the list of high-risk systems. These changes can be implemented through so-called delegated acts, which allow for a flexible response to rapid technological developments and changing practical applications of artificial intelligence.
However, the AI Act further stipulates in this regard that even if an AI system is listed in Annex III, this does not automatically mean it is considered high-risk. If the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons—including that it does not substantially influence the outcome of decision-making—it is not considered high-risk. However, this exception does not apply if the system performs profiling of natural persons.
In this context, it is important to emphasize that the list of AI systems set out in Annex III is not exhaustive. The European Commission is empowered, through delegated acts, to amend this Annex—adding to or modifying the use cases of high-risk systems—provided that the following two conditions are met:
a) the AI systems are intended for use in one of the areas listed in Annex III,
b) the AI systems pose a risk of harm to health, safety, or fundamental rights comparable to or greater than that of systems already listed in the Annex.
To avoid interpretative ambiguities, the AI Act requires the European Commission, in cooperation with the European Artificial Intelligence Board, to develop guidelines. These are to include practical examples of AI systems that are high-risk, as well as those that do not fall into this category.
Areas in which standalone AI systems are considered high-risk include, in particular:
- Biometrics – e.g., remote identification of individuals, emotion recognition, categorization based on sensitive attributes.
- Critical infrastructure – AI systems controlling digital infrastructure, transportation, and energy supply.
- Education and training – systems influencing student admission, assessment, or monitoring.
- Employment and labor relations – recruitment, selection, task assignment, and employee performance monitoring.
- Essential public and private services – e.g., assessing eligibility for benefits, creditworthiness, insurance, emergency calls.
- Law enforcement – e.g., crime prediction, recidivism risk assessment, profiling of individuals, evidence analysis.
- Migration and border control – assessing asylum applications, identifying individuals, evaluating risks associated with entry into the EU.
- Administration of justice and democracy – assisting the judiciary in decision-making, systems influencing election results.
Conclusion
The primary focus of regulatory obligations under the AI Act is so-called high-risk AI systems. These are technologies whose use may significantly affect public interests protected by European Union law, particularly in the areas of fundamental rights, health, and safety.
These systems are subject to specific requirements regarding transparency, risk management, technical documentation, cybersecurity, and conformity assessment. Their classification is based either on their integration into products regulated by EU harmonization legislation (Article 6(1)) or on their socially sensitive use in the areas defined in Annex III (Article 6(2)).
The aim of the AI Act is to strike a balance between technological development and the protection of public interests, with an emphasis on responsibility, reliability, and the ethical use of artificial intelligence.
At the law firm Hronček & Partners, s. r. o., we are actively engaged in the legal regulation of artificial intelligence and the impact of the AI Act on the business environment. We provide comprehensive legal advice on AI regulation, including risk assessments of AI systems, setting up internal processes in compliance with the AI Act, as well as training and raising client awareness of upcoming obligations.
If you are interested in expert consultation or legal assistance in preparing for the AI Act’s entry into force, please do not hesitate to contact us. We are ready to help you navigate the complex regulatory environment and ensure that your technologies comply with current and future requirements of European Union law.