What are high-risk AI systems according to the AI Act and what obligations do they entail?

6.5.2025 | Author: Mgr. Peter Čičala

In the previous article, we explained what an artificial intelligence system generally is within the meaning of the Artificial Intelligence Act (hereinafter also the "AI Act"), as well as the basic categories of entities and the typology of systems the regulation distinguishes. Many practical questions, however, go to the heart of the entire regulation: which AI systems will be most affected by the obligations under the AI Act, and does the specific system an entity works with fall into this category? This article therefore focuses on the AI systems for which common European rules are needed to ensure a consistent and high level of protection of public interests, in particular health, safety, and fundamental rights.


This mainly concerns so-called high-risk AI systems, which may be placed on the EU market or used only if they comply with a set of mandatory requirements under the AI Act. The aim of these requirements is to ensure that such systems do not pose unacceptable risks to public interests recognized and protected by EU law, having particular regard to the Charter of Fundamental Rights of the European Union, including the following rights:

  • the right to human dignity,
  • respect for private and family life,
  • the protection of personal data,
  • the right to information,
  • the right to education,
  • consumer protection.

Which AI systems are actually considered high-risk?

The AI Act divides high-risk AI systems into two main groups, namely:

High-risk systems under Article 6(1) of the AI Act

As follows from Article 6(1) of the AI Act, an AI system is considered high-risk if both of the following conditions are cumulatively met:

(a) the AI system is intended to be used as a safety component of a product covered by the Union harmonization legislation listed in Annex I, or the AI system is itself such a product; and

(b) the product whose safety component under point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to being placed on the market or put into service in accordance with the Union harmonization legislation listed in Annex I.

In other words, AI systems that are safety components of products, or that are themselves products, falling within the scope of the Union harmonization legislation listed in Annex I to the AI Act are classified as high-risk if the conformity assessment procedure for the product concerned is carried out by a third-party conformity assessment body in accordance with the relevant Union harmonization legislation.
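Although the classification is ultimately a legal assessment, the cumulative structure of the Article 6(1) test can be illustrated with a minimal Python sketch. The data class and its field names (is_safety_component, requires_third_party_assessment, and so on) are hypothetical illustrations of the facts a provider would have to establish, not part of the AI Act or any official tooling:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical record of the facts relevant to the Article 6(1) test."""
    is_safety_component: bool              # safety component of an Annex I product
    is_annex_i_product: bool               # the AI system is itself such a product
    requires_third_party_assessment: bool  # product needs third-party conformity assessment

def is_high_risk_art_6_1(system: AISystem) -> bool:
    # Condition (a): safety component of an Annex I product, or itself such a product
    condition_a = system.is_safety_component or system.is_annex_i_product
    # Condition (b): third-party conformity assessment is required for the product
    condition_b = system.requires_third_party_assessment
    # Article 6(1): the two conditions apply cumulatively
    return condition_a and condition_b

# Example: an AI module acting as a safety component of machinery that
# must undergo third-party conformity assessment -> high-risk
print(is_high_risk_art_6_1(AISystem(True, False, True)))   # True
# Example: a safety component whose product is only self-assessed by the
# manufacturer (no third party involved) -> not high-risk under Article 6(1)
print(is_high_risk_art_6_1(AISystem(True, False, False)))  # False
```

The point of the sketch is simply that neither condition suffices on its own: an Annex I safety component whose product is only self-assessed by the manufacturer does not fall under Article 6(1).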

Such products include, in particular:

  • Machinery – Directive 2006/42/EC;
  • Toys – Directive 2009/48/EC;
  • Recreational craft and personal watercraft – Directive 2013/53/EU;
  • Lifts – Directive 2014/33/EU;
  • Equipment for use in potentially explosive atmospheres – Directive 2014/34/EU;
  • Radio equipment – Directive 2014/53/EU;
  • Pressure equipment – Directive 2014/68/EU;
  • Cableway installations – Regulation (EU) 2016/424;
  • Personal protective equipment – Regulation (EU) 2016/425;
  • Appliances burning gaseous fuels – Regulation (EU) 2016/426;
  • Medical devices – Regulation (EU) 2017/745;
  • In vitro diagnostic medical devices – Regulation (EU) 2017/746;
  • Civil aviation – Regulation (EC) No 300/2008; Regulation (EU) 2018/1139;
  • Two- or three-wheel vehicles and quadricycles – Regulation (EU) No 168/2013;
  • Agricultural and forestry vehicles – Regulation (EU) No 167/2013;
  • Marine equipment – Directive 2014/90/EU;
  • Railway systems – Directive (EU) 2016/797;
  • Motor vehicles and their components – Regulation (EU) 2018/858; Regulation (EU) 2019/2144.

On the other hand, even if an AI system is classified as high-risk under the AI Act, this does not automatically mean that the entire product in which the system is embedded is considered high-risk under other regulations. This applies in particular to Regulations (EU) 2017/745 and (EU) 2017/746, which require third-party conformity assessment for medium- and high-risk products.

High-risk systems listed in Annex III to the AI Act

However, under Article 6(2) of the AI Act, it is not only systems that are safety components of products, or such products themselves, that are considered high-risk. Other AI systems are also high-risk if they are listed in Annex III to the AI Act.

These are mainly stand-alone AI systems whose use in a particular area may significantly affect the health, safety, or fundamental rights of persons. The risk assessment takes into account the severity of the potential harm, the likelihood of its occurrence, and whether the system is used in one of the specific areas described in detail in the AI Act.

The classification of a system as high-risk is based on a methodology and assessment criteria that the European Commission will also apply in future updates of the list of high-risk systems.

These changes can be made through so-called delegated acts, which allow for a flexible response to rapid technological developments and changing ways of using AI in practice. However, the AI Act further stipulates that even if an AI system is listed in Annex III, this does not automatically mean that it is considered high-risk.

If the system does not pose a significant risk to the health, safety, or fundamental rights of natural persons – including by not materially influencing the outcome of decision-making – it is not considered high-risk. However, this exemption does not apply if the system performs profiling of natural persons.
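The interplay between the Annex III listing, the significant-risk exemption, and the profiling override can likewise be sketched in a few lines of Python. As before, this is only an illustrative sketch under assumed inputs; in practice each boolean stands in for a documented, case-by-case legal assessment under Article 6(3), and the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Hypothetical record of the facts relevant to the Annex III route."""
    listed_in_annex_iii: bool    # the use case appears in Annex III
    poses_significant_risk: bool # significant risk to health, safety, or fundamental rights
    performs_profiling: bool     # profiling of natural persons

def is_high_risk_annex_iii(system: AnnexIIISystem) -> bool:
    # Systems outside Annex III are not covered by this classification route
    if not system.listed_in_annex_iii:
        return False
    # Profiling of natural persons: the exemption never applies
    if system.performs_profiling:
        return True
    # Exemption: an Annex III system that poses no significant risk (including
    # by not materially influencing decision outcomes) is not high-risk
    return system.poses_significant_risk

# Example: a recruitment-screening system (an Annex III employment use case)
# that profiles natural persons -> always high-risk
print(is_high_risk_annex_iii(AnnexIIISystem(True, True, True)))    # True
# Example: an Annex III system with no significant risk and no profiling
# -> exempted from the high-risk classification
print(is_high_risk_annex_iii(AnnexIIISystem(True, False, False)))  # False
```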

In this context, it is important to emphasize that the list of AI systems in Annex III is not exhaustive. The European Commission is empowered to amend this Annex by means of delegated acts, adding or modifying use cases of high-risk AI systems where both of the following conditions are met:

(a) the AI systems are intended for use in one of the areas listed in Annex III;

(b) the AI systems pose a risk of harm to health, safety or fundamental rights comparable to or greater than that posed by the systems already listed in the Annex.

To avoid ambiguity of interpretation, the AI Act requires the European Commission, after consulting the European Artificial Intelligence Board, to develop guidelines. These guidelines are to include practical examples of AI systems that are high-risk and of those that are not.

Areas in which stand-alone AI systems are considered high-risk include, in particular:

  • Biometrics – e.g., remote identification of individuals, emotion recognition, categorization based on sensitive attributes.
  • Critical infrastructure – AI systems controlling digital infrastructure, transportation, energy supply.
  • Education and training – systems that influence the admission, assessment or monitoring of students.
  • Employment and labor relations – recruitment, selection, task allocation, monitoring of employee performance.
  • Essential public and private services – e.g. assessing eligibility for benefits, creditworthiness, insurance, emergency calls.
  • Law enforcement – e.g. crime prediction, recidivism risk assessment, profiling, evidence analysis.
  • Migration and border control – assessing asylum applications, identifying individuals, assessing the risks of entry into the EU.
  • Justice and democracy – assisting the judiciary in decision-making, systems influencing election results.

Conclusion

Regulatory obligations under the AI Act focus primarily on so-called high-risk AI systems. These are technologies whose use may significantly affect public interests protected by European Union law, in particular in the areas of fundamental rights, health, and safety.

These systems are subject to specific requirements in terms of transparency, risk management, technical documentation, cybersecurity, and conformity assessment. Their classification is based either on their integration into products regulated by EU harmonization legislation (Article 6(1)) or on their socially sensitive use in the areas defined in Annex III (Article 6(2)).

The aim of the AI Act is to strike a balance between technological development and the protection of public interests, with an emphasis on responsibility, reliability, and ethical use of artificial intelligence.

At the law firm Hronček & Partners, s. r. o., we are actively involved in the legal regulation of artificial intelligence and the impact of the AI Act on the business environment. We provide comprehensive legal advice on the regulation of artificial intelligence, including risk assessment of AI systems, setting up internal processes in line with the AI Act, and training and raising client awareness of upcoming obligations.

If you are interested in expert consultation or legal assistance in preparing for the AI Act, please do not hesitate to contact us. We are ready to help you navigate the complex regulatory environment and ensure that your technologies comply with current and future European Union law requirements.


Mgr. Peter Čičala

"High-quality content is not created by copywriters, but by experts."