Artificial intelligence (AI) is not entirely new, but its application and scope have changed dramatically in recent years. It has become one of the most important technological innovations and an integral part of everyday life, partly because significant advances in algorithms, computing power, and data availability have enabled results that were previously considered unimaginable. We feel its impact in many areas of our lives, from AI assistants to more complex applications such as disease diagnosis or predictive models in financial markets. However, the rapid development of AI brings with it a host of risks and questions about how it is used, and especially about its legal regulation. In response, the European Union has adopted the Artificial Intelligence Act, the first piece of legislation of its kind.

Reasons for adoption and purpose of the Artificial Intelligence Regulation
The adoption of legislation uniformly regulating the use of artificial intelligence was only a matter of time. Although existing legislation provides a certain degree of legal protection, it is not sufficient to address the challenges that AI may bring and is already bringing. The purpose of the Regulation is therefore to lay down uniform rules on the development, placing on the market, and use of AI systems in order to ensure that they comply with European Union law. Through uniform legislation for Member States, the European Union aims to protect EU citizens from the potential negative effects of artificial intelligence, safeguard their fundamental rights, and promote the deployment of safe and trustworthy systems throughout the EU single market. At the same time, the Regulation seeks to raise awareness of the use of artificial intelligence and to promote its development as a prerequisite for economic growth, higher living standards, and similar benefits.
Other reasons for adoption include questions of liability for damage that artificial intelligence may cause, specifically the extent to which the responsible persons are liable in the event of damage. It was also necessary to take into account the risk that AI could, for example, facilitate further violations of competition rules, particularly by companies with access to large amounts of data that they can use to eliminate their competitors.
What is an AI system and how is it defined in the AI Regulation?
The AI Regulation, in Article 3, defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
In other words, it is a technological system designed to perform tasks autonomously, that is, with the ability to act without human intervention, to adapt and learn from the information it receives, and to derive from it various outputs such as predictions, recommendations, or decisions that can affect the real world (as with autonomous vehicles) or the virtual environment (as with online shopping recommendations). An AI system can therefore exercise abilities similar to those of humans, such as learning, reasoning, and creating new things.
We encounter artificial intelligence in everyday activities such as internet search, personalized shopping recommendations, smart homes, and machine translation, often without even realizing it. It is also applied in the public sector, for example in autonomous trains, and in healthcare in the development of new medicines or in the diagnosis of diseases, as mentioned at the beginning of this article. Modern vehicles also commonly use AI in safety features and in navigation.
Which entities are affected by the Artificial Intelligence Regulation?
The Artificial Intelligence Regulation applies to public and private entities within and outside the EU if the AI system is placed on the EU market or if its use has an impact on persons located in the EU. The Regulation specifies them in more detail in Article 2 as:
- providers,
- deployers of AI systems,
- importers,
- distributors of AI systems,
- product manufacturers and affected persons located in the EU.
Perhaps the most important entity regulated by the Regulation is the provider, defined as a natural or legal person, public authority, or other entity that develops or has developed an AI system with the intention of placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.
A deployer (referred to as a user in earlier drafts), within the meaning of the Artificial Intelligence Regulation, is any natural or legal person, public authority, or other entity that uses an AI system under its authority, except when the AI system is used in the course of a personal, non-professional activity.
A distributor is a natural or legal person in the supply chain (other than the provider or importer) who makes an AI system available on the EU market.
Assessment of the risk posed by AI systems: four risk levels
The Regulation introduces an approach based on the level of risk that individual AI systems pose to society. Depending on the assessment of the level of risk, different obligations apply to AI systems. The risk levels are divided into the following categories:
- Unacceptable risk (Chapter II of the Regulation)
- High risk (Chapter III of the Regulation)
- Specific risk related to transparency (Chapter IV of the Regulation)
- Minimal risk
Under the AI Regulation, practices posing unacceptable risk are considered a clear threat to the safety, life, and rights of persons and are therefore prohibited. These include harmful AI-based manipulation and deception, harmful AI-based exploitation of vulnerabilities, assessment or prediction of the risk of individual criminal offenses, and emotion recognition in workplaces and educational institutions. The Regulation also prohibits the use of artificial intelligence for predictive policing based on profiling. Prohibited practices are set out in more detail in Article 5 of the Regulation.
High-risk systems are subject to the strictest obligations in relation to their operation, use, development, and placing on the market. Uses of AI systems considered high-risk include:
- AI safety components in critical infrastructure (e.g., transport) whose failure could endanger the life and health of citizens;
- AI solutions used in educational institutions that may determine access to education and the course of a person's working life (e.g., exam scoring);
- uses of AI in law enforcement that may interfere with fundamental rights (e.g., assessing the reliability of evidence); and similar cases.
Other AI systems that can be considered high-risk are based on Annex III of the AI Regulation, which specifies areas where the use of AI systems may pose a high risk of harm or bring about an undesirable state.
Artificial intelligence systems with limited risk, that is, those posing the specific transparency risk mentioned above, are not subject to the same strict obligations as high-risk systems, but they must still meet certain regulatory requirements set out in the Artificial Intelligence Regulation, such as informing users that they are interacting with an AI system.
What obligations will the entities concerned have to fulfill?
Depending mainly on the status of the entity concerned, the riskiness of the AI system, and the area in which the AI system is used, the AI Regulation sets out a wide range of obligations, ranging from informing employees or employee representatives, through implementing a quality management system, undergoing conformity assessment, and obtaining an EU declaration of conformity, to fulfilling registration obligations and others, which we will discuss in detail in subsequent articles.
Entry into force of the Regulation
The AI Act entered into force on August 1, 2024, and will become fully applicable on August 2, 2026, with certain exceptions:
- from February 2, 2025, the prohibitions on unacceptable-risk practices and the provisions on AI literacy obligations apply,
- from August 2, 2025, the rules applicable to general-purpose AI models apply,
- from August 2, 2027, rules for high-risk AI systems embedded in regulated products will take effect.
Conclusion
The AI Regulation is the first major step towards the legal regulation of AI in the European Union. As a pioneering legislative instrument, however, it contains a number of vague and general provisions, which is natural given that this is a legal area in its early stages of development.
For this reason, it can be expected that the adoption of the Regulation alone will not be sufficient to cover all practical and legal issues arising from the use of artificial intelligence. It will be necessary to gradually supplement the legal framework through further secondary legislation, guidelines, delegated acts, and through the interpretation of legal norms by the relevant judicial institutions.
The case law of the Court of Justice of the European Union will be of fundamental importance in this regard, as its interpretation of the provisions of the Regulation will contribute to their clarification, the establishment of consistent application practices, and the removal of interpretative ambiguities.
At our law firm, Hronček & Partners, s. r. o., we actively monitor developments in legal regulation in the field of artificial intelligence so that we can provide our clients with high-quality, up-to-date and comprehensive legal advice in this dynamically developing area.
In order to ensure a comprehensive range of services, we have also entered into partnerships with foreign certification bodies, thanks to which we can provide our clients with assistance throughout the entire process – from legal analysis and consultation to obtaining a certificate of compliance with the requirements of the Artificial Intelligence Regulation.