Disinformation in the digital space: How is the European Union strengthening internet security?

2.5.2025 | Author: Róbert Hronček

The misuse of disinformation narratives by some political parties poses a serious threat to democratic competition.


The spread of disinformation in Slovakia remains a significant problem that affects public opinion, media freedom, and political dynamics.

Surveys show that a significant part of the population is susceptible to conspiracy theories. Even top political leaders appear in "alternative" media outlets known for spreading disinformation and conspiracy theories, raising concerns about media freedom, the integrity of information, and democratic processes.

High levels of polarization, a lack of digital literacy, and the consumption of fast-moving information without checking its accuracy or context are fertile ground for the uncontrolled spread of disinformation online.

Solutions require a multifaceted approach, including legislative measures, media literacy initiatives, and support for independent journalism to protect democratic integrity.

The European Union is seeking to protect democratic and electoral processes and civic debate through a number of acts regulating the functioning of digital technologies in the EU, including the Digital Services Act (DSA).

The aim of the DSA is to limit the unchecked power of large digital platforms by requiring them to assess and mitigate the risks associated with their influence on protected interests. This is an important safeguard against disinformation, although its enforcement may be challenging and cause transatlantic tensions.

Online platforms providing services to European users must systematically identify, assess, and mitigate risks related to content, ranging from hate speech to disinformation and election interference. Operators face potentially high fines of up to six percent of their global turnover.

However, it is not appropriate to label European rules on expression as censorship. No provision of the DSA requires platforms to remove lawful content. It obliges them to detect and counter systemic manipulation tactics, in particular during elections.

Companies are not required to block all user expressions before they are posted, only to take measures to minimize illegal content and remove it when it is identified as illegal. At the same time, the DSA discourages excessive removal and requires platforms to publish transparency reports on removal requests, justify their decisions, and offer users appeal mechanisms.

The DSA's risk assessment and management obligations were expected to compel platforms to identify threats in the digital space and adopt meaningful reforms. However, the risk assessments published under the DSA in November 2024 focus too narrowly on election integrity, neglecting the broader risks of disinformation to democracy.

For democracy to function, civic debate must be inclusive, pluralistic, and accessible, must recognize and respect differences in opinion and socio-political differences, must focus on facts and information, and must create space for citizen engagement and representation in decision-making processes.

This is particularly critical in electoral processes, but solutions must be comprehensive and platforms cannot focus too narrowly on the issue of election interference.

If the digital public space is plagued by disinformation, polarization, and suppression of legitimate expression over the long term, then measures taken to ensure the integrity of the electoral process itself are merely reactive and insufficient. The recent approval of codes of conduct and their incorporation into the DSA could also help.

Although compliance is voluntary, signatories undertake to honour the commitments they sign up to. For signatories designated as very large online platforms (VLOPs) and very large online search engines (VLOSEs), this can help ensure that appropriate measures are put in place to mitigate risks.

The Code of Conduct on Countering Online Hate Speech+ was signed by 12 signatories: seven designated VLOPs (Facebook, Instagram, LinkedIn, Snapchat, TikTok, X and YouTube) and five other signatories (Dailymotion, Jeuxvideo.com, Microsoft, Rakuten Viber, Twitch). The Commission and the European Digital Services Board approved its incorporation into the DSA framework on January 20, 2025.

The Code of Practice on Disinformation is a unique framework agreed by representatives of online platforms, leading technology companies and advertising industry players, fact-checkers, researchers and civil society organizations to combat disinformation on a voluntary and self-regulatory basis.

Adopted by signatories in 2018 and subsequently revised and strengthened in 2022 following guidance from the European Commission, the Code currently has more than 40 signatories, including major VLOPs and VLOSEs – Google Search & YouTube (Google), Instagram and Facebook (Meta), Bing and LinkedIn (Microsoft) and TikTok.

On February 13, 2025, the European Digital Services Board and the Commission assessed the Code of Practice on Disinformation and concluded that it meets the conditions for codes of conduct under the DSA. If fully implemented, its strict commitments and detailed measures together constitute a strong set of mitigating measures and can serve as a relevant criterion for determining a platform's compliance with the DSA.

The incorporation takes effect on July 1, 2025. From that date, the Code of Practice will serve as a relevant criterion for determining compliance with the DSA.

The value of these commitments lies in the fact that they are the result of an agreement between a broad group of entities, based on existing best practices in the industry. The Code of Practice on Disinformation contains 44 commitments and 128 specific measures, such as:

  • demonetisation of disinformation – limiting financial incentives for those who disseminate it;
  • transparency of political advertising and more effective labeling so that users can recognize political advertising;
  • ensuring the integrity of services, including limiting fake accounts, bot amplification and other manipulative behavior used to spread disinformation;
  • empowering users, researchers and the fact-checking community to better identify disinformation, including broader access to data and fact-checking coverage across the EU.

The incorporation of the Code of Conduct+ and the Code of Practice on Disinformation into the DSA represents a significant development in the EU's regulatory approach to online content management. At the same time, it is concerning that some signatories have withdrawn from the chapter on fact-checking.

However, the European Commission stresses that it considers the code to be sufficient only if it is fully implemented. The transition from voluntary commitments to enforceable obligations can ensure that digital platforms remain responsible for their content moderation policies while promoting a more transparent and secure online environment and reducing the risk of manipulation of public opinion and the spread of disinformation.

Furthermore, this integration serves as a model for regulatory frameworks around the world and shows how binding obligations can be combined with industry cooperation to mitigate harmful content while preserving freedom of expression.

At the same time, I am in favor of introducing stricter regulation and more consistent sanctions not only for platforms but also for the dissemination of disinformation itself, as those who spread it often profit from this activity, either through advertising revenue or by benefiting various interest groups.

The misuse of disinformation narratives by some political parties also poses a serious threat to democratic competition. In the long term, however, the key solution is to systematically educate the public in media literacy, critical thinking, and fact-checking.

A prerequisite for this is a genuine interest on the part of government officials in ensuring that citizens have access to this type of education and in rejecting the use of disinformation as a tool of political manipulation.


Róbert Hronček

JUDr. Róbert Hronček is the founder and managing partner of the law firm Hronček & Partners. He specializes in commercial law, regulation, compliance, and legal aspects of doing business in dynamically changing industries. Thanks to his extensive experience, he provides strategic advice to companies of all sizes, from innovative start-ups to established firms and corporations. As a visionary leader of the law firm, he actively shapes the future of legal services through innovation, a modern approach to consulting, and the digitization of legal processes. He focuses on creating valuable partnerships that bring legal certainty and comprehensive services to clients. In addition to his legal practice, he is an active venture capital investor, supporting the growth and development of promising technology and innovative companies. His expert commentary reflects not only legislative changes but also broader economic and technological trends shaping the business environment.