Artificial intelligence has become part of our everyday reality, from personal assistants to the algorithms behind social media feeds. OpenAI's ChatGPT tool continues to gain popularity, and the company recently introduced a new tool called SORA. With growing influence come questions about legal regulation. In this article, we look at how the legal world is adapting to the new challenges posed by artificial intelligence, whether videos from social networks have been used to train AI models, and what steps Italy is currently taking to protect personal data in connection with SORA.

SORA is an artificial intelligence tool that generates videos of up to one minute in length from simple text prompts. It is currently available only to a limited group of visual artists, designers, and filmmakers, so that OpenAI can gather feedback and improve the model. The date on which SORA will become available to all users is not yet known.
In our article on the legal status of artificial intelligence, we discussed the advantages and risks associated with the use of AI in everyday life. We also analyzed the legislative framework at the time and the possibilities for regulation. What has changed since then? Are the advantages and risks of AI still relevant? What regulatory conditions apply to SORA?
The OpenAI website describes the company's plans to take important safety steps, such as developing tools to recognize misleading content, including a detection classifier that can determine whether a video was generated by SORA. The company also states that its text classifier will check text prompts and reject those that violate its usage policies, namely prompts requesting extreme violence, sexual content, hateful imagery, images of celebrities, or the intellectual property of others. But what about legal regulation and the protection of rights?
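How such a prompt check works in practice can be illustrated with a short sketch. The classifier OpenAI uses for SORA has not been published, so the example below relies on OpenAI's publicly documented Moderation API purely as a stand-in; the openai Python package, the OPENAI_API_KEY environment variable, the model name, and the is_prompt_allowed helper are assumptions made for illustration, not a description of SORA's actual safeguards.

    # Illustrative prompt screening before video generation (not SORA's real pipeline).
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def is_prompt_allowed(prompt: str) -> bool:
        """Return True if the moderation model does not flag the prompt."""
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=prompt,
        )
        return not result.results[0].flagged

    prompt = "A one-minute video of a quiet city street at dusk"
    if is_prompt_allowed(prompt):
        print("Prompt passed the safety check and could be sent to the video model.")
    else:
        print("Prompt was rejected by the safety check.")

Whatever classifier OpenAI ultimately deploys, the legal point is the same: the screening takes place before any video is generated or published, which is relevant to the transparency and oversight obligations discussed below.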
European Union regulation
The European Union's intention is to ensure that artificial intelligence is safe and trustworthy. To that end, the EU has prepared a new Regulation laying down harmonized rules on artificial intelligence, known as the AI Act, which is also the first comprehensive legal framework for AI worldwide. The European Parliament approved the Regulation on Wednesday, March 13, 2024. The text is still subject to a final legal-linguistic review and must be formally endorsed by the Council; it will enter into force 20 days after its publication in the Official Journal and will become fully applicable 24 months after its entry into force, with the exception of certain provisions.
The Regulation classifies AI systems into four risk categories and imposes corresponding obligations on providers and other operators before such systems are placed on the market. Understanding the new EU framework helps in assessing how the output of the SORA tool could be regulated. SORA's ability to generate realistic videos places it in the high-risk category under the Regulation, mainly because of its potential misuse for creating deepfakes or spreading misinformation. The new legislation may require transparency about how SORA works, such as an explanation of the video generation process. OpenAI will have to implement strict risk management measures to mitigate potential harm, which may include, among other things, bias detection. The Regulation also requires that high-risk systems be subject to human oversight aimed at preventing or minimizing risks; some commentators believe this oversight could take the form of human review of generated videos before they are published.
Not only intellectual property rights at stake
OpenAI's CTO stated that the data used to train the SORA tool was "publicly available data and licensed data," but could not confirm whether videos publicly available on YouTube or Meta's social networks, such as Facebook and Instagram, were also used.
The Italian Data Protection Authority (the Garante) is known for its earlier move to ban ChatGPT in Italy over alleged breaches of data protection law; access to ChatGPT was later restored. The Authority has now opened an investigation into the SORA service. It is seeking clarification from OpenAI on its use of personal data, in order to determine whether the company complies with European legislation and properly informs both users and non-users about how their data is processed. As part of the investigation, the Authority has asked the company for information about the algorithm used to generate the videos, the data collected and used, and the availability of the service to users in Italy and the EU.
Institutions across EU countries are examining the impact of artificial intelligence on citizens' rights and working to ensure that the EU's goal of safe and trustworthy AI is achieved. The EU is also establishing a European AI Office, intended to serve as the center of expertise on artificial intelligence across the EU.