Artificial intelligence has become part of our everyday reality, from personal assistants to the algorithms behind social media feeds. OpenAI’s ChatGPT continues to gain popularity, and the company recently introduced a new tool called SORA. With its growing influence come questions of legal regulation. In this article, we examine how the legal world is adapting to the new challenges posed by artificial intelligence, whether videos from social media were used to train SORA, and what steps Italy is currently taking to protect personal data in connection with the tool.
SORA is an artificial intelligence program that generates videos of up to one minute in length from short text prompts. It is currently available only to a select group of visual artists, designers, and filmmakers, whose feedback is being used to further refine the model. A date for general availability has not yet been announced.
In our article on the legal status of artificial intelligence, we discussed the benefits and risks associated with using AI in everyday life. We also analyzed the legislative framework at the time and potential regulatory options. What has changed since then? Are the benefits and risks of AI still relevant? What regulatory conditions apply to the SORA tool?
The OpenAI website provides information about the company’s plans to take important steps to ensure safety, such as developing tools to identify misleading content, as well as a detection tool capable of determining when a video was generated by SORA. The company also states that its text classifier will review and reject text prompts that violate the company’s terms of use. These include prompts that request extreme violence, sexual content, hateful imagery, depictions of celebrities, or the intellectual property of others. But what is the reality regarding legal regulation and the protection of rights?
European Union Regulation
The European Union’s aim is to ensure that artificial intelligence is safe and trustworthy. To that end, the EU has drafted a new Regulation establishing harmonized rules in the field of artificial intelligence, the AI Act, which represents the first comprehensive legal framework for AI worldwide. On Wednesday, March 13, 2024, the European Parliament approved the Regulation. It is still subject to a final review by legal experts and must be formally approved by the Council. It will enter into force twenty days after its publication in the Official Journal of the EU and will become fully applicable 24 months after its entry into force, with the exception of a few provisions that follow a different timeline.
The Regulation classifies AI systems into four risk categories and sets out corresponding obligations, primarily for providers, before a product may be placed on the market. Understanding the new EU framework is essential for assessing how SORA’s output could be regulated. SORA’s ability to generate realistic videos could place it among the higher-risk systems under the Regulation, particularly given the potential for misuse in creating deepfakes or disinformation. The new legislation may require transparency regarding SORA’s operation, such as disclosure of how videos are generated and labeling of AI-generated content. OpenAI would have to implement strict risk management measures to mitigate potential harm, which may include, among other things, bias detection. The Regulation requires that high-risk systems be subject to human oversight aimed at preventing or minimizing risks; some believe this oversight could involve human review of generated videos before their release.
Impact Beyond Intellectual Property Rights
OpenAI’s CTO stated that the data used to train SORA consisted of “publicly available data and licensed data,” but could not confirm whether publicly available videos from YouTube or from Meta’s platforms, such as Facebook and Instagram, were also used.
The Italian Data Protection Authority is known for temporarily banning ChatGPT in Italy over alleged violations of data protection law; access was later restored. The Authority has now launched an investigation into the services offered by SORA. It is requesting clarification from OpenAI regarding the use of personal data, in order to determine whether the company complies with European legislation and whether it informs both users and non-users about how their data is processed. As part of the investigation, the Authority is asking the company to provide information on the algorithm used to generate videos, on the data collected and used, and on the availability of the service to users in Italy and the EU.
Institutions in EU countries are investigating how artificial intelligence impacts citizens’ rights and ensuring that the EU’s goal of safe and trustworthy AI is met. The EU plans to establish a European Artificial Intelligence Office, which will serve as a center of expertise on artificial intelligence across the EU.