Thu, Jan 02 2025
The U.S.-based artificial intelligence startup OpenAI has been fined €15 million by Italy's data protection body, Garante, for its ChatGPT service's noncompliance with legal requirements on the management of personal data.
The Associated Press reports that the authority's examination found that OpenAI violated the transparency standards set out by EU privacy regulations and lacked a solid legal basis for processing user data.
Garante emphasized that OpenAI violated users' right to privacy by using a large amount of personal data to train the ChatGPT algorithm without adequately notifying users. Referencing a previous interim suspension of ChatGPT in Italy in 2023, which was lifted after OpenAI made compliance changes, the watchdog stands by its judgment; OpenAI has called the punishment "disproportionate" and announced its intention to appeal.
The investigation also revealed that there was no reliable age verification system in place to stop children under the age of 13 from viewing potentially offensive material produced by ChatGPT. In response, Garante has ordered OpenAI to run a six-month educational campaign across Italian media outlets to raise public awareness of ChatGPT's data collection practices.
This enforcement action is part of broader global regulatory initiatives, especially in the United States and Europe, where regulators are closely examining AI technology. The goal of the current discussions and legislative actions, such as the EU's extensive AI Act, is to reduce the risks connected with AI systems and to ensure that they adhere to user privacy and data protection regulations.