Italy’s privacy watchdog, Garante, announced on Friday that it fined OpenAI €15 million ($15.59 million) following an investigation into the ChatGPT application’s handling of users’ personal data.
In a statement, Garante said that OpenAI “processed users’ personal data to train ChatGPT without first identifying an adequate legal basis and violated the principle of transparency and the related information obligations towards users.”
OpenAI also failed to notify the authority of the data breach it suffered in March 2023. Garante added that OpenAI did not provide mechanisms for age verification, creating the risk of exposing children under 13 to responses unsuitable for their level of development and self-awareness.
OpenAI ordered to carry out a communication campaign
In a bid to ensure effective transparency when processing personal data, Italy’s privacy watchdog ordered OpenAI to carry out “a 6-month institutional communication campaign on radio, television, newspapers and the internet.” The content of the campaign, which must be agreed with Garante, is intended to promote public understanding and awareness of how ChatGPT works, in particular how data is collected from users and non-users to train generative artificial intelligence.
“ChatGPT users and non-users should be made aware of how to oppose the training of generative artificial intelligence with their personal data and, therefore, be effectively placed in the position to exercise their rights under the General Data Protection Regulation (GDPR),” the statement added.
OpenAI has not yet commented on the fine. However, it has previously said it believes its practices are in line with the European Union’s privacy laws.
Italy blocks ChatGPT
Last year, Italy became the first Western country to block OpenAI’s ChatGPT over privacy concerns. The watchdog said that the app had experienced a data breach involving user conversations and payment information. It added that there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
At the time, Italy’s privacy watchdog gave OpenAI 20 days to say how it would address these concerns, under penalty of a fine of up to €20 million or 4 percent of annual revenue. In its latest decision, Garante set the fine below that threatened maximum, stating that it had taken the company’s cooperative attitude into account.
OpenAI reactivated its service in Italy after it addressed several issues, including giving users the right to refuse consent to the use of their personal data to train its algorithms.