The European Union has intensified its scrutiny of Google, this time focusing on the use of personal data to train artificial intelligence models. The Irish Data Protection Commission (DPC) has opened an investigation to determine whether Google should have conducted a Data Protection Impact Assessment (DPIA) before processing the personal data of European Union users in developing the AI model PaLM 2.
PaLM 2, launched in May 2023, was developed to power the chatbot Bard, with the goal of competing with ChatGPT; it was later superseded by the Gemini model family. The DPC is investigating whether Google violated the General Data Protection Regulation (GDPR) by not conducting the required impact assessment, a tool designed to identify and mitigate risks in the handling of personal data.
Article 35 of the GDPR and the absence of a DPIA
Article 35 of the GDPR requires companies to conduct a Data Protection Impact Assessment when their processing activities are likely to pose a high risk to individuals' rights and freedoms. Google has so far not carried out this assessment for PaLM 2, which has raised concerns at the DPC. The model is known to have been trained on large volumes of data, including text from websites, books, code, and conversations, which may have included data belonging to EU citizens.
The DPC's investigation aims to clarify whether Google should have put measures in place to protect this data. If an infringement is confirmed, the regulator could impose fines of up to 4% of the company's annual global revenue. Such proceedings often go through multiple appeals, however, so a final outcome may take some time to emerge.
Google faces a complicated legal landscape
This new legal challenge caps a difficult week for Google, which recently lost its appeal against a €2.4 billion fine for abusing its dominant market position. Despite these setbacks, Google has said it will cooperate fully with the Irish authorities and answer all questions regarding the case.