Meta, the parent company of Facebook and Instagram, has updated its privacy policy, effective June 26, to allow public posts, images and other personal data to be used to train its artificial intelligence (AI) models. The change has drawn considerable criticism from privacy advocates and prompted complaints to data protection regulators across Europe.

The new policy allows Meta to use this data without explicit consent, citing "legitimate interest" under the General Data Protection Regulation (GDPR) as its legal basis. It covers posts, comments and interactions with companies, but excludes private messages. Users can opt out by submitting a request explaining how the use of their data affects them, which Meta says it will consider in accordance with data protection laws.

The data protection group NOYB, founded by Max Schrems, has filed complaints in 11 European countries, arguing that Meta's reliance on the "legitimate interest" clause to process user data without consent is problematic and potentially in breach of the GDPR. The complaints come amid ongoing concern about how personal data can be used for AI training without jeopardizing users' privacy.

Impact

The updated policy has prompted some users, particularly artists, to leave platforms such as Instagram rather than have their data used for AI development, fearing that the resulting models could ultimately compete with their own work. This has raised broader questions about the ethics of using personal and creative content to train AI models.

Meta has already paid hefty fines totalling more than €1.5 billion for GDPR breaches, and its invocation of "legitimate interest" as a basis for data processing has been challenged before in contexts such as targeted advertising. The ongoing legal and regulatory scrutiny in Europe could shape how big tech companies handle user data and AI integration in the future.