June 8, 2024 — Meta Platforms Inc. is under scrutiny in the European Union, facing eleven complaints from consumer organizations over its data handling practices for AI model training. The complaints accuse Meta of breaching the General Data Protection Regulation (GDPR) by collecting large volumes of user data from Facebook and Instagram without obtaining proper consent.
The European Consumer Organisation (BEUC) and other groups claim that Meta's "pay-or-consent" model, under which users must either pay for an ad-free experience or consent to data collection for targeted advertising, violates the GDPR principles of lawfulness, fairness, and transparency. They describe the model as coercive, arguing that it pressures users into accepting extensive data collection.
Meta, however, argues that its practices comply with EU regulations. The company says it notifies users in advance about data usage and provides options to opt out of data processing. According to Meta, only public posts from users over 18 years old are used for AI training, and users receive a four-week notice period before their data is utilized.
The Irish Data Protection Commission (DPC) has also been involved, confirming that Meta delayed the launch of certain data practices to address the DPC's inquiries. Meta says it reviews all opt-out requests and honors them in accordance with relevant data protection laws.
The controversy comes amid Meta's aggressive investment in AI research and development. The company's AI initiatives include its language model, Llama, which underpins various AI-driven services. Despite regulatory hurdles, Meta continues to prioritize the integration of AI into its products, leveraging user data to improve these technologies.
As the situation develops, Meta could face significant fines if found in violation of the GDPR, which allows penalties of up to 4% of a company's annual global turnover for the most serious breaches.