AI training: Court puts the brakes on panic - and gives Meta the green light for public posts
Associations against Big Tech: wave of lawsuits across Europe
From Amsterdam to Zagreb, consumer associations and data protection NGOs are firing back against "Big Tech". Big names such as Somi (NL), Noyb (Vienna) and the ICCL (Ireland) are at the starting line. Their tool: the Representative Actions Directive (EU) 2020/1828, which allows qualified entities to claim damages on behalf of consumers - across borders. That sounds powerful, but there is a catch: Europe lacks a uniform compass for civil proceedings. Unlike the GDPR's regime for supervisory authorities, there is no real "consistency mechanism" for courts. The result: forum shopping, divergent standards, and a lot of uncertainty - especially on new topics such as AI training.
The spark: Meta's plan for public content
Meta wants to use publicly visible posts from Facebook and Instagram to train its AI models. Justification: legitimate interest; risks are technically limited and an opt-out is possible at any time. Associations are critical of this: business objectives are being placed above the rights of users. At the same time, summary proceedings are underway - for example before the Higher Regional Court of Schleswig. In Belgium, Somi is demanding a flat rate of 350 euros per adult and up to 2,000 euros for children under 13 - sums that can quickly run into the tens of millions.
Copyright law is also adding fuel to the fire: creatives and collecting societies are demanding licenses. The FAZ has moved the topic to its business pages, author Nina George speaks of "creative robbery" and estimates the value of a novel at 60,000 euros. Publishers are warning of dwindling royalties in the face of AI-generated songs - and want a share of subsequent revenues. A separate conflict, but one that increases the overall pressure on the platforms.
The signal from Cologne: three guard rails instead of bans
Into this climate lands a decision by the Higher Regional Court of Cologne (23 May 2025, ref. 15 UKl. 2/25). The NRW consumer advice center wanted to prohibit Meta's AI training - the Senate rejected this and allowed the use of publicly visible content posted by adult users. The reasoning is sober and rests on three key points:
Legitimate interest: Meta's goal of building regionally appropriate, better language models is "topical, concrete and not speculative". Economic interests can be justified - provided there are no milder, equally effective means. Because the data is public and a simple objection is possible, user interests do not prevail.
No DMA violation: The content is broken down into tokens and fed into an unstructured dataset. There is no profile-building merging of data across platform boundaries. This means the core element of the Digital Markets Act prohibition is missing: the linking of two identifiable profiles.
Sensitive data (Art. 9 GDPR): The general ban on processing is read narrowly. Following ECJ case law on search engines, intervention is warranted above all when data subjects request deletion. A blanket halt to training would block European AI research. Those who make content public themselves are less in need of protection; an opt-out removes their contributions.
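The court's tokenization point - public text reduced to tokens in an unstructured pool, with no link back to a user profile - can be illustrated with a toy sketch. This is not Meta's actual pipeline; the tokenizer, field names and sample posts below are invented for illustration only:

```python
# Illustrative sketch only - NOT Meta's pipeline. It shows the idea the
# court describes: text is split into tokens and pooled, while the user
# identifier is dropped, so no profile can be reconstructed or merged.

def tokenize(text: str) -> list[str]:
    # Toy whitespace tokenizer; real systems use subword schemes (e.g. BPE).
    return text.lower().split()

# Hypothetical public posts (invented sample data).
posts = [
    {"user_id": "u123", "text": "Public post about cooking"},
    {"user_id": "u456", "text": "Another public post"},
]

# The training corpus keeps only the tokens - "user_id" never enters it,
# which is why the court sees no linking of identifiable profiles.
corpus: list[str] = []
for post in posts:
    corpus.extend(tokenize(post["text"]))

print(corpus)
```

The point of the sketch is the asymmetry: the pooled corpus still carries the language value of the posts, but the mapping back to a person is discarded at ingestion.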
Economic consequences: Plannability instead of standstill
The decision is in line with the Irish data protection authority's previous approach: after improvements to de-identification, transparency and opt-out, the DPC did not impose a ban in 2024 but required a progress report by October 2025. Court and supervisory authority are thus moving in parallel. For companies, this means: de-identification, easily accessible opt-outs and no profile mixing are key. Those who implement this properly gain leeway - without leaving data subjects without rights.
The Cologne line also puts the brakes on forum shopping. If other courts adopt similar criteria, the tactical advantage of picking a favorable venue for collective actions diminishes. Predictable rules strengthen Europe as a business location: investment in AI flows to where the rules of the game are clear.
Open construction sites: Copyright and Brussels
The matter is not completely settled. Creative professionals continue to insist on remuneration, and the NRW consumer advice center is considering proceedings on the merits. A referral to the ECJ on Art. 9 GDPR remains on the table - a ruling would take years. Guidance on the AI Act is still expected from Brussels in 2025. De-identification plus opt-out could become the standard. If policymakers confirm this course, Cologne will serve as a blueprint for Europe.
The Higher Regional Court of Cologne sets out a pragmatic triad: public data yes - de-identified, with an opt-out, and without profile merging. This protects fundamental rights and leaves room for innovation.
Good decision - but not a free ride
Anyone who posts publicly should not be surprised that machines are reading along. But: opting out must be as simple as a "like" - and creators need real deals instead of PR-speak. Associations that tour Europe with maximum demands risk side effects: less investment, less speed. Rules must be clear, protection strong, innovation allowed - otherwise Europe will be watching the future of AI from the stands.
Do you want to be legally protected when using AI? Let our experts advise you. Book your legal advice now!