ZDF and the AI trap: Two mistakes, one apology, and suddenly credibility is at stake

Published on: February 20, 2026 · Categories: Legal, Tech & E-Commerce · Reading time: 3 min.
Hakan Tok writes articles on technical topics in the blog Recht 24/7 Love & Law.

Image: 1take1shot / Shutterstock.com

Sometimes a single television report is enough to put an entire editorial team in a difficult position. That's exactly what happened to ZDF: during a report on the US immigration authority ICE on the "heute journal" news program, images appeared that should never have been broadcast. And because it wasn't just one error, presenter and deputy editor-in-chief Anne Gellinek later spoke of a "double mistake" and publicly apologized. In the world of news, that amounts to a minor earthquake, because apologies are not made for trivial matters, but for things that undermine trust.

Why images are so dangerous

Text can be corrected relatively quickly. Images, on the other hand, stick in people's minds. When viewers see people being led away by officers, possibly even with children in the picture, it seems like definitive proof. That is precisely why images in news programs are so sensitive: they often determine, within seconds, what the audience believes. If this material is inaccurate or misrepresented, information becomes illusion.

Mistake number 1: AI material – and the label was gone

ZDF initially stated that AI-generated images had been used that should have been recognizable as such, but the label was lost somewhere in the technical production process. For viewers, this means they see something that looks like reality without any indication that it was artificially created. That is not merely "unfortunate"; it violates the basic contract between news outlets and their audience: "You can trust what you see."

Mistake number 2: Real video, wrong story

Later, the broadcaster also admitted that the report included footage unrelated to the current topic. In other words, material from a different context and an earlier time was used in a way that made it appear to show a current event. Even if such images are "real," the effect is devastating when the context is wrong: what is shown is not reality, but a narrative that merely happens to be dressed up with fitting images.

ZDF takes a hard line—at least on paper

After the incident, the broadcaster made it clear that AI images of people or political events have no place in the news section—unless the report is specifically about AI fakes. And even then, a small "AI" label is often not enough, because viewers respond emotionally to images and easily overlook warnings.

Political repercussions and the bitter core

Minister of State for Culture Weimer described the incident as unpleasant and warned that such things undermine credibility. At the same time, he praised ZDF for responding and apologizing. However, an apology does not fix a system. The crucial question is whether controls are really effective before something is broadcast.

The critical point: this does not look like a single slip-up, but like a glimpse into the engine room. When AI material slips through and old footage ends up in the wrong context, it is not merely human inattention; it means the process lacks effective safeguards. And when news outlets start working with "almost matching" images, they come dangerously close to what they otherwise criticize on social media.


Source: deutschlandfunk.de

Avoid similar mistakes and protect your credibility! Book a consultation with our media law experts now.

At a fixed price of EUR 169 (gross)