When one of Europe’s most prominent editors is suspended for publishing AI‑fabricated quotes, the story stops being about one journalist—and becomes a stress test for how newsrooms worldwide will use generative AI without destroying public trust.
In early 2026, Peter Vandermeersch, former CEO of Mediahuis Ireland and ex–editor-in-chief of NRC, was suspended from his role as Mediahuis’s Fellow for Journalism and Society. He admitted that quotes in his newsletter and newspaper pieces were invented by generative AI tools and never spoken by the people he cited.[1][3]
This was not an obscure blogger cutting corners. It was a newsroom leader, tasked with exploring the future of journalism, who fell into the very trap he had warned others about.[2][4] The episode offers a hard, necessary lesson: AI can boost editorial productivity, but only if governance and verification remain non‑negotiable.
1. The Incident: How AI Hallucinations Took Down a Senior Journalist
Vandermeersch used tools such as ChatGPT, Perplexity and Google’s NotebookLM to generate “summaries” of reports and background material for his Substack newsletter and for articles in Mediahuis titles.[1][3][4] Instead of treating these outputs as rough notes, he lifted quotes straight from the AI-generated text into publication.
Key facts of the case:
- NRC investigated his blog posts and news articles and found numerous problematic quotes, including in pieces published by NRC itself and by the Irish Independent.[1][4]
- At least seven people confirmed they had never said the statements attributed to them, proving the quotations were AI hallucinations, not faulty memory.[3][4][5]
- In his Substack post “I am admitting my mistake,” he wrote that he had summarised reports with AI, trusted those summaries as accurate, and “wrongly put words into people’s mouths” instead of treating outputs as paraphrases or interpretation.[2][4]
⚠️ Ethical red line: Direct quotations are a core test of journalistic integrity. Presenting machine‑fabricated sentences as words spoken by real people is fabrication, not a minor error.
Mediahuis CEO Gert Ysebaert stated: “This should never have happened,” stressing that company AI rules require care, human control and transparency, and that ignoring these principles contradicts its promise of reliable journalism.[1][6]
The OECD AI incident database now lists the case as an AI incident involving reputational harm and misinformation, because hallucinations directly led to fabricated quotes being published and later removed in Ireland and the Netherlands.[5] This frames the episode as a systemic failure in AI use, not just individual misconduct.
💡 Mini-conclusion: The scandal did not start with a rogue algorithm; it started with a senior journalist outsourcing judgment to AI outputs and bypassing basic verification.
2. Why This Scandal Matters for Journalism and Public Trust
Vandermeersch was Mediahuis’s first Fellow for Journalism and Society, tasked with exploring journalism’s future and its role in democracy.[2][4] That made the breach symbolic: the figure championing ethical, future‑oriented journalism violated the AI standards he was meant to embody.
Consequences for trust:
- Fabricated quotes in mainstream titles like the Irish Independent and NRC interfered with the public’s right to truthful information and damaged confidence in those brands.[1][3][5]
- Even after corrections, the failure reinforces scepticism among audiences already wary of “mainstream media.”
- In AI risk taxonomies, hallucination‑driven misquotes are a recognised AI incident, harming businesses and the public through misinformation and loss of institutional trust.[5]
The case highlights a dangerous pattern:
- Productivity gains from generative AI—faster research, drafting, translation—tempt editors to skip verification.
- The risk is highest for direct quotations, which readers assume are rigorously checked.[3][5]
Mediahuis responded by reaffirming its AI principles of diligence, human oversight and transparency, signalling that AI governance must sit inside core editorial standards, not as a side policy.[1][6]
Ouest‑France illustrates a complementary approach: it runs interactive conferences to help students and the public understand generative AI, deepfakes and other manipulations, and how to detect them.[7] This points to a dual mandate:
- Use AI responsibly in newsroom workflows.
- Educate audiences about AI‑driven deception and verification.
💼 Mini-conclusion: The scandal is a warning shot. Treating AI as a mere productivity tool, rather than a governance and trust issue, invites similar incidents.
3. Inside the Machine: What AI Hallucinations Are and Why They Happen
To respond effectively, newsrooms must grasp the technical risk. AI hallucinations occur when a large language model generates outputs that are false, misleading or nonsensical, yet presents them confidently as facts.[8][9] In this case, models produced plausible quotes that had never been uttered, and Vandermeersch treated them as genuine.[3]
Why hallucinations happen:
- Gaps and biases in training data.
- Vague or overly broad prompts.
- Weak or missing retrieval from authoritative sources.
- A mismatch between user expectations (“it has read everything”) and what models do: statistical next‑word prediction.[8][9]
📊 Key insight: A generalist LLM does not “remember” a specific report or interview unless that text is explicitly provided. It predicts plausible language patterns, which can include invented citations, quotes and details.[8]
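This constraint can be made concrete in a workflow: never ask a model about a report without supplying the report's text. The sketch below is illustrative (the function name and prompt wording are assumptions, not any vendor's API); it simply refuses to build a prompt with no source text, because the model would otherwise predict plausible words instead of recalling real ones.

```python
def build_summary_prompt(report_text: str, question: str) -> str:
    """Build a grounded prompt: the model only sees what we give it.

    Raises if no source text is supplied, because a generalist LLM
    cannot "look up" the report -- it would predict plausible language
    instead, which is exactly how fabricated quotes arise.
    """
    if not report_text.strip():
        raise ValueError("No source text: the model would have to invent content.")
    return (
        "Answer ONLY from the report excerpt below. "
        "If the excerpt does not contain the answer, say so. "
        "Never present anything as a direct quotation.\n\n"
        f"--- REPORT EXCERPT ---\n{report_text}\n\n"
        f"Question: {question}"
    )
```

The guard encodes the editorial rule, not just a technical nicety: a prompt with no source attached is a request for invention.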
In the Mediahuis case:
- Vandermeersch relied on AI‑generated summaries almost as if they were vetted secondary sources.
- Providers and independent guidance warn that such tools can fabricate citations and quotes that never appear in the underlying documents.[1][8]
Enterprise AI guidance is clear: reliable deployment requires understanding hallucination mechanisms and building explicit prevention measures, because uncorrected hallucinations can disrupt processes and create new operational risks.[9][10]
One widely recommended mitigation is Retrieval‑Augmented Generation (RAG). Here, the model is constrained to answer based on a curated, up‑to‑date knowledge base, grounding its responses instead of relying solely on internal statistics.[10]
```mermaid
flowchart LR
A[User query] --> B[Retriever]
B --> C[Curated documents]
C --> D[LLM generator]
D --> E[Grounded answer]
style C fill:#22c55e,color:#fff
style E fill:#22c55e,color:#fff
```
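The retrieval step above can be sketched in a few lines. This is a deliberately toy version, assuming word-overlap scoring and invented document names; production RAG systems use embedding search over a vector index, but the grounding principle is the same: the prompt carries the curated source, and the model is told to stay inside it.

```python
# Toy sketch of the retrieval step in a RAG pipeline: rank curated
# documents by word overlap with the query, then ground the prompt in
# the best match. Document names and contents are illustrative.

def retrieve(query: str, documents: dict[str, str]) -> str:
    """Return the name of the curated document sharing most words with the query."""
    query_words = set(query.lower().split())

    def overlap(name: str) -> int:
        return len(query_words & set(documents[name].lower().split()))

    return max(documents, key=overlap)

def grounded_prompt(query: str, documents: dict[str, str]) -> str:
    """Build a prompt constrained to the retrieved document."""
    best = retrieve(query, documents)
    return (
        "Answer only from the source below; if it is silent, say 'not in source'.\n"
        f"SOURCE ({best}): {documents[best]}\n"
        f"QUESTION: {query}"
    )

docs = {
    "climate_report": "The 2025 climate report projects sea level rise of 30 cm.",
    "media_survey": "The survey finds trust in news fell to 40 percent.",
}
print(retrieve("How much did trust in news fall?", docs))  # → media_survey
```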
Even with RAG, experts insist that human review remains indispensable in high‑stakes domains like journalism. Models can still misinterpret sources, overgeneralise or blend facts into plausible but inaccurate narratives.[9][10]
⚠️ Mini-conclusion: Hallucinations are not a quirky edge case. They are structural in current LLMs, and no tooling removes the need for rigorous human verification.
4. A Governance Playbook: How Newsrooms Should Use Generative AI
To turn this incident into progress, newsrooms need a concrete AI governance playbook that maps onto editorial practice and addresses these risks.
4.1 Define what AI can—and cannot—do
Publishers should codify an AI policy that mirrors Mediahuis's emphasis on care, human control and transparency, and make it operational.[1][5][6]
- Allowed with oversight:
  - Background research and reading aids.
  - Idea generation and angles.
  - Headline and framing options.
  - Draft outlines and translation support.
- Strictly forbidden:
  - Generating quotes or attributions.
  - Producing unverifiable facts or “summaries” treated as primary sources.
  - Fabricating citations or paraphrases presented as direct speech.
⚡ Policy test: If you would never outsource a task to an unvetted junior freelancer, do not outsource it to a model.
4.2 Make verification non‑negotiable
Given how hallucinations arise, human verification of all AI‑supplied factual claims must be mandatory.[3][10]
Core practices:
- Absolute ban on publishing AI‑generated direct quotations unless checked against primary sources (transcripts, original articles, on‑the‑record interviews).
- Structured fact‑checking workflows that force reporters and editors to cross‑reference AI outputs against multiple independent sources.[8][10]
- Clear responsibility: the bylined journalist and assigning editor remain accountable, regardless of tool use.
```mermaid
flowchart TB
A[AI-assisted draft] --> B[Reporter verification]
B --> C[Editor review]
C --> D[Fact-check / legal]
D --> E{Hallucinations?}
E -- Yes --> B
E -- No --> F[Publish]
style E fill:#f59e0b,color:#000
style F fill:#22c55e,color:#fff
```
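The "reporter verification" step lends itself to partial automation. The sketch below, a minimal illustration rather than a production tool, extracts direct quotations from a draft and flags any that do not appear verbatim in the primary-source transcripts on file; a real workflow would also handle punctuation variants and near-paraphrase.

```python
# Minimal quote check: flag quotations in an AI-assisted draft that
# cannot be found verbatim in any primary-source transcript.
import re

def unverified_quotes(draft: str, transcripts: list[str]) -> list[str]:
    """Return quotations in the draft not found in any transcript."""
    quotes = re.findall(r'"([^"]+)"', draft)
    corpus = " ".join(transcripts).lower()
    return [q for q in quotes if q.lower() not in corpus]

draft = 'The minister said "budgets will double" and "nobody was consulted".'
transcripts = ["Transcript: ...and yes, budgets will double next year..."]
print(unverified_quotes(draft, transcripts))  # → ['nobody was consulted']
```

A flagged quote is not automatically wrong, and an unflagged one is not automatically safe: the tool narrows the search, while the journalist remains accountable for the final check against the primary source.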
4.3 Train your people, not just your models
Policies only work if journalists understand the tools. Training should cover:[7][9]
- What hallucinations are and why they occur.
- How deepfakes and synthetic media distort reality.
- How to design prompts that minimise ambiguity and reduce hallucinations.
- When to escalate to specialist verification or legal review.
Ouest‑France’s public conferences on AI, misinformation and deepfakes are a useful template: explain where AI gets information, how it can mislead, and why “healthy scepticism” is now basic literacy.[7]
4.4 Build an AI risk management framework
Generative AI use must be treated as governance, not a personal tool choice.[5][9][10]
Key elements:
- Defined roles for AI oversight within editorial leadership.
- Incident reporting channels for suspected AI‑related errors, with protection for whistleblowers.
- Periodic audits of AI‑assisted content to detect patterns of hallucinations or policy drift.
- Regular updates to policies as tools and risks evolve.
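A periodic audit can be reduced to a simple, repeatable metric. The sketch below assumes a hypothetical log of AI-assisted pieces with a `quotes_verified` flag and a 90% compliance threshold; both the field names and the threshold are illustrative choices, not an established standard.

```python
# Illustrative audit sketch: report the share of AI-assisted pieces
# whose quotations were marked source-verified, and flag the audit
# period if the share falls below a policy threshold.

def audit(pieces: list[dict], threshold: float = 0.9) -> tuple[float, bool]:
    """Return (verified share, compliant?) for one audit period."""
    if not pieces:
        return 1.0, True  # nothing AI-assisted this period
    verified = sum(1 for p in pieces if p["quotes_verified"])
    share = verified / len(pieces)
    return share, share >= threshold

log = [
    {"id": "a1", "quotes_verified": True},
    {"id": "a2", "quotes_verified": True},
    {"id": "a3", "quotes_verified": False},
]
share, ok = audit(log)
print(round(share, 2), ok)  # → 0.67 False
```

Tracking this number over successive periods is what exposes policy drift: a single unverified piece is an incident, a declining share is a governance failure.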
💡 Transparency principle: When AI materially assists a piece of journalism, explain how it was used and what safeguards applied, instead of waiting for outside investigations to expose failures.[1][6]
Conclusion: Governing AI Before It Governs You
The Vandermeersch suspension shows that even elite journalists, steeped in newsroom culture and ethics, can be tripped up when they outsource judgment to generative AI and skip basic verification.[2][3] Hallucinations are an inherent risk of current LLMs, not a rare glitch.[8][9]
For publishers, the path forward is to:
- Map where AI already touches editorial workflows.
- Define hard red lines for tasks AI must never perform.
- Embed verification, fact‑checking and incident reporting into daily routines.
- Train journalists and audiences alike in AI literacy and scepticism.[7][9][10]
If you are designing AI policies for a newsroom or media group, use this incident as a live case study. Build governance, review and training programmes now—before the next hallucination appears under your masthead.
Sources & References (9)
1. Mediahuis suspends former Irish boss Peter Vandermeersch after he admits misuse of AI in new role (19 Mar 2026)
2. Mediahuis suspends senior journalist over AI-generated quotes in newsletter
3. Mediahuis suspends senior journalist for using fabricated quotes produced by AI
4. Senior European journalist suspended over AI-generated quotes
5. Senior Journalist Suspended for Publishing AI-Generated Fake Quotes
6. Senior European journalist suspended over AI-generated quotes. Peter Vandermeersch says he ‘fell into a trap of hallucinations’, after investigation by the newspaper where he was once editor-in-chief.
7. Information in the age of artificial intelligence: an interactive conference by «Ouest-France» [in French]
8. AI hallucinations: the complete guide to preventing them [in French]
9. The complete method for avoiding AI hallucinations and guaranteeing reliable results [in French]