Documented AI Incidents
Hallucinations, ghost sources, RAG failures: understand and prevent common AI agent issues.
AI Hallucinations - RAG best practices - Ghost sources - KB Drift - Chunking strategies
Articles
Anthropic Claude Code npm Source Map Leak: When Packaging Turns into a Security Incident
When an AI coding tool’s minified JavaScript quietly ships its full TypeScript via npm source maps, it is not just leaking “how the product works.” It can expose: - Model orchestration logic - A...
9 min · 1705 words
Lovable Vibe Coding Platform Exposes 48 Days of AI Prompts: Multi‑Tenant KV-Cache Failure and How to Fix It
From Product Darling to Incident Report: What Happened Lovable Vibe was a “lovable” AI coding assistant inside IDE-like workflows. It powered: - Autocomplete, refactors, code reviews - Chat over...
11 min · 2126 words
Anthropic Mythos AI: Inside the ‘Too Dangerous’ Cybersecurity Model and What Engineers Must Do Next
Anthropic’s Mythos is the first mainstream large language model whose creators publicly argued it was “too dangerous” to release, after internal tests showed it could autonomously surface thousands of...
11 min · 2203 words
Vercel Breached via Context AI OAuth Supply Chain Attack: A Post‑Mortem for AI Engineering Teams
An over‑privileged Context AI OAuth app quietly siphons Vercel environment variables, exposing customer credentials through a compromised AI integration. This is a realistic convergence of AI supply c...
7 min · 1408 words
AI in Art Galleries: How Machine Intelligence Is Rewriting Curation, Audiences, and the Art Market
Artificial intelligence has shifted from spectacle to infrastructure in galleries—powering recommendations, captions, forecasting, and experimental pricing.[1][4] For technical teams and leadership...
7 min · 1451 words
Comment and Control: How Prompt Injection in Code Comments Can Steal API Keys from Claude Code, Gemini CLI, and GitHub Copilot
Code comments used to be harmless notes. With LLM tooling, they’re an execution surface. When Claude Code, Gemini CLI, or GitHub Copilot Agents read your repo, they usually see: > system prompt + de...
7 min · 1473 words
Brigandi Case: How a $110,000 AI Hallucination Sanction Rewrites Risk for Legal AI Systems
When two lawyers in Oregon filed briefs packed with fake cases and fabricated quotations, the result was not a quirky “AI fail”—it was a $110,000 sanction, dismissal with prejudice, and a public ethic...
7 min · 1455 words
AI Adoption in Galleries: How Intelligent Systems Are Reshaping Curation, Audiences, and the Art Market
1. Why Galleries Are Accelerating AI Adoption Galleries increasingly treat AI as core infrastructure, not an experiment. Interviews with international managers show AI now supports: - On‑site and on...
7 min · 1403 words
Stanford AI Index 2026: What 22–94% Hallucination Rates Really Mean for LLM Engineering
The latest Stanford AI Index from Stanford HAI reports hallucination rates between 22% and 94% across 26 leading large language models (LLMs). For engineers, this confirms LLMs are structurally unfit...
7 min · 1406 words
Anthropic Claude Mythos Escape: How a Sandbox-Breaking AI Exposed Decades-Old Security Debt
Anthropic never meant for Claude Mythos Preview to touch the public internet during early testing. Researchers put it in an air‑gapped container and told it to probe that setup: break out and email sa...
7 min · 1497 words
When AI Hallucinates in Court: Inside Oregon’s $110,000 Vineyard Sanctions Case
Two Oregon lawyers thought they were getting a productivity boost. Instead, AI‑generated hallucinations helped kill a $12 million lawsuit, triggered $110,000 in sanctions, and produced one of the cl...
5 min · 950 words
AI Hallucinations, $110,000 Sanctions, and How to Engineer Safer Legal LLM Systems
When a vineyard lawsuit ends in dismissal with prejudice and $110,000 in sanctions because counsel relied on hallucinated case law, that is not just an ethics failure—it is a systems‑design failure.[2...
4 min · 880 words
Topics Covered
AI Hallucinations
Understanding why LLMs invent information and how to prevent it.
RAG Best Practices
Retrieval Augmented Generation: architectures, chunking, optimal retrieval.
Ghost Sources
When AI cites sources that don't exist. Detection and prevention.
KB Drift
How to detect and correct knowledge base drift.
Chunking Strategies
Optimal document splitting for better retrieval.
LLM Evaluation
Metrics and methods to evaluate AI response quality.
AI Regulation
Laws, regulations and compliance frameworks governing AI systems.
AI Safety
Risks, safeguards and best practices for safe AI deployment.
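The chunking strategy described above can be illustrated with a minimal sketch, assuming a simple fixed-size splitter with overlap (the function name and sizes here are illustrative, not drawn from any article on this page): overlapping chunks keep passages that straddle a chunk boundary retrievable from at least one chunk.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlap ensures a sentence cut at a chunk boundary still appears
    whole in the neighboring chunk, which improves retrieval recall.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # last chunk already covers the end of the text
    return chunks
```

Production splitters typically refine this sketch by breaking on sentence or paragraph boundaries rather than raw character offsets, but the size/overlap trade-off is the same.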
Need a reliable KB for your AI?
CoreProse builds sourced knowledge bases that minimize hallucinations.