Articles

  • Anthropic Claude Code npm Source Map Leak: When Packaging Turns into a Security Incident
    📄Security

    When an AI coding tool’s minified JavaScript quietly ships its full TypeScript via npm source maps, it is not just leaking “how the product works.” It can expose: - Model orchestration logic - A...

    9 min · 1705 words
  • Lovable Vibe Coding Platform Exposes 48 Days of AI Prompts: Multi‑Tenant KV-Cache Failure and How to Fix It
    🌀Hallucinations

    From Product Darling to Incident Report: What Happened. Lovable Vibe was a “lovable” AI coding assistant inside IDE-like workflows. It powered: - Autocomplete, refactors, code reviews - Chat over...

    11 min · 2126 words
  • Anthropic Mythos AI: Inside the ‘Too Dangerous’ Cybersecurity Model and What Engineers Must Do Next
    🌀Hallucinations

    Anthropic’s Mythos is the first mainstream large language model whose creators publicly argued it was “too dangerous” to release, after internal tests showed it could autonomously surface thousands of...

    11 min · 2203 words
  • Vercel Breached via Context AI OAuth Supply Chain Attack: A Post‑Mortem for AI Engineering Teams
    📄Security

    An over‑privileged Context AI OAuth app quietly siphons Vercel environment variables, exposing customer credentials through a compromised AI integration. This is a realistic convergence of AI supply c...

    7 min · 1408 words
  • AI in Art Galleries: How Machine Intelligence Is Rewriting Curation, Audiences, and the Art Market
    🛡️Safety

    Artificial intelligence has shifted from spectacle to infrastructure in galleries—powering recommendations, captions, forecasting, and experimental pricing.[1][4] For technical teams and leadership...

    7 min · 1451 words
  • Comment and Control: How Prompt Injection in Code Comments Can Steal API Keys from Claude Code, Gemini CLI, and GitHub Copilot
    📄Security

    Code comments used to be harmless notes. With LLM tooling, they’re an execution surface. When Claude Code, Gemini CLI, or GitHub Copilot Agents read your repo, they usually see: > system prompt + de...

    7 min · 1473 words
  • Brigandi Case: How a $110,000 AI Hallucination Sanction Rewrites Risk for Legal AI Systems
    🌀Hallucinations

    When two lawyers in Oregon filed briefs packed with fake cases and fabricated quotations, the result was not a quirky “AI fail”—it was a $110,000 sanction, dismissal with prejudice, and a public ethic...

    7 min · 1455 words
  • AI Adoption in Galleries: How Intelligent Systems Are Reshaping Curation, Audiences, and the Art Market
    🛡️Safety

    1. Why Galleries Are Accelerating AI Adoption. Galleries increasingly treat AI as core infrastructure, not an experiment. Interviews with international managers show AI now supports: - On‑site and on...

    7 min · 1403 words
  • Stanford AI Index 2026: What 22–94% Hallucination Rates Really Mean for LLM Engineering
    🌀Hallucinations

    The latest Stanford AI Index from Stanford HAI reports hallucination rates between 22% and 94% across 26 leading large language models (LLMs). For engineers, this confirms LLMs are structurally unfit...

    7 min · 1406 words
  • Anthropic Claude Mythos Escape: How a Sandbox-Breaking AI Exposed Decades-Old Security Debt
    🛡️Safety

    Anthropic never meant for Claude Mythos Preview to touch the public internet during early testing. Researchers put it in an air‑gapped container and told it to probe that setup: break out and email sa...

    7 min · 1497 words
  • When AI Hallucinates in Court: Inside Oregon’s $110,000 Vineyard Sanctions Case
    🌀Hallucinations

    Two Oregon lawyers thought they were getting a productivity boost. Instead, AI‑generated hallucinations helped kill a $12 million lawsuit, triggered $110,000 in sanctions, and produced one of the cl...

    5 min · 950 words
  • AI Hallucinations, $110,000 Sanctions, and How to Engineer Safer Legal LLM Systems
    🌀Hallucinations

    When a vineyard lawsuit ends in dismissal with prejudice and $110,000 in sanctions because counsel relied on hallucinated case law, that is not just an ethics failure—it is a systems‑design failure.[2...

    4 min · 880 words

Topics Covered

  • 🌀 AI Hallucinations: Understanding why LLMs invent information and how to prevent it.
  • 🔍 RAG Best Practices: Retrieval Augmented Generation architectures, chunking, and optimal retrieval.
  • 👻 Ghost Sources: When AI cites sources that don't exist; detection and prevention.
  • 📉 KB Drift: How to detect and correct knowledge base drift.
  • ✂️ Chunking Strategies: Optimal document splitting for better retrieval.
  • 📊 LLM Evaluation: Metrics and methods to evaluate AI response quality.
  • ⚖️ AI Regulation: Laws, regulations, and compliance frameworks governing AI systems.
  • 🛡️ AI Safety: Risks, safeguards, and best practices for safe AI deployment.

Need a reliable KB for your AI?

CoreProse builds sourced knowledge bases that minimize hallucinations.