When two lawyers in Oregon filed briefs packed with fake cases and fabricated quotations, the result was not a quirky “AI fail”—it was a $110,000 sanction, dismissal with prejudice, and a public ethics disaster. [1][5]
For ML and platform engineers, the Brigandi matter is a concrete signal: if your system can move unverified model output into court-facing documents, your organization is in the blast radius. [1][5]
💼 Engineering lens: Treat this case as an incident postmortem on an entire socio-technical stack—model, UX, validation, logging, and governance—not just a story about one careless prompt.
1. What Actually Happened in the Brigandi Case (and Why Engineers Should Care)
U.S. Magistrate Judge Mark D. Clarke sanctioned San Diego attorney Stephen Brigandi and Portland attorney Tim Murphy a combined $110,000 for filing AI-assisted briefs that included 15 non-existent cases and eight fabricated quotations. [1][6]
Key facts:
- Judge Clarke called it “a notorious outlier in both degree and volume” of AI misuse and faulted plaintiffs and counsel for not being “adequately forthcoming, candid or apologetic.” [1][6]
- The dispute involved the Valley View winery in Oregon: Joanne Couvrette sued her brothers for control, alleging elder abuse and wrongful enrichment and seeking $12 million. [1][5][6]
- Brigandi, not licensed in Oregon, worked with Murphy, who appeared procedurally; both were sanctioned because they signed filings that put AI-generated citations into the federal record. [1][3]
- The case was dismissed with prejudice; the briefs were “replete with citations from non-existent cases,” and the court noted evidence of a “cover-up” when false references were deleted and refiled without disclosure. [4][5][6]
⚠️ Key shift: This is now a concrete example of how unverified LLM outputs in a regulated workflow can create direct financial liability and reputational damage for anyone deploying such tools. [1][5]
2. Where AI Hallucinations Enter Legal Workflows
The technical failure is familiar to anyone working with large language models: when asked for supporting authority, the model confidently produced plausible-looking but fake citations and quotations. [1][9]
How hallucinations got into the briefs:
- The filings were described as “replete with citations from non-existent cases,” suggesting use of AI as an authority generator, not as a retrieval-first assistant. [5][8]
- Judge Clarke noted that an AI tool “once again led human minds astray,” reflecting a misaligned mental model: lawyers treated outputs as authoritative legal text, while the model only sampled likely tokens. [5][7]
💡 Architectural anti-pattern: Letting an LLM fabricate structured legal objects—case names, reporter citations, docket numbers—without deterministic validation is fundamentally unsafe in law and similar domains.
Common risky prompts:
- “Find cases that say X” without retrieval.
- “Fill in” missing citation details from memory.
- Trusting model summaries of cases it just invented.
Without retrieval-augmented generation (RAG) over authoritative case law, strict schema validation, and live lookups to legal databases, even strong models will confidently hallucinate rare or non-existent precedents, especially on niche issues. [9]
📊 Implication: Production legal tools must treat the LLM as a language layer over a verifiable database of law, never as a standalone source of truth for anything that might be filed in court. [5]
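A minimal sketch of this retrieval-first pattern, with a toy in-memory corpus standing in for a real legal database and `draft_support` standing in for the LLM call (all names here are illustrative, not a real API):

```python
# Toy retrieval-first flow: authorities come only from a vetted corpus;
# the model never supplies a citation the corpus did not return.
# `vetted_corpus`, `search`, and `draft_support` are hypothetical stand-ins.

vetted_corpus = {
    "doc_1": {"case_name": "Smith v. Jones", "cite": "999 F.3d 123",
              "text": "Holding on elder-abuse standing..."},
}

def search(query: str) -> list[str]:
    """Stand-in retrieval: return IDs of corpus documents matching the query."""
    return [doc_id for doc_id, doc in vetted_corpus.items()
            if query.lower() in doc["text"].lower()]

def draft_support(query: str) -> dict:
    """Draft supporting text grounded only in retrieved documents."""
    doc_ids = search(query)
    if not doc_ids:
        # Refuse rather than invent: no retrieved authority, no citation.
        return {"citations": [], "note": "No supporting authority found."}
    # A real system would have the LLM summarize *only* these documents.
    return {"citations": [vetted_corpus[d]["cite"] for d in doc_ids],
            "note": f"Summarize {len(doc_ids)} retrieved opinion(s)."}
```

The key property is the refusal branch: when retrieval returns nothing, the system surfaces "no authority found" instead of letting the model fill the gap from its training distribution.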
3. Designing Verification-First Architectures for Legal Citations
The Oregon sanctions flowed directly from non-existent cases being presented as real. Any serious legal AI system must treat “every cited authority exists and is correctly referenced” as a hard invariant. [4][9]
A robust division of labor:
- Retrieval-only for authorities. Cases, statutes, and regulations come only from a vetted corpus or commercial provider.
- LLM-only for narrative. The model summarizes and reasons over retrieved materials but never invents citations or alters reporter identifiers.
Implementation patterns:
- Parse every citation the model emits.
- Normalize it (e.g., Bluebook-style fields) into structured objects.
- Cross-check against a legal database API; unresolved citations are blocked or clearly flagged.
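The parse, normalize, and cross-check steps above can be sketched as follows. The regex and the `Citation` fields are deliberately simplified illustrations; a production system would use a dedicated citation parser (for example, the open-source eyecite library) and a real legal database API rather than an in-memory set:

```python
import re
from dataclasses import dataclass

# Hypothetical, minimal citation schema; a real system would carry full
# Bluebook fields and validate reporters against a vetted table.
@dataclass(frozen=True)
class Citation:
    case_name: str
    volume: int
    reporter: str
    page: int

# Naive pattern for "Name v. Name, 999 F.3d 123" style citations.
# Case-name boundaries are imprecise here; real parsers handle this properly.
CITE_RE = re.compile(
    r"(?P<name>[\w.' ]+ v\. [\w.' ]+), (?P<vol>\d+) (?P<rep>[A-Za-z0-9.]+) (?P<page>\d+)"
)

def parse_citations(text: str) -> list[Citation]:
    """Extract every citation-shaped span the model emitted."""
    return [
        Citation(m["name"].strip(), int(m["vol"]), m["rep"], int(m["page"]))
        for m in CITE_RE.finditer(text)
    ]

def verify(citations, authority_index):
    """Split citations into resolved and unresolved against a trusted index.

    `authority_index` stands in for a legal database API; here it is just a
    set of known (volume, reporter, page) tuples.
    """
    resolved, unresolved = [], []
    for c in citations:
        target = resolved if (c.volume, c.reporter, c.page) in authority_index else unresolved
        target.append(c)
    return resolved, unresolved
```

Anything landing in `unresolved` is exactly the class of fabricated authority that reached the court record in Brigandi, and it should be blocked or loudly flagged.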
💡 Schema-first output
Use structured outputs (JSON/XML) such as:
```json
{
  "argument_sections": [...],
  "citations": [
    {
      "id": "doc_123456",
      "case_name": "Smith v. Jones",
      "reporter": "F.3d",
      "volume": 999,
      "page": 123
    }
  ]
}
```
Validate doc_123456 against your authority index before rendering a formatted brief.
For Brigandi-style workloads, a pre-submission gate should hard-block export if even a single citation fails validation, forcing manual review before anything leaves the system. [1][5]
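One possible shape for that gate, assuming drafts follow the schema-first JSON structure shown earlier and that `authority_index` is a trusted lookup of known document IDs (both names are illustrative):

```python
def export_gate(draft: dict, authority_index: set) -> dict:
    """Hard-block export if any cited document ID fails to resolve.

    `draft` follows the schema-first shape above: a "citations" list of
    objects whose "id" must resolve against a trusted authority index.
    """
    unresolved = [c["id"] for c in draft.get("citations", [])
                  if c["id"] not in authority_index]
    if unresolved:
        return {"status": "blocked", "unresolved": unresolved,
                "action": "manual review required before export"}
    return {"status": "approved", "unresolved": []}
```

The design choice worth defending in review: the gate fails closed. A single unresolved ID blocks the whole export, rather than annotating it and trusting a busy user to notice.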
⚡ Containment, not perfection: These guardrails do not stop the model from hallucinating internally, but they ensure fabricated content cannot cross the system boundary into actual court filings.
4. Governance, Logging, and Accountability in High-Risk Domains
Judge Clarke criticized the plaintiffs and their counsel for lacking candor and highlighted an attempted cover-up once the bogus citations were exposed. [1][4]
He also noted circumstantial evidence that Couvrette herself may have generated some AI drafts, but held the attorneys responsible because they signed the filings. [5][6]
For engineering teams, this demands a trustworthy audit trail showing who did what, with which tool, and when.
Minimum logging for a legal AI platform:
- User identity and role.
- Model version and tool configuration.
- Prompt templates and raw prompts.
- Full prompt–completion pairs for any court-facing draft.
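The fields above map naturally onto an append-only JSON-lines log. A minimal sketch, with illustrative field names (a production system would also sign or hash entries to make the trail tamper-evident):

```python
import datetime
import json

def audit_record(user: str, role: str, model_version: str,
                 template_id: str, prompt: str, completion: str) -> str:
    """One append-only JSON-lines audit entry per court-facing generation.

    Every field in the checklist above is captured at generation time,
    not reconstructed later from memory or billing records.
    """
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "model_version": model_version,
        "prompt_template": template_id,
        "prompt": prompt,
        "completion": completion,
    }, sort_keys=True)
```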
Role-based controls and workflow constraints:
- Require human review and sign-off for any filing-ready document.
- Persistent UI disclaimers that outputs are drafts requiring independent verification.
- Restrict high-risk features (e.g., authority generation) to trained users.
📊 Risk monitoring: Build alerts for:
- Unusually high numbers of new authorities in a single matter.
- Repeated citation-validation failures.
- Users bypassing suggested review paths.
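The three alert conditions above can be expressed as simple rules over a matter's event stream. Thresholds here are placeholders to be tuned per practice area; the event schema is an assumption for illustration:

```python
from collections import Counter

# Illustrative thresholds; tune per practice area and matter size.
MAX_NEW_AUTHORITIES = 20
MAX_VALIDATION_FAILURES = 3

def risk_alerts(events: list[dict]) -> list[str]:
    """Scan one matter's event stream for the warning signs listed above.

    Each event is a dict like {"type": "new_authority"} /
    {"type": "validation_failure"} / {"type": "review_bypassed"}.
    """
    counts = Counter(e["type"] for e in events)
    alerts = []
    if counts["new_authority"] > MAX_NEW_AUTHORITIES:
        alerts.append("unusual volume of new authorities in one matter")
    if counts["validation_failure"] > MAX_VALIDATION_FAILURES:
        alerts.append("repeated citation-validation failures")
    if counts["review_bypassed"] > 0:
        alerts.append("user bypassed suggested review path")
    return alerts
```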
When AI errors do occur, as in the Oregon vineyard lawsuit, these governance and observability practices let an organization demonstrate process discipline rather than negligence. [5][10]
5. Implementation Blueprint: Safer Legal AI Systems After Brigandi
In Brigandi, hallucinations produced case-ending sanctions and a six-figure penalty that dwarfed prior Oregon appellate sanctions, where the largest had been $10,000. [1][5][6]
Legaltech engineers should assume similar exposure wherever unverified AI text can reach a court, regulator, or opposing counsel, and ensure filing-ready documents emerge only after checks and human review.
A pragmatic stack:
- Vector database over vetted opinions (e.g., Elasticsearch, Qdrant, pgvector) powering RAG for case discovery.
- Authority index keyed by citation and document ID for deterministic lookup.
- LLM layer limited to summarization, comparison, and reasoning over retrieved documents.
- Validation service that inspects drafts, resolves every citation, and blocks or annotates unresolved references.
To help stakeholders visualize this, it is useful to model the end-to-end workflow from first draft to filing, showing exactly where retrieval, validation, and human review prevent hallucinated citations from escaping into the record.
```mermaid
flowchart LR
    %% Verification-First Legal AI Workflow to Prevent Hallucinated Citations
    A[Lawyer drafts] --> B[Query AI assistant]
    B --> C[Retrieve corpus]
    C --> D[LLM drafts narrative]
    D --> E[Validate citations]
    E --> F{Unresolved cites?}
    F -- Yes --> G[Manual review]
    F -- No --> H[Court filing]
    style C fill:#3b82f6,color:#ffffff
    style E fill:#22c55e,color:#ffffff
    style F fill:#f59e0b,color:#000000
    style G fill:#ef4444,color:#ffffff
    style H fill:#22c55e,color:#ffffff
```
💡 Evaluation under pressure
Before deployment, run offline tests where you:
- Prompt the model for obscure or adversarial citations.
- Force edge cases like “find a Ninth Circuit case that says X” when none exists.
- Push outputs through your verification pipeline and log residual hallucination rates.
Use results to set conservative thresholds—for example, no unverified citations in auto-export mode; drafts with unresolved items must be watermarked and limited to internal use.
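The offline evaluation and the conservative threshold above can be wired together in a few lines. The metric and gate below are a sketch under an assumed schema, where the verification pipeline reports a count of unverified citations per adversarial prompt:

```python
def residual_hallucination_rate(test_cases: list[dict]) -> float:
    """Fraction of adversarial prompts whose output still contains an
    unverified citation *after* the verification pipeline ran.

    Each test case records {"unverified_citations": int} as produced by
    the validation service on one adversarial prompt.
    """
    if not test_cases:
        return 0.0
    failures = sum(1 for t in test_cases if t["unverified_citations"] > 0)
    return failures / len(test_cases)

def allow_auto_export(rate: float, threshold: float = 0.0) -> bool:
    """Conservative gate: enable auto-export only at or below the threshold
    (zero tolerance by default, matching the policy described above)."""
    return rate <= threshold
```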
To avoid Brigandi-style failures, roll out capabilities gradually:
- Start with internal research memos and email summaries.
- Move to low-stakes filings (routine discovery motions, status reports).
- Only then enable AI-assisted drafting for dispositive motions or appellate briefs. [4][5]
⚠️ Documentation is part of the product
Maintain clear, versioned documentation of:
- Model choices and training constraints.
- Guardrails and validation logic.
- Operational limits and recommended use cases.
If a judge or regulator later scrutinizes your tooling, you want to show the system was intentionally engineered to minimize hallucination-driven harm, not casually bolted onto billable workflows.
Conclusion: Designing for Hallucinations, Not Around Them
The Brigandi sanctions turn AI hallucinations from a modeling quirk into a quantified operational risk in legal practice: one incident, $110,000 in penalties, and a case dismissed with prejudice. [1][5]
The root failure was architectural: the model was treated as an authority instead of as a language layer on top of verifiable legal data.
A safer, verification-first design includes:
- Grounded retrieval from authoritative corpora.
- Strict citation validation and schema-constrained outputs.
- Mandatory human review before filing.
- Governance, logging, and monitoring that establish accountability.
⚡ Action step: If you design or operate legal AI tools, use this case as a checklist. Audit every path by which unverified authorities might escape your system, add retrieval and validation layers, and stress-test workflows with adversarial prompts long before they touch live matters or real clients.
Sources & References (5)
- [1] "Federal judge hands down $110K penalty against 2 lawyers for AI errors in court documents," by Amanda Robert, April 17, 2026.
- [2] "Use of AI cost lawyers $110,000 in Oregon lawsuit."
- [3] "AI hallucinations cost lawyers $110,000 in Oregon vineyard lawsuit."
- [4] "A federal judge in Oregon squashed a vineyard lawsuit after determining that two lawyers' AI-assisted court filings were replete with citations from non-existent cases — and one lawyer had attempted a 'cover-up' when the bogus material was uncovered."
- [5] "AI hallucinations cost lawyers $110,000 in Oregon vineyard lawsuit."