[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"kb-article-how-lawyers-got-sanctioned-for-ai-hallucinations-and-how-to-engineer-safer-legal-llm-systems-en":3,"ArticleBody_cVXoENI4Ex4BW5YnqawFJynjRn5p1PVGktPRN0ieqk":105},{"article":4,"relatedArticles":75,"locale":65},{"id":5,"title":6,"slug":7,"content":8,"htmlContent":9,"excerpt":10,"category":11,"tags":12,"metaDescription":10,"wordCount":13,"readingTime":14,"publishedAt":15,"sources":16,"sourceCoverage":58,"transparency":59,"seo":64,"language":65,"featuredImage":66,"featuredImageCredit":67,"isFreeGeneration":71,"niche":72,"geoTakeaways":58,"geoFaq":58,"entities":58},"69e1e509292a31548fe951c7","How Lawyers Got Sanctioned for AI Hallucinations—and How to Engineer Safer Legal LLM Systems","how-lawyers-got-sanctioned-for-ai-hallucinations-and-how-to-engineer-safer-legal-llm-systems","When a New York lawyer was fined for filing a brief full of non‑existent cases generated by ChatGPT, it showed a deeper issue: unconstrained generative models are being dropped into workflows that assume every citation is real, citable law.[6]\n\nFor ML engineers building legal tools, that is a systems‑engineering and governance failure, not just a UX mistake.\n\nThis guide treats “lawyers sanctioned for AI‑fabricated court citations” as an engineering failure mode and explains how to design retrieval, verification, and policy layers so partners can trust what they sign.\n\n---\n\n## 1. From Viral Sanctions to a Systemic Risk Pattern\n\nIn *Mata v. 
Avianca* (2023), a lawyer was sanctioned $5,000 after submitting a ChatGPT‑drafted brief with six fabricated cases—the classic example of LLM hallucinations in litigation.[6] The core error: treating ChatGPT as an authority generator without verification.

📊 **Pattern, not anecdote**

- Courts have imposed over $31,000 in sanctions for AI‑tainted filings, and 300+ judges now require explicit AI citation verification in standing orders.[6]
- Courts frame LLM misuse as a governance lapse, not experimentation.

Outside litigation:

- Deloitte Australia partially refunded an AU$440,000 engagement after a government report was found to contain fabricated citations and a fake quote from a federal court judgment, linked to generative‑AI drafting.[11][12]
- Officials had to reissue the report after removing fictitious references and repairing the reference list, despite prior human review.[11][12]

💼 **Anecdote from the trenches**

- At a 30‑lawyer boutique, an AI‑assisted memo cited two real cases and one non‑existent one.
- The partner re‑researched the memo, banned raw model citations, and demanded verifiable workflows.

Empirical and policy context:

- Stanford researchers found GPT‑4 hallucinated legal facts 58% of the time on verifiable federal‑case questions, so “ask the model for cases” is predictably unsafe at scale.[6]
- The White House’s emerging AI framework tends toward federal preemption on AI *development* but shifts liability toward deployment and use, pushing firms to adopt internal controls.[10]

⚠️ **Section takeaway:** sanctions, refunds, empirical results, and policy trends all trace back to one cause—unconstrained text generators embedded in authority‑critical workflows without engineered verification.[6][11][12]

---

## 2. Why LLMs Hallucinate—And Why Legal Citations Are a Perfect Failure Mode

LLMs are generative sequence models, not databases; they extend text based on learned patterns.[1] When asked “give me three Supreme Court cases holding X,” the model optimizes for plausible‑looking output, not existence or correctness.[4]

📊 **Types of hallucinations in law**[2]

- **Factual:** wrong statements about the world (non‑existent cases, incorrect holdings).
- **Intrinsic:** contradicting provided context (e.g., misreading uploaded opinions).
- **Extrinsic:** adding unverifiable claims beyond the given context.

Fabricated citations are factual hallucinations when the case does not exist, and intrinsic ones when the LLM contradicts an uploaded database export.[2]

Key drivers of hallucinations:[3][4]

- No built‑in fact‑checking or retrieval.
- Gaps and biases in training data.
- Overfitting to stylistic patterns (legalese, citation formats).

Why law is especially vulnerable:

- Case names and reporters follow highly regular formats, so models can generate citations that *look* perfect but refer to nothing or misstate holdings.[3]

💡 **No built‑in provenance**

- LLMs do not emit verified sources by default; their text is unconstrained extrapolation.[4]
- Legal practice demands that every proposition of law be traceable to authority; LLM behavior is misaligned with that norm.

Security angle:

- Prompt injection and context poisoning can push models to include bogus or malicious “authorities,” especially when users can influence retrieval context.[5]
- In education and law, hallucinations become security and compliance risks, not just accuracy issues, akin to mishandled FERPA/COPPA‑protected data.[8]

⚠️ **Section takeaway:** hallucinations stem from how LLMs generate text, and legal citation workflows are uniquely exposed because they combine pattern‑heavy text with strict provenance requirements.[1][2][3][4][5]

---

## 3. Designing Legal-Grade LLM Pipelines: Retrieval, Grounding, and Verification

No single guardrail prevents hallucinations. High‑stakes frameworks recommend combining retrieval‑augmented generation (RAG), structured prompting, and post‑hoc verification.[1]

### 3.1 Retrieval-first architecture

Shift from “invent cases” to “reason over retrieved authorities”:

1. **Query normalization:** turn the lawyer’s question into a search query.
2. **Retrieval:** search official reporters or vetted internal databases (hybrid vector + keyword).
3. **Context packaging:** chunk, rank, and pass only relevant excerpts to the LLM.
4. **Grounded answer:** strictly instruct the model to use *only* supplied documents.

Evaluation work stresses:

- Measure retrieval precision/recall and chunking quality; weak retrieval silently degrades citation accuracy even with a strong model.[3][4]

💡 **Prompting pattern**

> “You are a legal research assistant. Use ONLY the provided authorities.
> If a proposition is not supported, say ‘No supporting authority in the provided materials.’
> For every cited holding, quote and pin‑cite the exact passage.”

### 3.2 Claim-level grounding verification

Grounding verification extracts atomic factual claims and checks each against the corpus.[2] For legal use:

- Parse output into claims (e.g., “Smith v. Jones held X in 2019 in the Second Circuit”).
- For each claim, search for a matching case, reporter, and proposition.
- Mark claims as grounded or unverified; attach snippets as evidence.

Add symbolic checks:

- Regex validation of reporter formats and docket numbers.
- Model‑based consistency scoring, as shown in open‑source hallucination detection wrappers around LLM calls.[2][4]

📊 **Cost and energy**

- Data centers already consume substantial electricity, with AI demand projected to rise sharply by 2030.[7]
- Favor efficient pipelines—targeted retrieval plus selective verification—over brute‑force re‑queries; this reduces latency, cost, and energy while managing risk.[7]

### 3.3 Auditability and logging

OWASP’s LLM checklist emphasizes logging prompts, retrieved sources, and verification decisions to answer: “Are outputs factual and worth applying?”[9] For legal systems:

- Log retrieval IDs, versions, and timestamps.
- Store verification reports listing grounded vs. unverified claims.
- Link final filed documents back to any AI‑assisted drafts.

⚡ **Section takeaway:** design systems where the model never free‑forms law; it reasons over retrieved authorities, and a verification layer proves which claims are grounded.[1][2][3][4][7][9]

---

## 4. Testing, Red Teaming, and Operational Guardrails for Law Firms

Strong architecture still fails without rigorous testing.

Red‑teaming work shows that an agent with 85% step‑level accuracy has only a ~20% chance of correctly finishing a 10‑step task—similar to multi‑step legal drafting.[6] Small hallucination risks compound.

💼 **Offline evaluation**

Tools like Deepchecks stress:[3]

- Benchmark questions with known correct authorities.
- Use metrics such as F1 on citation correctness, plus human legal review.
- Track “grounding failures” separately from style/format issues.

Metrics‑first frameworks recommend:

- Maintain a hallucination index across model versions and prompts.
- Detect regressions when a new prompt or model subtly increases fabricated citations.[4]

⚠️ **Adversarial scenarios**

Security guidance recommends simulating:[5][9]

- Prompt injection inserting fake authorities.
- Context poisoning via user‑uploaded “case law” PDFs.
- Attempts to bypass verification instructions.

The Deloitte incident illustrates why red teams must target references and footnotes: fabricated papers and misattributed judgments survived initial review but failed deeper citation checks.[11][12]

Borrowing from K–12 AI readiness, firms can require multi‑step approval for new LLM tools: technical vetting, legal/compliance review, budget checks, and data‑privacy agreements before those tools touch client matters.[8]

💡 **Section takeaway:** treat legal LLMs as critical infrastructure—red‑team full workflows, monitor hallucinations continuously, and gate production access behind structured evaluations.[3][4][5][6][8][9][11][12]

---

## 5. Policy, Governance, and Human-in-the-Loop Responsibilities

The White House framework signals that regulation will emphasize deployment accountability, making firm‑level LLM policies central to managing liability.[10]

OWASP frames LLM governance as a shared duty across executives, cybersecurity, privacy, compliance, and legal leaders.[9] Large firms should map this directly onto AI‑assisted research tooling.

📊 **What governance must cover**

Security experts argue “trustworthy AI” requires demonstrable processes, not just vendor assurances.[5][9] Policies for legal LLMs should define:

- Approved data sources (official reporters, vetted internal repositories).
- Mandatory verification steps for any AI‑generated citation.
- Logging and retention for audit trails.
- Incident‑response playbooks for when hallucinations reach clients or courts.

Education technology leaders demand data‑privacy agreements and staged approvals for classroom AI tools; bar regulators and courts are beginning to expect similar rigor from lawyers using generative AI.[8]

The Deloitte refund suggests future consulting and legal contracts will add AI‑usage clauses: disclosure duties, verification standards, and fee clawbacks when hallucinations taint work product.[11]

⚠️ **Humans stay on the hook**

Multi‑layer hallucination frameworks stress that experts must remain in the loop for high‑stakes domains.[1] For lawyers, that implies:

> Every AI‑proposed citation is independently validated against primary sources before it appears in a filing.

⚡ **Section takeaway:** engineering controls succeed only when backed by enforceable policy, contractual clarity, and explicit human responsibilities.[1][5][8][9][10][11]

---

## Conclusion: Turn LLMs from Liability to Legal Infrastructure

Sanctioned lawyers and embarrassed consultants share the same root cause: unconstrained generative models deployed in workflows that demand verifiable authority.[6][11][12]

The engineering response:

- Build **retrieval‑first architectures** grounded in authoritative corpora.[1][4]
- Add **claim‑level grounding verification** to every citation‑bearing output.[2]
- Run **metrics‑driven evaluation and adversarial red‑teaming** before production.[3][4][6]
- Wrap everything in **OWASP‑style governance and human‑in‑the‑loop review**.[1][8][9][10]

If you design LLM tools for lawyers, start by defining your citation‑verification guarantees, then work backward: choose your retrieval corpus, build claim‑checking pipelines, and enforce human review standards that align with courts, regulators, and clients.
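The retrieval-first flow described in section 3.1 can be sketched end to end. This is a minimal illustration under stated assumptions, not a production pipeline: `Authority`, `CORPUS`, the case names, and the toy keyword matcher in `search_reporters` are all hypothetical stand-ins for a real hybrid vector + keyword search over official reporters.

```python
import re
from dataclasses import dataclass

# Minimal sketch of the section 3.1 flow. Everything here is illustrative:
# the corpus and keyword matcher stand in for real hybrid retrieval over
# a vetted legal database.

@dataclass
class Authority:
    citation: str  # e.g. "Smith v. Jones, 123 F.3d 456 (2d Cir. 2019)" (hypothetical)
    excerpt: str   # the passage that will be passed to the model as context

CORPUS = [
    Authority("Smith v. Jones, 123 F.3d 456 (2d Cir. 2019)",
              "An airline may be held liable for negligent handling of baggage."),
]

def normalize_query(text: str) -> set[str]:
    """Step 1: reduce a lawyer's question (or a document) to search terms."""
    return {w.lower() for w in re.findall(r"\w+", text) if len(w) > 3}

def search_reporters(terms: set[str], corpus: list[Authority]) -> list[Authority]:
    """Step 2: retrieve only authorities that actually exist in the corpus."""
    return [a for a in corpus
            if terms & normalize_query(a.citation + " " + a.excerpt)]

def build_grounded_prompt(question: str, corpus: list[Authority]) -> str:
    """Steps 3-4: package retrieved excerpts and restrict the model to them."""
    hits = search_reporters(normalize_query(question), corpus)
    context = "\n".join(f"[{i}] {a.citation}: {a.excerpt}"
                        for i, a in enumerate(hits, 1))
    return (
        "You are a legal research assistant. Use ONLY the provided authorities.\n"
        "If a proposition is not supported, say 'No supporting authority in the "
        "provided materials.'\n\n"
        f"Authorities:\n{context or '(none retrieved)'}\n\n"
        f"Question: {question}"
    )
```

The key design property: the model only ever sees text that was retrieved from the corpus, so it cannot cite an authority the pipeline has never fetched. A production version would also log the retrieval IDs alongside the prompt, per section 3.3.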
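Section 3.2's symbolic checks combine naturally with a grounding lookup. In this sketch the regex covers only a few federal reporter formats for illustration, and `KNOWN_CITES` is a hypothetical stand-in for a query against an authoritative citation database.

```python
import re

# Sketch of section 3.2's symbolic + grounding checks for one output sentence.
# The reporter regex and KNOWN_CITES set are illustrative assumptions.

REPORTER_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
)

KNOWN_CITES = {"123 F.3d 456"}  # would be a real citation database in production

def check_citation(sentence: str) -> dict:
    """Classify a sentence's citation as well-formed and/or grounded."""
    match = REPORTER_RE.search(sentence)
    if match is None:
        # No recognizable reporter string at all.
        return {"well_formed": False, "grounded": False, "cite": None}
    cite = match.group(0)
    # A fabricated cite can be perfectly well-formed; grounding is the real test.
    return {"well_formed": True, "grounded": cite in KNOWN_CITES, "cite": cite}
```

Anything flagged `well_formed` but not `grounded` is exactly the failure mode behind the sanctions cases: a citation that passes format checks yet refers to nothing, so it belongs in the verification report as unverified.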
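The compounding-risk figure and the hallucination index from section 4 both reduce to a few lines of arithmetic. A minimal sketch, where the function names and the 0.01 regression tolerance are assumptions rather than values from any cited framework:

```python
# Back-of-envelope math behind two section 4 guardrails.

def end_to_end_success(step_accuracy: float, n_steps: int) -> float:
    """With roughly independent steps, success compounds multiplicatively."""
    return step_accuracy ** n_steps

def hallucination_index(flags: list[bool]) -> float:
    """Fraction of benchmark answers flagged as containing a fabricated cite."""
    return sum(flags) / len(flags)

def is_regression(baseline: float, candidate: float, tolerance: float = 0.01) -> bool:
    """Gate a new prompt/model version whose index worsens beyond tolerance."""
    return candidate > baseline + tolerance

# An agent that is 85% accurate per step finishes a 10-step task ~20% of the time.
print(round(end_to_end_success(0.85, 10), 3))  # ≈ 0.197
```

Tracking the index per model and prompt version, and gating releases on `is_regression`, is what turns "monitor hallucinations continuously" into an enforceable CI check.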
ass...","hallucinations",[],1495,7,"2026-04-17T08:30:56.265Z",[17,22,26,30,34,38,42,46,50,54],{"title":18,"url":19,"summary":20,"type":21},"Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications: A Tutorial","https:\u002F\u002Fwww.mdpi.com\u002F2073-431X\u002F14\u002F8\u002F332","Multi-Layered Framework for LLM Hallucination Mitigation in High-Stakes Applications: A Tutorial\n\n by \n\n Sachin Hiriyanna\n\nSachin Hiriyanna\n\n[SciProfiles](https:\u002F\u002Fsciprofiles.com\u002Fprofile\u002F4613284?utm_s...","kb",{"title":23,"url":24,"summary":25,"type":21},"How to Create Hallucination Detection","https:\u002F\u002Foneuptime.com\u002Fblog\u002Fpost\u002F2026-01-30-hallucination-detection\u002Fview","Large Language Models are powerful, but they have a critical flaw: they can confidently generate information that sounds plausible but is completely wrong. These \"hallucinations\" can erode user trust,...",{"title":27,"url":28,"summary":29,"type":21},"Reducing Hallucinations and Evaluating LLMs for Production - Divyansh Chaurasia, Deepchecks","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=unnqhKmMo68","Reducing Hallucinations and Evaluating LLMs for Production - Divyansh Chaurasia, Deepchecks\n\nThis talk focuses on the challenges associated with evaluating LLMs and hallucinations in the LLM outputs. 
...",{"title":31,"url":32,"summary":33,"type":21},"Mitigating LLM Hallucinations with a Metrics-First Evaluation Framework","https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=u1pNrsR1txA","Mitigating LLM Hallucinations with a Metrics-First Evaluation Framework\n\nJoin in on this workshop where we will showcase some powerful metrics to evaluate the quality of the inputs and outputs with a ...",{"title":35,"url":36,"summary":37,"type":21},"LLM Security: Shield Your AI from Injection Attacks, Data Leaks, and Model Theft","https:\u002F\u002Fkonghq.com\u002Fblog\u002Fenterprise\u002Fllm-security-playbook-for-injection-attacks-data-leaks-model-theft","May 19, 2025\n\nKong\n\nThis comprehensive guide will arm you with the knowledge and strategies needed to protect your LLMs from emerging threats. We’ll explore the OWASP LLM Top 10 vulnerabilities in det...",{"title":39,"url":40,"summary":41,"type":21},"Red Teaming LLM Applications with DeepTeam: A Production Implementation Guide | Vadim's blog","https:\u002F\u002Fvadim.blog\u002Fred-teaming-llm-applications-deepteam-guide","Red Teaming LLM Applications with DeepTeam: A Production Implementation Guide | Vadim's blog\n\n[Skip to main content](https:\u002F\u002Fvadim.blog\u002Fred-teaming-llm-applications-deepteam-guide#__docusaurus_skipToC...",{"title":43,"url":44,"summary":45,"type":21},"AI breakthrough cuts energy use by 100x while boosting accuracy","https:\u002F\u002Fwww.sciencedaily.com\u002Freleases\u002F2026\u002F04\u002F260405003952.htm","Artificial intelligence is consuming enormous amounts of electricity in the United States. 
According to the International Energy Agency, AI systems and data centers used about 415 terawatt hours of po...",{"title":47,"url":48,"summary":49,"type":21},"TCEA 2026: Practical Guidance for AI Preparedness in K–12 Education","https:\u002F\u002Fedtechmagazine.com\u002Fk12\u002Farticle\u002F2026\u002F02\u002Ftcea-2026-practical-guidance-ai-preparedness-k-12-education","Practical use of artificial intelligence in K–12 environments was a major area of focus at TCEA 2026 in San Antonio.\n\nData Privacy and Security Can Never Be Assumed\n\nJaDorian Richardson, Instructional...",{"title":51,"url":52,"summary":53,"type":21},"OWASP's LLM AI Security & Governance Checklist: 13 action items for your team","https:\u002F\u002Fwww.reversinglabs.com\u002Fblog\u002Fowasp-llm-ai-security-governance-checklist-13-action-items-for-your-team","John P. Mello Jr., Freelance technology writer.\n\nArtificial intelligence is developing at a dizzying pace. And if it's dizzying for people in the field, it's even more so for those outside it, especia...",{"title":55,"url":56,"summary":57,"type":21},"White House AI Framework Proposes Industry-Friendly Legislation | Lawfare","https:\u002F\u002Fwww.lawfaremedia.org\u002Farticle\u002Fwhite-house-ai-framework-proposes-industry-friendly-legislation","On March 20, the White House released a “comprehensive” national framework for artificial intelligence (AI), three months after calling for legislative recommendations on the technology in an executiv...",null,{"generationDuration":60,"kbQueriesCount":61,"confidenceScore":62,"sourcesCount":63},337118,12,100,10,{"metaTitle":6,"metaDescription":10},"en","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1620309163422-5f1c07fda0c3?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxsYXd5ZXJzJTIwZ290JTIwc2FuY3Rpb25lZCUyMGhhbGx1Y2luYXRpb25zfGVufDF8MHx8fDE3NzY0MTQ2NTZ8MA&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60",{"photographerName":68,"photographerUrl":69,"unsplashUrl":70},"Annie 
Spratt","https:\u002F\u002Funsplash.com\u002F@anniespratt?utm_source=coreprose&utm_medium=referral","https:\u002F\u002Funsplash.com\u002Fphotos\u002Fwhite-and-blue-paper-on-black-background-QvC3_skiAJU?utm_source=coreprose&utm_medium=referral",false,{"key":73,"name":74,"nameEn":74},"ai-engineering","AI Engineering & LLM Ops",[76,84,91,98],{"id":77,"title":78,"slug":79,"excerpt":80,"category":81,"featuredImage":82,"publishedAt":83},"69e20d60875ee5b165b83e6d","AI in the Legal Department: How General Counsel Can Cut Litigation and Compliance Risk Without Halting Innovation","ai-in-the-legal-department-how-general-counsel-can-cut-litigation-and-compliance-risk-without-haltin","Generative AI is already writing emails, summarizing data rooms, and drafting contract language—often without legal’s knowledge. Courts are sanctioning lawyers for AI‑fabricated case law and treating...","safety","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1768839719921-6a554fb3e847?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxsZWdhbCUyMGRlcGFydG1lbnQlMjBnZW5lcmFsJTIwY291bnNlbHxlbnwxfDB8fHwxNzc2NDIyNzQ0fDA&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-17T10:45:44.116Z",{"id":85,"title":86,"slug":87,"excerpt":88,"category":81,"featuredImage":89,"publishedAt":90},"69e1f18ce5fef93dd5f0f534","How General Counsel Can Tame AI Litigation and Compliance Risk","how-general-counsel-can-tame-ai-litigation-and-compliance-risk","In‑house legal teams are watching AI experiments turn into core infrastructure before guardrails are settled. 
Vendors sell “hallucination‑free” copilots while courts sanction lawyers for fake citation...","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1772096168169-1b69984d2cfc?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxnZW5lcmFsJTIwY291bnNlbCUyMHRhbWUlMjBsaXRpZ2F0aW9ufGVufDF8MHx8fDE3NzY0MTU0ODV8MA&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-17T08:44:44.891Z",{"id":92,"title":93,"slug":94,"excerpt":95,"category":81,"featuredImage":96,"publishedAt":97},"69e1e205292a31548fe95028","How General Counsel Can Cut AI Litigation and Compliance Risk Without Blocking Innovation","how-general-counsel-can-cut-ai-litigation-and-compliance-risk-without-blocking-innovation","AI is spreading across CRMs, HR tools, marketing platforms, and vendor products faster than legal teams can track, while regulators demand structured oversight and documentation.[9][10]  \n\nFor general...","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1630265927428-a62b061a5270?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxnZW5lcmFsJTIwY291bnNlbCUyMGN1dCUyMGxpdGlnYXRpb258ZW58MXwwfHx8MTc3NjQxMTU0Mnww&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-17T07:39:01.709Z",{"id":99,"title":100,"slug":101,"excerpt":102,"category":81,"featuredImage":103,"publishedAt":104},"69e1c602e466c0c9ae2322cc","AI Governance for General Counsel: How to Cut Litigation and Compliance Risk Without Stopping Innovation","ai-governance-for-general-counsel-how-to-cut-litigation-and-compliance-risk-without-stopping-innovat","General counsel now must approve AI systems that affect millions of customers and vast data stores, while regulators, courts, and attackers already treat those systems as critical 
infrastructure.[2][5...","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1614610741234-6ad255244a3b?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxnb3Zlcm5hbmNlJTIwZ2VuZXJhbCUyMGNvdW5zZWwlMjBjdXR8ZW58MXwwfHx8MTc3NjQwNDM3M3ww&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-17T05:39:32.993Z",["Island",106],{"key":107,"params":108,"result":110},"ArticleBody_cVXoENI4Ex4BW5YnqawFJynjRn5p1PVGktPRN0ieqk",{"props":109},"{\"articleId\":\"69e1e509292a31548fe951c7\",\"linkColor\":\"red\"}",{"head":111},{}]