[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"kb-article-ai-hallucination-in-military-targeting-risks-ethics-and-a-safe-by-design-blueprint-en":3,"ArticleBody_qJ9dZjqepNJ7z46Jzc1JOYVu5Yx4VnJLpRWuTWMTzc":91},{"article":4,"relatedArticles":60,"locale":50},{"id":5,"title":6,"slug":7,"content":8,"htmlContent":9,"excerpt":10,"category":11,"tags":12,"metaDescription":10,"wordCount":13,"readingTime":14,"publishedAt":15,"sources":16,"sourceCoverage":42,"transparency":43,"seo":47,"language":50,"featuredImage":51,"featuredImageCredit":52,"isFreeGeneration":56,"trendSlug":42,"niche":57,"geoTakeaways":42,"geoFaq":42,"entities":42},"69b2f35fcd7f21484341204e","AI Hallucination in Military Targeting: Risks, Ethics, and a Safe-by-Design Blueprint","ai-hallucination-in-military-targeting-risks-ethics-and-a-safe-by-design-blueprint","## Introduction\n\nWhen an AI model hallucinates in a customer chatbot, the damage is usually limited to reputation, trust, and compliance. In a military targeting system, the same behavior can misidentify civilians, justify unlawful strikes, or trigger escalation.\n\nDemocratic states already use AI for intelligence, sensor processing, and decision support. These systems sit close to lethal decision chains while their core failure mode—plausible but false output—remains weakly controlled. Hallucinations are a structural risk, not a cosmetic bug.\n\nThe issue is no longer *whether* militaries will use AI, but *how* they can do so without undermining international humanitarian law, civil liberties, and domestic legitimacy. That demands architectures, doctrines, and governance that assume AI will sometimes be confidently wrong.\n\nThis article offers a practical blueprint: where hallucinations arise, how they interact with targeting workflows, what ethical and legal pitfalls they create, and how democratic states can build “safe-by-design” systems that augment, rather than replace, human judgment over the use of force.\n\n---\n\n## 1. 
Strategic Context: Why Hallucinations Matter in Military Targeting\n\nAdvanced AI is already embedded in security and defense:\n\n- France’s domestic intelligence service renewed its contract with Palantir to process large volumes of heterogeneous data, while building a sovereign platform (OTDH) to regain control over sensitive capabilities and data flows.[5]  \n- US defense agencies adopt powerful models from private vendors, even as leading actors such as Anthropic refuse to support fully autonomous lethal weapons or mass domestic surveillance, arguing current systems are too unreliable for deadly force without human supervision.[1]\n\nIn parallel:\n\n- Cyber threat intelligence teams face huge data volumes—thousands of new malware samples daily and zero-days weaponized in under 24 hours—driving reliance on AI for triage and interpretation.[2]  \n- This mirrors military sensor fusion and targeting: massive volume, time pressure, and uncertainty.\n\nEnterprise deployments show what happens when hallucinations enter workflows:\n\n- Confident but false answers cause compliance violations, reputational damage, and operational disruption.[6]  \n- In kinetic environments, the same behavior can fabricate hostile activity, misidentify combatants, or provide spurious corroboration for a strike.\n\n⚠️ **Strategic implication:** For democratic governments, delegating lethal authority to systems that can fabricate facts conflicts with accountability, discrimination, and proportionality.[1][5]\n\nHallucinations are thus a strategic and political fault line. Any AI near targeting decisions must be treated as a high-risk component whose failures can reverberate through diplomacy, public trust, and long-term stability.\n\n---\n\n## 2. 
Understanding AI Hallucinations in a Targeting Context\n\nLanguage models are not trained to know when they are ignorant:\n\n- They are rewarded for producing plausible continuations of text, not for admitting “I don’t know.”[3]  \n- Hallucinations emerge from optimizing for fluency and apparent correctness, not from a simple glitch.\n\nThis creates “high-performing bluffers”:\n\n- Models sound right under pressure even when uncertainty is high.  \n- In customer support, this yields polished nonsense; in intelligence analysis, it can yield confident but unfounded interpretations of reconnaissance, intercepts, or pattern-of-life data.[3]\n\nEnterprise evidence shows:\n\n- Hallucinated content is usually delivered with stylistic confidence, making it persuasive enough to induce legal or compliance errors when users relax their skepticism.[6]  \n- In a targeting cell under time pressure, similar confidence can short-circuit doubt.\n\n💡 **Key insight:** In high-stakes environments, *overconfident wrongness* is more dangerous than visible uncertainty.[3][6]\n\nRisk is amplified in automated pipelines:\n\n- Cyber threat intelligence platforms chain collection, enrichment, correlation, and dissemination; a false early inference can be enriched and recirculated as “fact.”[2]  \n- Military architectures that fuse ISR, SIGINT, and open-source intelligence face the same cascading error risk.\n\nMain failure modes in targeting contexts include:\n\n- Fabricating hostile activity in noisy or ambiguous sensor data  \n- Overconfidently classifying dual-use or civilian infrastructure as legitimate targets  \n- Hallucinating links between communications and hostile networks, creating illusory “patterns” of threat[2][3]\n\nThese errors can be subtle, plausible, and hard to challenge in real time. 
Seeing hallucinations as socio-technical—shaped by training incentives and evaluation culture—shows why technical fixes alone are insufficient without changes in design, deployment, and supervision.\n\n---\n\n## 3. Ethical, Legal, and Democratic Risks of Hallucinating Targeting Systems\n\nAnthropic’s refusal to support fully autonomous lethal weapons is grounded in:\n\n- The view that current AI is too unreliable for kill decisions without ultimate human oversight  \n- The belief that such use is incompatible with democratic values and civilian protection[1]\n\nThe same firm rejects enabling mass domestic surveillance, warning it would conflict with democratic norms.[1] Hallucinating surveillance systems could:\n\n- Wrongly flag citizens as extremists or foreign agents  \n- Entrench unjust watchlists and disproportionate policing at scale\n\nBusiness environments already show hallucination-driven regulatory breaches:\n\n- Inaccurate personal data conflicts with accuracy requirements  \n- Incorrect legal guidance leads to non-compliant actions[6]\n\nIn a military theater, similar misrepresentations could cause:\n\n- Wrong classification of individuals as combatants  \n- Faulty attribution of attacks to groups or states  \n- Misjudged proportionality based on fabricated or distorted evidence[6]\n\n📊 **Compliance parallel:** What is a GDPR violation in commerce can become a war crime when misclassification results in unlawful targeting, not misaddressed marketing.[6]\n\nReliance on foreign AI platforms adds governance tensions:\n\n- The French DGSI’s continued use of Palantir, despite concerns about US Cloud Act exposure, shows the fragility of sovereignty when critical security data flows through foreign providers.[5]  \n- In targeting, such dependencies can complicate accountability, evidence chains, and protection of classified information.\n\nCombined—model unreliability, opaque transnational data flows, and lethal stakes—this creates systemic risk:\n\n- A 
hallucinated threat, amplified by a black-box supply chain, could trigger wrongful strikes, diplomatic crises, and long-term erosion of public trust in armed forces and democratic oversight.[1][5][6]\n\n---\n\n## 4. Technical Safeguards: Guardrails, Alignment, and Uncertainty-Aware Design\n\nTechnical safeguards against hallucinations operate on two layers:\n\n- **Guardrails:** External filters that intercept or transform harmful inputs\u002Foutputs based on policies[4]  \n- **Alignment:** Methods like RLHF or constitutional AI that embed safety preferences in model behavior[4]\n\nBoth are necessary but imperfect. Guardrails face a trade-off:\n\n- **False positives:** Overblocking legitimate content or workflows  \n- **False negatives:** Missing genuinely dangerous content[4]\n\nFor military targeting, this must be tuned carefully: protect civilians and legal constraints while remaining usable for time-critical decisions.\n\nResearch on hallucinations indicates:\n\n- Reliability gains require changing evaluation and reward structures  \n- Metrics should value calibrated uncertainty and willingness to say “I do not know,” not just benchmark scores that reward confident guessing.[3]\n\n💼 **Enterprise practice:** Organizations mitigate hallucinations via:\n\n- Retrieval-augmented generation (RAG)  \n- Enforced source citations  \n- Human validation workflows for any process relying on AI outputs[6]\n\nThese practices are a baseline for stricter military adaptations.\n\nA safe-by-design targeting architecture should include:\n\n- **Uncertainty-aware models** trained and evaluated on recognizing and communicating doubt[3]  \n- **Mission-specific guardrails** that block direct target designation or engagement commands[4]  \n- **Mandatory human verification loops** so AI recommendations never become binding without documented human review[6]  \n- **Independent logging and audit layers** capturing prompts, outputs, and decision traces for after-action review and legal 
scrutiny[4][6]\n\n⚡ **Design principle:** Treat every AI component near the kill chain like a safety-critical aviation subsystem: observable, auditable, and engineered to fail safely rather than to be confidently wrong.\n\n---\n\n## 5. Operational Architecture: From Sensor Data to Human-in-the-Loop Decisions\n\nCyber threat intelligence platforms provide a conceptual template:\n\n- They orchestrate automated collection, enrichment, analysis, and dissemination across heterogeneous sources  \n- AI handles volume and complexity while humans retain analytical authority[2]\n\nAn AI-enabled targeting pipeline will:\n\n- Ingest ISR video, radar\u002Finfrared, SIGINT, open-source intelligence, and mission reports  \n- Resemble the heterogeneous data flows handled by Palantir and the planned French OTDH system[2][5]\n\nIn this environment, data governance, provenance tracking, and access control are as critical as model performance.\n\nLLMs or multimodal models should be constrained to *supportive* roles, such as:\n\n- Summarizing multi-source intelligence for commanders  \n- Proposing hypotheses about adversary behavior or intent  \n- Highlighting anomalies or inconsistencies across data streams[2][6]\n\nThey should *not* issue binding target designations or fire-authority recommendations. 
This containment limits the operational “blast radius” of hallucinated inferences.\n\nHuman operators—analysts, legal advisors, commanders—must remain final arbiters over kinetic force, echoing responsible vendors’ stance that lethal decisions cannot be safely delegated to current systems.[1] But human control must be meaningful, not symbolic.\n\n💡 **Human-in-the-loop, not human-on-the-loop:** Interfaces must enable interrogation of AI outputs, not passive rubber-stamping.\n\nInterfaces should:\n\n- Surface model confidence scores and, where possible, calibrated uncertainty bands  \n- Reveal underlying evidence, including which sensors or sources informed each conclusion  \n- Clearly flag when outputs rely on extrapolation or pattern completion rather than retrieved facts[3][6]\n\nThese choices implement the shift advocated by hallucination research: from systems that always answer to systems that know when to stop and defer.[3] Combined with procedural safeguards, they help ensure AI augments rather than displaces human control over lethal outcomes.\n\n---\n\n## 6. 
Governance, Policy, and Capability Roadmap for Democratic States\n\nTechnical solutions need coherent governance and policy.\n\nDefense procurement already recognizes sovereignty for critical data and AI:\n\n- France’s move from Palantir to a national OTDH platform reflects the view that systems handling sensitive intelligence must be domestically governed.[5]  \n- This sovereignty mindset should extend to any AI used in targeting.\n\nDemocratic states can codify red lines aligned with those of responsible AI vendors:\n\n- Prohibit fully autonomous lethal weapons and mass domestic surveillance  \n- Mandate meaningful human control, traceability, and review for AI-supported targeting decisions[1]\n\nHallucination risk management policies should draw on enterprise best practices:\n\n- Treat AI outputs as unverified suggestions  \n- Require corroboration for high-impact decisions  \n- Establish escalation paths when AI and human assessments diverge[6]\n\nIn defense, these must be hardened through:\n\n- Certification regimes for safety-critical AI components  \n- Independent testing and evaluation, including red-teaming against hallucination scenarios  \n- Legal review embedded in doctrine and rules of engagement\n\n📊 **Metric shift:** Regulators and research agencies should adjust benchmarks to emphasize *calibrated uncertainty* and quality of deferrals, not just accuracy or task completion.[3] This incentivizes “humble” AI that enhances human decision-making.\n\nCyber threat intelligence and information operations are living laboratories:\n\n- They already face fast-moving, AI-shaped threats  \n- They experiment with guardrails, alignment, and uncertainty-aware workflows[2][4]\n\nLessons from non-kinetic operations—structuring human oversight, logging for attribution, auditing AI-assisted analysis—can be hardened before extending similar approaches to kinetic targeting.\n\n⚠️ **Roadmap imperative:** The goal is not to exclude AI from defense, but to steer its integration so 
it reinforces, rather than corrodes, democratic control over military power.\n\n---\n\n## Conclusion: From Convincing Bluffers to Cautious Partners\n\nAI hallucinations are a structural byproduct of how today’s models are trained and evaluated. We have rewarded systems for being convincing bluffers, not reliably cautious partners.[3] In commercial settings, this already causes reputational harm, compliance exposure, and operational drag.[6] In military targeting, the same behavior threatens civilians, escalation control, and the legitimacy of democratic armed forces.\n\nA responsible trajectory for democratic states rests on:\n\n- Clear red lines against fully autonomous lethal use and mass domestic surveillance[1]  \n- Sovereign, auditable infrastructures for sensitive data and targeting-related AI[5]  \n- Uncertainty-aware model design that values calibrated doubt over ungrounded confidence[3]  \n- Stringent guardrails and role constraints that keep AI outputs advisory, not determinative[4][6]  \n- Genuine human authority, backed by transparent interfaces, logging, and legal accountability\n\nThe question is not when AI will be “good enough” to replace humans in targeting, but how it can safely augment human judgment without ever hallucinating its way into pulling the trigger.\n\nUse this blueprint to audit AI-for-targeting initiatives: map where hallucinations could emerge, how errors might propagate through data pipelines, where humans truly retain control, and how procurement, testing, and rules of engagement must evolve. The window to embed safety, humility, and democratic oversight into military AI is open now—before hallucinations migrate from documents and dashboards into real-world battlefields.","\u003Ch2>Introduction\u003C\u002Fh2>\n\u003Cp>When an AI model hallucinates in a customer chatbot, the damage is usually limited to reputation, trust, and compliance. 
In a military targeting system, the same behavior can misidentify civilians, justify unlawful strikes, or trigger escalation.\u003C\u002Fp>\n\u003Cp>Democratic states already use AI for intelligence, sensor processing, and decision support. These systems sit close to lethal decision chains while their core failure mode—plausible but false output—remains weakly controlled. Hallucinations are a structural risk, not a cosmetic bug.\u003C\u002Fp>\n\u003Cp>The issue is no longer \u003Cem>whether\u003C\u002Fem> militaries will use AI, but \u003Cem>how\u003C\u002Fem> they can do so without undermining international humanitarian law, civil liberties, and domestic legitimacy. That demands architectures, doctrines, and governance that assume AI will sometimes be confidently wrong.\u003C\u002Fp>\n\u003Cp>This article offers a practical blueprint: where hallucinations arise, how they interact with targeting workflows, what ethical and legal pitfalls they create, and how democratic states can build “safe-by-design” systems that augment, rather than replace, human judgment over the use of force.\u003C\u002Fp>\n\u003Chr>\n\u003Ch2>1. 
Strategic Context: Why Hallucinations Matter in Military Targeting\u003C\u002Fh2>\n\u003Cp>Advanced AI is already embedded in security and defense:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>France’s domestic intelligence service renewed its contract with Palantir to process large volumes of heterogeneous data, while building a sovereign platform (OTDH) to regain control over sensitive capabilities and data flows.\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>US defense agencies adopt powerful models from private vendors, even as leading actors such as Anthropic refuse to support fully autonomous lethal weapons or mass domestic surveillance, arguing current systems are too unreliable for deadly force without human supervision.\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>In parallel:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Cyber threat intelligence teams face huge data volumes—thousands of new malware samples daily and zero-days weaponized in under 24 hours—driving reliance on AI for triage and interpretation.\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>This mirrors military sensor fusion and targeting: massive volume, time pressure, and uncertainty.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Enterprise deployments show what happens when hallucinations enter workflows:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Confident but false answers cause compliance violations, reputational damage, and operational disruption.\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>In kinetic environments, the same behavior can fabricate hostile activity, misidentify combatants, or provide spurious corroboration for a strike.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>⚠️ \u003Cstrong>Strategic 
implication:\u003C\u002Fstrong> For democratic governments, delegating lethal authority to systems that can fabricate facts conflicts with accountability, discrimination, and proportionality.\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa>\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003C\u002Fp>\n\u003Cp>Hallucinations are thus a strategic and political fault line. Any AI near targeting decisions must be treated as a high-risk component whose failures can reverberate through diplomacy, public trust, and long-term stability.\u003C\u002Fp>\n\u003Chr>\n\u003Ch2>2. Understanding AI Hallucinations in a Targeting Context\u003C\u002Fh2>\n\u003Cp>Language models are not trained to know when they are ignorant:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>They are rewarded for producing plausible continuations of text, not for admitting “I don’t know.”\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>Hallucinations emerge from optimizing for fluency and apparent correctness, not from a simple glitch.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>This creates “high-performing bluffers”:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Models sound right under pressure even when uncertainty is high.\u003C\u002Fli>\n\u003Cli>In customer support, this yields polished nonsense; in intelligence analysis, it can yield confident but unfounded interpretations of reconnaissance, intercepts, or pattern-of-life data.\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Enterprise evidence shows:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Hallucinated content is usually delivered with stylistic confidence, making it persuasive enough to induce legal or compliance errors when users relax their skepticism.\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source 
[6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>In a targeting cell under time pressure, similar confidence can short-circuit doubt.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>💡 \u003Cstrong>Key insight:\u003C\u002Fstrong> In high-stakes environments, \u003Cem>overconfident wrongness\u003C\u002Fem> is more dangerous than visible uncertainty.\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fp>\n\u003Cp>Risk is amplified in automated pipelines:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Cyber threat intelligence platforms chain collection, enrichment, correlation, and dissemination; a false early inference can be enriched and recirculated as “fact.”\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>Military architectures that fuse ISR, SIGINT, and open-source intelligence face the same cascading error risk.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Main failure modes in targeting contexts include:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Fabricating hostile activity in noisy or ambiguous sensor data\u003C\u002Fli>\n\u003Cli>Overconfidently classifying dual-use or civilian infrastructure as legitimate targets\u003C\u002Fli>\n\u003Cli>Hallucinating links between communications and hostile networks, creating illusory “patterns” of threat\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>These errors can be subtle, plausible, and hard to challenge in real time. Seeing hallucinations as socio-technical—shaped by training incentives and evaluation culture—shows why technical fixes alone are insufficient without changes in design, deployment, and supervision.\u003C\u002Fp>\n\u003Chr>\n\u003Ch2>3. 
Ethical, Legal, and Democratic Risks of Hallucinating Targeting Systems\u003C\u002Fh2>\n\u003Cp>Anthropic’s refusal to support fully autonomous lethal weapons is grounded in:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>The view that current AI is too unreliable for kill decisions without ultimate human oversight\u003C\u002Fli>\n\u003Cli>The belief that such use is incompatible with democratic values and civilian protection\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>The same firm rejects enabling mass domestic surveillance, warning it would conflict with democratic norms.\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa> Hallucinating surveillance systems could:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Wrongly flag citizens as extremists or foreign agents\u003C\u002Fli>\n\u003Cli>Entrench unjust watchlists and disproportionate policing at scale\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Business environments already show hallucination-driven regulatory breaches:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Inaccurate personal data conflicts with accuracy requirements\u003C\u002Fli>\n\u003Cli>Incorrect legal guidance leads to non-compliant actions\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>In a military theater, similar misrepresentations could cause:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Wrong classification of individuals as combatants\u003C\u002Fli>\n\u003Cli>Faulty attribution of attacks to groups or states\u003C\u002Fli>\n\u003Cli>Misjudged proportionality based on fabricated or distorted evidence\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>📊 \u003Cstrong>Compliance parallel:\u003C\u002Fstrong> What is a GDPR violation in commerce can become a war crime when misclassification 
results in unlawful targeting, not misaddressed marketing.\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fp>\n\u003Cp>Reliance on foreign AI platforms adds governance tensions:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>The French DGSI’s continued use of Palantir, despite concerns about US Cloud Act exposure, shows the fragility of sovereignty when critical security data flows through foreign providers.\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>In targeting, such dependencies can complicate accountability, evidence chains, and protection of classified information.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Combined—model unreliability, opaque transnational data flows, and lethal stakes—this creates systemic risk:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>A hallucinated threat, amplified by a black-box supply chain, could trigger wrongful strikes, diplomatic crises, and long-term erosion of public trust in armed forces and democratic oversight.\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa>\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Chr>\n\u003Ch2>4. 
Technical Safeguards: Guardrails, Alignment, and Uncertainty-Aware Design\u003C\u002Fh2>\n\u003Cp>Technical safeguards against hallucinations operate on two layers:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>\u003Cstrong>Guardrails:\u003C\u002Fstrong> External filters that intercept or transform harmful inputs\u002Foutputs based on policies\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Alignment:\u003C\u002Fstrong> Methods like RLHF or constitutional AI that embed safety preferences in model behavior\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Both are necessary but imperfect. Guardrails face a trade-off:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>\u003Cstrong>False positives:\u003C\u002Fstrong> Overblocking legitimate content or workflows\u003C\u002Fli>\n\u003Cli>\u003Cstrong>False negatives:\u003C\u002Fstrong> Missing genuinely dangerous content\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>For military targeting, this must be tuned carefully: protect civilians and legal constraints while remaining usable for time-critical decisions.\u003C\u002Fp>\n\u003Cp>Research on hallucinations indicates:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Reliability gains require changing evaluation and reward structures\u003C\u002Fli>\n\u003Cli>Metrics should value calibrated uncertainty and willingness to say “I do not know,” not just benchmark scores that reward confident guessing.\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>💼 \u003Cstrong>Enterprise practice:\u003C\u002Fstrong> Organizations mitigate hallucinations via:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Retrieval-augmented generation (RAG)\u003C\u002Fli>\n\u003Cli>Enforced source 
citations\u003C\u002Fli>\n\u003Cli>Human validation workflows for any process relying on AI outputs\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>These practices are a baseline for stricter military adaptations.\u003C\u002Fp>\n\u003Cp>A safe-by-design targeting architecture should include:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>\u003Cstrong>Uncertainty-aware models\u003C\u002Fstrong> trained and evaluated on recognizing and communicating doubt\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Mission-specific guardrails\u003C\u002Fstrong> that block direct target designation or engagement commands\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Mandatory human verification loops\u003C\u002Fstrong> so AI recommendations never become binding without documented human review\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Independent logging and audit layers\u003C\u002Fstrong> capturing prompts, outputs, and decision traces for after-action review and legal scrutiny\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>⚡ \u003Cstrong>Design principle:\u003C\u002Fstrong> Treat every AI component near the kill chain like a safety-critical aviation subsystem: observable, auditable, and engineered to fail safely rather than to be confidently wrong.\u003C\u002Fp>\n\u003Chr>\n\u003Ch2>5. 
Operational Architecture: From Sensor Data to Human-in-the-Loop Decisions\u003C\u002Fh2>\n\u003Cp>Cyber threat intelligence platforms provide a conceptual template:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>They orchestrate automated collection, enrichment, analysis, and dissemination across heterogeneous sources\u003C\u002Fli>\n\u003Cli>AI handles volume and complexity while humans retain analytical authority\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>An AI-enabled targeting pipeline will:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Ingest ISR video, radar\u002Finfrared, SIGINT, open-source intelligence, and mission reports\u003C\u002Fli>\n\u003Cli>Resemble the heterogeneous data flows handled by Palantir and the planned French OTDH system\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>In this environment, data governance, provenance tracking, and access control are as critical as model performance.\u003C\u002Fp>\n\u003Cp>LLMs or multimodal models should be constrained to \u003Cem>supportive\u003C\u002Fem> roles, such as:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Summarizing multi-source intelligence for commanders\u003C\u002Fli>\n\u003Cli>Proposing hypotheses about adversary behavior or intent\u003C\u002Fli>\n\u003Cli>Highlighting anomalies or inconsistencies across data streams\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>They should \u003Cem>not\u003C\u002Fem> issue binding target designations or fire-authority recommendations. 
This containment limits the operational “blast radius” of hallucinated inferences.\u003C\u002Fp>\n\u003Cp>Human operators—analysts, legal advisors, commanders—must remain final arbiters over kinetic force, echoing responsible vendors’ stance that lethal decisions cannot be safely delegated to current systems.\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa> But human control must be meaningful, not symbolic.\u003C\u002Fp>\n\u003Cp>💡 \u003Cstrong>Human-in-the-loop, not human-on-the-loop:\u003C\u002Fstrong> Interfaces must enable interrogation of AI outputs, not passive rubber-stamping.\u003C\u002Fp>\n\u003Cp>Interfaces should:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Surface model confidence scores and, where possible, calibrated uncertainty bands\u003C\u002Fli>\n\u003Cli>Reveal underlying evidence, including which sensors or sources informed each conclusion\u003C\u002Fli>\n\u003Cli>Clearly flag when outputs rely on extrapolation or pattern completion rather than retrieved facts\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>These choices implement the shift advocated by hallucination research: from systems that always answer to systems that know when to stop and defer.\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa> Combined with procedural safeguards, they help ensure AI augments rather than displaces human control over lethal outcomes.\u003C\u002Fp>\n\u003Chr>\n\u003Ch2>6. 
Governance, Policy, and Capability Roadmap for Democratic States\u003C\u002Fh2>\n\u003Cp>Technical solutions need coherent governance and policy.\u003C\u002Fp>\n\u003Cp>Defense procurement already recognizes the need for sovereignty over critical data and AI:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>France’s planned migration from Palantir to a sovereign OTDH platform reflects the view that systems handling sensitive intelligence must be domestically governed.\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>This sovereignty mindset should extend to any AI used in targeting.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Democratic states can codify red lines, aligned with responsible AI vendors:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Prohibit fully autonomous lethal weapons and mass domestic surveillance\u003C\u002Fli>\n\u003Cli>Mandate meaningful human control, traceability, and review for AI-supported targeting decisions\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Hallucination risk management policies should draw on enterprise best practices:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Treat AI outputs as unverified suggestions\u003C\u002Fli>\n\u003Cli>Require corroboration for high-impact decisions\u003C\u002Fli>\n\u003Cli>Establish escalation paths when AI and human assessments diverge\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>In defense, these must be hardened through:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Certification regimes for safety-critical AI components\u003C\u002Fli>\n\u003Cli>Independent testing and evaluation, including red-teaming against hallucination scenarios\u003C\u002Fli>\n\u003Cli>Legal review embedded in doctrine and rules of engagement\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>📊 \u003Cstrong>Metric shift:\u003C\u002Fstrong> Regulators and 
research agencies should adjust benchmarks to emphasize \u003Cem>calibrated uncertainty\u003C\u002Fem> and quality of deferrals, not just accuracy or task completion.\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa> This incentivizes “humble” AI that enhances human decision-making.\u003C\u002Fp>\n\u003Cp>Cyber threat intelligence and information operations are living laboratories:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>They already face fast-moving, AI-shaped threats\u003C\u002Fli>\n\u003Cli>They experiment with guardrails, alignment, and uncertainty-aware workflows\u003Ca href=\"#source-2\" class=\"citation-link\" title=\"View source [2]\">[2]\u003C\u002Fa>\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Lessons from non-kinetic operations—structuring human oversight, logging for attribution, auditing AI-assisted analysis—can be tested and hardened there before similar approaches are extended to kinetic targeting.\u003C\u002Fp>\n\u003Cp>⚠️ \u003Cstrong>Roadmap imperative:\u003C\u002Fstrong> The goal is not to exclude AI from defense, but to steer its integration so it reinforces, rather than corrodes, democratic control over military power.\u003C\u002Fp>\n\u003Chr>\n\u003Ch2>Conclusion: From Convincing Bluffers to Cautious Partners\u003C\u002Fh2>\n\u003Cp>AI hallucinations are a structural byproduct of how today’s models are trained and evaluated. 
We have rewarded systems for being convincing bluffers, not reliably cautious partners.\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa> In commercial settings, this already causes reputational harm, compliance exposure, and operational drag.\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa> In military targeting, the same behavior threatens civilians, escalation control, and the legitimacy of democratic armed forces.\u003C\u002Fp>\n\u003Cp>A responsible trajectory for democratic states rests on:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Clear red lines against fully autonomous lethal use and mass domestic surveillance\u003Ca href=\"#source-1\" class=\"citation-link\" title=\"View source [1]\">[1]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>Sovereign, auditable infrastructures for sensitive data and targeting-related AI\u003Ca href=\"#source-5\" class=\"citation-link\" title=\"View source [5]\">[5]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>Uncertainty-aware model design that values calibrated doubt over ungrounded confidence\u003Ca href=\"#source-3\" class=\"citation-link\" title=\"View source [3]\">[3]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>Stringent guardrails and role constraints that keep AI outputs advisory, not determinative\u003Ca href=\"#source-4\" class=\"citation-link\" title=\"View source [4]\">[4]\u003C\u002Fa>\u003Ca href=\"#source-6\" class=\"citation-link\" title=\"View source [6]\">[6]\u003C\u002Fa>\u003C\u002Fli>\n\u003Cli>Genuine human authority, backed by transparent interfaces, logging, and legal accountability\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>The question is not when AI will be “good enough” to replace humans in targeting, but how it can safely augment human judgment without ever hallucinating its way into pulling the trigger.\u003C\u002Fp>\n\u003Cp>Use this blueprint to audit AI-for-targeting initiatives: map where hallucinations could emerge, how errors might 
propagate through data pipelines, where humans truly retain control, and how procurement, testing, and rules of engagement must evolve. The window to embed safety, humility, and democratic oversight into military AI is open now—before hallucinations migrate from documents and dashboards into real-world battlefields.\u003C\u002Fp>\n","Introduction\n\nWhen an AI model hallucinates in a customer chatbot, the damage is usually limited to reputation, trust, and compliance. In a military targeting system, the same behavior can misidentify...","hallucinations",[],1921,10,"2026-03-12T17:14:53.890Z",[17,22,26,30,34,38],{"title":18,"url":19,"summary":20,"type":21},"\"Incompatible avec les valeurs démocratiques\" : la start-up américaine Anthropic refuse à l'armée américaine une utilisation sans restriction de son IA","https:\u002F\u002Fwww.franceinfo.fr\u002Finternet\u002Fintelligence-artificielle\u002Fincompatible-avec-les-valeurs-democratiques-la-start-up-americaine-anthropic-refuse-a-l-armee-americaine-une-utilisation-sans-restriction-de-son-ia_7833287.html","La start-up californienne Anthropic a refusé, jeudi 26 février, d'accorder à l'armée américaine une utilisation sans restriction de son intelligence artificielle (IA), deux jours après l'ultimatum for...","kb",{"title":23,"url":24,"summary":25,"type":21},"Threat Intelligence Augmentée par IA | Ayi NEDJIMI","https:\u002F\u002Fwww.ayinedjimi-consultants.fr\u002Fia-threat-intelligence-augmentee.html","Threat Intelligence Augmentée par IA\n====================================\n\nEnrichir et automatiser le cycle de threat intelligence avec les LLM pour une anticipation proactive des menaces cyber\n\nAyi N...",{"title":27,"url":28,"summary":29,"type":21},"Signal faible : Why language models hallucinate","https:\u002F\u002Fwww.omnimodele.com\u002Fia\u002F","Le Papier : \"Why language models hallucinate\"\n\nL'Analyse : Ce papier explique que les hallucinations ne sont pas un bug, mais un comportement appris. 
Les LLMs sont entraînés comme des étudiants passan...",{"title":31,"url":32,"summary":33,"type":21},"Garde-fous des LLM: quelle efficacité? Étude comparative des performances de filtrage des LLM chez les leaders de la GenAI","https:\u002F\u002Funit42.paloaltonetworks.com\u002Ffr\u002Fcomparing-llm-guardrails-across-genai-platforms\u002F","Synthèse\n---------------------------------------------------------------------------------------------------\nNous avons mené une étude comparative des garde-fous intégrés à trois grandes plateformes d...",{"title":35,"url":36,"summary":37,"type":21},"Le ministère des Armées cherche à se doter de la capacité à analyser les flux vidéos grâce à l’intelligence artificielle","https:\u002F\u002Fwww.opex360.com\u002F2026\u002F02\u002F27\u002Fle-ministere-des-armees-cherche-a-se-doter-de-la-capacite-a-analyser-les-flux-videos-grace-a-lintelligence-artificielle\u002F","En décembre dernier, la Direction générale de la sécurité intérieure [DGSI] a reconduit pour trois ans de plus le contrat qu’elle avait notifié à l’entreprise américaine Palantir afin de disposer de s...",{"title":39,"url":40,"summary":41,"type":21},"Les hallucinations des modèles LLM : enjeux et stratégies pour les ETI en 2025","https:\u002F\u002Fwww.therevealinsightproject.com\u002Fblog\u002Fhallucinations-ia-enjeux-et-strategie-eti-2025","Écrit par Deborah Fassi\n\nContexte & Enjeux des hallucinations IA pour les Entreprises en 2025\n========================================================================\n\nEn 2025, l'intégration des Large...",null,{"generationDuration":44,"kbQueriesCount":45,"confidenceScore":46,"sourcesCount":45},184848,6,100,{"metaTitle":48,"metaDescription":49},"AI hallucination in military targeting: 7 critical risks","AI hallucinations in military targeting can mislead commanders and escalate conflicts. 
Learn risks, safeguards, and governance principles to deploy AI safely.","en","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1757258885972-3c111d1307f3?w=1200&h=630&fit=crop&crop=entropy&q=60&auto=format,compress",{"photographerName":53,"photographerUrl":54,"unsplashUrl":55},"Akbar Jawad","https:\u002F\u002Funsplash.com\u002F@akbarjawadd?utm_source=coreprose&utm_medium=referral","https:\u002F\u002Funsplash.com\u002Fphotos\u002Fsoldier-aiming-rifle-in-green-night-vision-RfmLAm-lPm0?utm_source=coreprose&utm_medium=referral",false,{"key":58,"name":59,"nameEn":59},"ai-engineering","AI Engineering & LLM Ops",[61,69,77,84],{"id":62,"title":63,"slug":64,"excerpt":65,"category":66,"featuredImage":67,"publishedAt":68},"69fc80447894807ad7bc3111","Cadence's ChipStack Mental Model: A New Blueprint for Agent-Driven Chip Design","cadence-s-chipstack-mental-model-a-new-blueprint-for-agent-driven-chip-design","From Human Intuition to ChipStack’s Mental Model\n\nModern AI-era SoCs are limited less by EDA speed than by how fast scarce verification talent can turn messy specs into solid RTL, testbenches, and clo...","trend-radar","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1564707944519-7a116ef3841c?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxNnx8YXJ0aWZpY2lhbCUyMGludGVsbGlnZW5jZSUyMHRlY2hub2xvZ3l8ZW58MXwwfHx8MTc3ODE1NTU4OHww&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-05-07T12:11:49.993Z",{"id":70,"title":71,"slug":72,"excerpt":73,"category":74,"featuredImage":75,"publishedAt":76},"69ec35c9e96ba002c5b857b0","Anthropic Claude Code npm Source Map Leak: When Packaging Turns into a Security Incident","anthropic-claude-code-npm-source-map-leak-when-packaging-turns-into-a-security-incident","When an AI coding tool’s minified JavaScript quietly ships its full TypeScript via npm source maps, it is not just leaking “how the product works.”  \n\nIt can expose:\n\n- Model orchestration logic  \n- 
A...","security","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1770278856325-e313d121ea16?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxNnx8Y3liZXJzZWN1cml0eSUyMHRlY2hub2xvZ3l8ZW58MXwwfHx8MTc3NzA4ODMyMXww&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-25T03:38:40.358Z",{"id":78,"title":79,"slug":80,"excerpt":81,"category":11,"featuredImage":82,"publishedAt":83},"69ea97b44d7939ebf3b76ac6","Lovable Vibe Coding Platform Exposes 48 Days of AI Prompts: Multi‑Tenant KV-Cache Failure and How to Fix It","lovable-vibe-coding-platform-exposes-48-days-of-ai-prompts-multi-tenant-kv-cache-failure-and-how-to-fix-it","From Product Darling to Incident Report: What Happened\n\nLovable Vibe was a “lovable” AI coding assistant inside IDE-like workflows.  \nIt powered:\n\n- Autocomplete, refactors, code reviews  \n- Chat over...","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1771942202908-6ce86ef73701?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxsb3ZhYmxlJTIwdmliZSUyMGNvZGluZyUyMHBsYXRmb3JtfGVufDF8MHx8fDE3NzY5OTk3MTB8MA&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-23T22:12:17.628Z",{"id":85,"title":86,"slug":87,"excerpt":88,"category":11,"featuredImage":89,"publishedAt":90},"69ea7a6f29f0ff272d10c43b","Anthropic Mythos AI: Inside the ‘Too Dangerous’ Cybersecurity Model and What Engineers Must Do Next","anthropic-mythos-ai-inside-the-too-dangerous-cybersecurity-model-and-what-engineers-must-do-next","Anthropic’s Mythos is the first mainstream large language model whose creators publicly argued it was “too dangerous” to release, after internal tests showed it could autonomously surface thousands 
of...","https:\u002F\u002Fimages.unsplash.com\u002Fphoto-1728547874364-d5a7b7927c5b?ixid=M3w4OTczNDl8MHwxfHNlYXJjaHwxfHxhbnRocm9waWMlMjBteXRob3MlMjBpbnNpZGUlMjB0b298ZW58MXwwfHx8MTc3Njk3NjU3Nnww&ixlib=rb-4.1.0&w=1200&h=630&fit=crop&crop=entropy&auto=format,compress&q=60","2026-04-23T20:09:25.832Z",["Island",92],{"key":93,"params":94,"result":96},"ArticleBody_qJ9dZjqepNJ7z46Jzc1JOYVu5Yx4VnJLpRWuTWMTzc",{"props":95},"{\"articleId\":\"69b2f35fcd7f21484341204e\",\"linkColor\":\"red\"}",{"head":97},{}]