
SAIC Exec Discusses Why LLMs Fall Short for Military, Intelligence Missions


Jay Meil, vice president of strategic mission innovation and chief data scientist at SAIC, said large language models, or LLMs, are not designed for military and intelligence missions, and that retrieval-augmented generation with reasoning, also known as RAG-R, is better suited to supporting faster decision-making.

In a new blog published on the company’s official website, Meil explores the limitations of LLMs.



Why Should Agencies Reconsider LLMs?

Meil explained that foundational models are trained on public text and lack access to mission-specific data, such as intelligence, surveillance and reconnaissance feeds; operations order updates; and raw intelligence or platform details. Because an LLM generates responses from statistical patterns rather than verified information, its output may be incorrect, he said. He warned that, in operational scenarios, these hallucinated responses can delay or disrupt missions.

The knowledge inside an LLM is fixed at the time of training, which creates another problem. According to the executive, aging data leaves the model unaware of evolving rules of engagement, theater-level guidance and new tactics, techniques and procedures. LLMs also do not understand doctrinal constructs central to military planning, such as the difference between fires and effects, which means the technology cannot “reason through a kill chain, an acquisition lifecycle or a complex multi-domain targeting cycle,” he noted.

Additionally, the probabilistic nature of LLMs means the same question can produce different answers. Meil said non-deterministic systems are liabilities in environments that require traceability, auditability and consistency.
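The non-determinism Meil describes comes from sampling: a decoder draws each token from a probability distribution, so two runs of the same prompt can diverge. A minimal sketch illustrates the contrast with deterministic, greedy decoding. The token distribution below is invented for illustration and is not drawn from any real model:

```python
import random

# Hypothetical next-token distribution an LLM might produce for one prompt.
# Illustrative numbers only, not from any real model.
next_token_probs = {"advance": 0.40, "hold": 0.35, "withdraw": 0.25}

def sample_token(probs, rng):
    """Stochastic decoding: draws a token, so repeated calls can differ."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_token(probs):
    """Deterministic decoding: always returns the highest-probability token."""
    return max(probs, key=probs.get)

rng = random.Random()  # unseeded, like production sampling
samples = {sample_token(next_token_probs, rng) for _ in range(100)}
print(samples)                          # typically several distinct answers
print(greedy_token(next_token_probs))   # always "advance"
```

Systems that require the traceability and consistency Meil mentions typically pin decoding to the deterministic path (or log the sampling seed) so the same query yields the same auditable answer.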

Finally, the executive stated that commercial models lack native access controls, classification enforcement and zero-trust features needed for data that may be compartmentalized or releasable under specific caveats, potentially creating security risks.

What Is RAG-R?

RAG-R can provide military and intelligence agencies with the capabilities that LLMs alone cannot. According to Meil, RAG-R combines the reasoning power of LLMs with mission-specific data in real time, allowing the technology to deliver “decision aids that can keep pace with the operational tempo.”
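SAIC has not published RAG-R's internals, but the general retrieval-augmented pattern Meil describes can be sketched generically: retrieve the most relevant mission documents for a query, then constrain the model to answer only from that retrieved context. Everything below, including the toy word-overlap scoring, the document store and the prompt template, is an illustrative assumption rather than SAIC's implementation:

```python
# Generic retrieval-augmented generation sketch: retrieve, then ground.
# Scoring, documents and prompt template are illustrative assumptions.

def score(query, doc):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Ground the model's answer in retrieved, mission-specific text."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

mission_docs = [
    "OPORD update: bridge at grid NK1234 closed to traffic",
    "ISR feed summary: convoy observed moving north at 0600",
    "Logistics report: fuel resupply scheduled for 1800",
]
print(build_prompt("Is the bridge at NK1234 open?", mission_docs))
```

Because the answer is generated from retrieved, current documents rather than frozen training data, this pattern addresses the staleness and hallucination concerns raised above; production systems would replace the toy scorer with vector search and enforce access controls on the document store.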

The executive said he will discuss RAG-R and why it is “a game-changer for military and intelligence operations” in a future post.


Written by Elodie Collins
