The federal government is expanding its deployment of artificial intelligence to deliver public services, but Eric Trexler, senior vice president of public sector at Palo Alto Networks, warned that AI use without security creates a vulnerability that adversaries can exploit to cause real harm.
In an interview with FedTech published Tuesday, the executive discussed the impact of AI in federal security operations.

What Are the Risks of AI in Government?
While AI can deliver speed and scale to government operations, Trexler emphasized the importance of baking AI security into every level of the system to protect personally identifiable information and other sensitive data.
“You don’t want information leaving the organization. You don’t want anyone poisoning your large language models so the result sets your people rely on are wrong,” he told the magazine. “AI security has to be present at every level — at the point where the end user interacts with the system, within the systems themselves and in the agents and infrastructure.”
He also revealed during the interview that Palo Alto’s Unit 42, a team of cybersecurity experts, has observed a “hundredfold increase in the speed of attack creation with AI.” He added that the team is seeing 31 billion attacks per day.
How Is AI Affecting Identity Security?
During the interview, he also discussed the challenge of getting identity right to secure federal systems, especially as agencies adopt AI. He predicted that, by 2027, there will be 10 or more machine identities for every human identity within an organization.
In a blog post in early February, Trexler also listed identity as one of the key government cybersecurity trends in 2026. He explained at the time that the line between “identity” and “attack surface” is collapsing due to deepfakes, or AI-generated voices and video that impersonate real people.
The executive told FedTech that organizations must understand what systems they currently have in place and what AI agents can do.