Kathryn Wang, principal of public sector at SandboxAQ, discussed in a recent interview the security risks that artificial intelligence poses, especially to operational technology in critical infrastructure. The executive said during a recent episode of the PrOTect It All podcast that organizations must consider what could benefit from being connected to AI, and what should remain analog systems.
Assessing AI Risks
During the interview, Wang mentioned a Google Gemini exploit discovered during the hacking and security conference DEFCON. Participants at the event found a way to utilize a vulnerability in Gemini’s calendar that would give hackers entry into OT. By telling the AI tool to summarize calendar events, hackers were able to hide an executable command within a calendar invite description, infiltrate a smart home network, and control the thermostat or open and close windows.
If a similar attack were to happen to critical infrastructure, it could adversely impact how the military operates.
“So why if you pick a very strategic location, let’s say an Air Force base, it depends on energy or power in order to open certain gates or to launch their planes,” explained Wang. “They’ll just be sitting there unable to mobilize.”
She also warned about the dangers of hackers manipulating data to confuse AI, cause disruptions in critical infrastructure operations and even hide a missile attack. According to the executive, poisoning data used to train AI is a common tactic among malicious Chinese actors.