
CISA unveils guidelines for AI and critical infrastructure


The Cybersecurity and Infrastructure Security Agency on Monday released safety and security guidelines for critical infrastructure, a move that comes just days after the Department of Homeland Security announced the formation of a safety and security board focused on the same topic. The guidelines for critical infrastructure owners and operators also fulfill CISA’s obligations under the Biden administration’s October executive order on artificial intelligence.

The guidelines are meant to address both the opportunities that artificial intelligence makes possible for critical infrastructure — which spans 16 sectors, including farming and information technology — and the ways the technology could be weaponized or misused. CISA instructs owners and operators of critical infrastructure to govern, map, measure, and manage their use of the technology, incorporating the National Institute of Standards and Technology’s AI risk management framework.

“Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk,” CISA Director Jen Easterly said in a statement.

The guidelines emphasize a range of steps, including understanding the dependencies of AI vendors that operators might be working with and inventorying AI use cases. They also encourage critical infrastructure owners to create procedures for reporting AI security risks and continually testing AI systems for vulnerabilities. 

Opportunities related to AI span categories including operational awareness, customer service automation, physical security, and forecasting, according to the guidelines. At the same time, the document warns that AI risks to critical infrastructure could include attacks facilitated with AI, attacks aimed at AI systems, and “failures in AI design and implementation,” which could lead to malfunctions or other unintended consequences.

“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks. Our Department is taking steps to identify and mitigate those threats,” Homeland Security Secretary Alejandro Mayorkas said in a statement. 

DHS has been especially active in recent months on artificial intelligence, most notably with the release of its AI roadmap in March. Earlier this month, the department announced that Office of Management and Budget alum Michael Boyce would lead its AI Corps, a group of 50 experts in the technology that the agency aims to hire through 2024. The department also brought on technology company executives — including Sam Altman of OpenAI and Sundar Pichai from Alphabet — to assist with its new board focused on AI and critical infrastructure. 

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies.

Previously she was a reporter at Vox’s tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications.

You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
