Two U.S. senators have introduced legislation designed to improve the tracking and processing of security incidents involving artificial intelligence. The proposed bill builds on current efforts within the federal government to monitor cybersecurity vulnerabilities but addresses risks unique to AI, such as counter-AI techniques that manipulate or subvert an AI system.
On Wednesday, Sen. Mark R. Warner, D-Va., and Sen. Thom Tillis, R-N.C., bipartisan co-chairs of the Senate Cybersecurity Caucus, unveiled the Secure AI Act.
The bill would also establish new functions, including a public database to track voluntary reports of AI security and safety incidents and an Artificial Intelligence Security Center at the National Security Agency, which would promote AI research among the private sector and academia through a subsidized research testbed and develop guidance on counter-AI techniques.
"As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by, and to, this new technology," said Warner in a press release. "Information sharing between the federal government and the private sector plays a crucial role."
Several companies and organizations involved in AI spoke in support of the bill.
"IBM is proud to support the Secure AI Act that expands the current work of NIST, Department of Homeland Security, and the NSA and addresses safety and security incidents in AI systems," said Christopher Padilla, vice president of government and regulatory Affairs for IBM, in a release. "We commend Senator Warner and Senator Tillis for building upon existing voluntary mechanisms to help harmonize efforts across the government."