A House committee advanced a bill that would allow the National Institute of Standards and Technology (NIST) to create a formal process for reporting security vulnerabilities in artificial intelligence (AI) systems. As is the case for many security projects, funding concerns could stymie the initiative.
The AI Incident Reporting and Security Enhancement Act was approved by voice vote in the House Science, Space, and Technology Committee on Wednesday. The bill was introduced by a bipartisan trio of representatives from North Carolina, California, and Virginia. If approved by the full Congress and signed into law, it would give NIST a mandate to incorporate AI systems into the National Vulnerability Database (NVD).
The NVD is the federal government’s centralized repository for tracking security vulnerabilities in software and hardware. In its current form, the bill would add to the workload of the already-beleaguered NIST teams managing the database. Earlier this year, NIST paused updating data on reported vulnerabilities, a move program manager Tanya Brewer attributed to budget cuts, flat staffing, and an increase in database-related email traffic.
The bill specifies that the increased workload for NIST would be “subject to the availability of funding,” but Rep. Deborah Ross (D-NC), a sponsor of the bill, said she was aware of the “significant funding and scaling challenges” NIST has already experienced in maintaining the database.
“My colleagues and I on this committee are actively exploring solutions to help NIST address this problem and get the money,” she said.
Even with the committee’s approval, some members expressed concern about the bill’s language, saying that terms such as “substantial artificial intelligence security incident” and “intelligence incident” would need to be clarified to improve the bill’s chances of passage. Such definitional precision has become a larger concern in Congress since the Supreme Court overturned the Chevron doctrine, which had directed courts to defer to federal agencies’ interpretations of ambiguous statutes.
The bill would also require NIST to consult with other federal agencies, such as the Cybersecurity and Infrastructure Security Agency, as well as private-sector organizations, standards bodies, and civil society groups, to develop a common lexicon for reporting AI cybersecurity incidents.