In September 2024, the US government continued implementing Executive Order No. 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (AI EO), issued by President Biden in October 2023. Key developments were observed in both federal and state efforts related to artificial intelligence across the United States.
The U.S. Department of Commerce’s Bureau of Industry and Security (BIS) played a pivotal role, proposing updates to the technical thresholds that dictate when developers of dual-use foundation models must report certain activities to the federal government. According to the Notice of Proposed Rulemaking issued on 9 September 2024, developers of AI models trained using more than 10^26 floating-point operations (FLOPs) must comply with these reporting requirements. The rule also redefines the computing clusters subject to these requirements, focusing on networking connections and operational capabilities rather than physical co-location.
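As a rough illustration of how such a compute threshold might be checked (this heuristic and the example figures are not part of the proposed rule), total training compute is commonly estimated as roughly 6 × parameters × training tokens, and that estimate can be compared against the 10^26-operation mark:

```python
# Illustrative sketch only. Uses the common heuristic
# FLOPs ≈ 6 * N * D (N = parameter count, D = training tokens)
# to estimate total training compute. The BIS threshold applies
# to total operations for a training run, not an operations-per-second rate.

THRESHOLD = 1e26  # reporting threshold (total operations)

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D heuristic."""
    return 6 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    """Would this (hypothetical) training run cross the reporting line?"""
    return estimated_training_flops(params, tokens) > THRESHOLD

# Hypothetical example: a 1-trillion-parameter model trained on
# 20 trillion tokens -> 6 * 1e12 * 20e12 = 1.2e26 operations, above 1e26.
print(exceeds_threshold(1e12, 20e12))  # True
```

The numbers here are invented for illustration; actual reporting obligations would depend on the rule's final text, not this back-of-the-envelope estimate.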
On the same day, the U.S. Government Accountability Office (GAO) released a report evaluating how federal agencies are managing AI-related tasks in line with the AI EO. The GAO found that key federal offices, including the Executive Office of the President and the General Services Administration (GSA), had successfully implemented their AI management and talent recruitment plans. The GSA’s framework, for example, emphasizes prioritizing emerging AI technologies in cloud environments through the FedRAMP process.
Furthering international efforts, the U.S. Departments of Commerce and State announced an inaugural meeting of the International Network of AI Safety Institutes. Scheduled for 20-21 November 2024 in San Francisco, this meeting aims to foster collaboration among AI safety experts from ten founding members, including the US, the EU, Japan, and the UK. This network is envisioned to enhance global cooperation in AI safety standards.
Federal agencies have also been tasked with aligning their operations with the Office of Management and Budget’s (OMB) AI guidance under Memorandum M-24-10. By late September, more than 30 agencies, including NASA and the Department of Defense, had published compliance plans addressing AI use case inventories and risk management practices.
Simultaneously, the White House initiated a Task Force on AI Datacenter Infrastructure on 12 September 2024, following a roundtable with industry stakeholders. This task force aims to streamline policies related to clean energy and infrastructure development for AI datacenters, a crucial element given the rising scale of AI operations.
On the state level, California made headlines following Governor Gavin Newsom’s veto of the Safe and Secure Innovation for Frontier AI Models Act (SB 1047) on 29 September 2024. The bill aimed to impose comprehensive security and reporting standards for AI development but was rejected by Newsom due to its stringent computational and financial thresholds, which he argued did not align with the actual risks posed by AI technologies. Newsom highlighted the potential for these standards to mislead the public about the nuances of AI safety.
These developments mark significant steps in regulating and managing the rapid progress of AI technology, demonstrating a focused effort by federal and state entities to harness AI safely and responsibly.
Source: Noah Wire Services