The escalating integration of artificial intelligence (AI) in businesses worldwide is a notable trend. Currently, over a third of companies are implementing AI, and an additional 42% are actively exploring its potential benefits. Government agencies, meanwhile, have increasingly begun to benefit from artificial intelligence as well. However, this widespread adoption of AI comes with inherent risks, leading to increased scrutiny from lawmakers across the globe.
To address these concerns, on January 10, 2024, U.S. Congress members Ted W. Lieu, Zach Nunn, Don Beyer, and Marcus Molinaro introduced the Federal Artificial Intelligence Risk Management Act of 2024. The Federal Artificial Intelligence Risk Management Act, a bipartisan and bicameral bill, builds upon its 2023 predecessor and aims to influence the trajectory of AI governance within federal agencies. The legislation specifically emphasizes the adoption of the Artificial Intelligence Risk Management Framework (AI RMF) developed by the National Institute of Standards and Technology (NIST).
Representative Beyer emphasized the significance of safeguarding against potential risks associated with the federal government’s expanding use of innovative AI technology. In support of this objective, he highlighted the proposed bill, which mandates the implementation of robust risk mitigation and AI safety frameworks developed by NIST. Beyer sees this legislative initiative as a pivotal starting point, asserting that it ensures federal agencies possess the essential tools to navigate the intricacies of AI. By doing so, the bill aims to guarantee both the reliability and efficacy of AI systems employed by the government, with the added intention of inspiring other organizations and companies to embrace comparable standards. In essence, this legislation establishes a foundational framework for harnessing the potential of AI to benefit the American people while upholding the highest standards of accountability and transparency.
Representative Nunn, for his part, emphasized the positive impact of technological advancement on society, asserting that when executed correctly, it has the potential to enhance government effectiveness. In the context of the federal government’s implementation of AI, Nunn stressed the need to prioritize the safety of Americans’ data and to maintain transparency regarding government actions.
The NIST AI RMF provides a comprehensive approach to ‘prevent, detect, mitigate, and manage AI risks.’ The framework employs a non-prescriptive, industry-agnostic methodology through four fundamental processes: Map, Measure, Manage, and Govern.
Map: Organizations need to comprehend the objectives of their AI system and the advantages it offers compared to alternative methods. Understanding the intended outcomes and deployment benefits is crucial.
Measure: Utilizing both quantitative and qualitative approaches is essential for evaluating the risks associated with the AI system. This involves an in-depth analysis to determine the level of risk and the extent to which the system can be considered untrustworthy.
Manage: The mitigation of identified risks is imperative, and organizations must prioritize higher-risk systems. Implementing iterative risk monitoring processes is necessary to address new and unforeseen risks that may arise during the deployment of the Artificial Intelligence system.
Govern: Cultivating a culture of risk management is vital, reinforced by appropriate structures, policies, and processes. Establishing a governance framework ensures that risk management practices are consistently applied throughout the organization.
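The four functions above describe an organizational process rather than software, but for teams operationalizing the framework, a simple risk register can make the Map–Measure–Manage–Govern cycle concrete. The sketch below is purely illustrative and assumes a hypothetical scoring scheme (likelihood × impact); the AI RMF itself prescribes no particular data model or code.

```python
# Illustrative sketch only: a minimal risk register loosely modeling the
# four AI RMF functions. All class and field names are hypothetical;
# the framework itself is non-prescriptive and defines no code or schema.
from dataclasses import dataclass, field


@dataclass
class AIRisk:
    description: str      # Map: the identified risk in its deployment context
    likelihood: float     # Measure: estimated probability, 0.0 to 1.0
    impact: float         # Measure: estimated severity, 0.0 to 1.0
    mitigation: str = ""  # Manage: planned response to the risk

    @property
    def score(self) -> float:
        # Assumed scoring scheme: likelihood times impact
        return self.likelihood * self.impact


@dataclass
class RiskRegister:
    owner: str  # Govern: the role accountable for the register
    risks: list = field(default_factory=list)

    def map_risk(self, description: str, likelihood: float, impact: float) -> None:
        """Map: record a risk identified for the AI system."""
        self.risks.append(AIRisk(description, likelihood, impact))

    def prioritized(self) -> list:
        """Manage: surface higher-risk items first for mitigation."""
        return sorted(self.risks, key=lambda r: r.score, reverse=True)


register = RiskRegister(owner="Chief AI Officer")
register.map_risk("Model produces biased outputs", likelihood=0.4, impact=0.9)
register.map_risk("Training data contains personal data", likelihood=0.2, impact=0.8)

for risk in register.prioritized():
    print(f"{risk.score:.2f}  {risk.description}")
```

Because the Manage function is iterative, a register like this would be revisited as new risks emerge during deployment, with governance policies dictating who reviews it and how often.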
Organizations should consider that the Federal AI Risk Management Act is not an isolated piece of legislation. Other state and federal measures, such as Virginia’s Artificial Intelligence Developer Act, Vermont’s bills regulating AI developers, the Federal Farm Tech Act, the Federal No Robot Bosses Act, and various data protection laws, also leverage the AI RMF.
In summary, with AI reshaping industries, embracing this technology with confidence and responsibility is emerging as a competitive advantage in the ever-evolving technological landscape. The enactment of the Federal Artificial Intelligence Risk Management Act of 2024 would mark a significant step toward ensuring responsible and standardized AI deployment across organizations.
Businesses are urged to prioritize compliance with both existing and anticipated regulations, incorporating AI practices that align with compliance standards to mitigate potential legal risks. Given the intricate interplay between technology and law, successfully navigating the AI landscape demands a proactive approach to ensure both responsible and lawful deployment.