By George Chedzhemov, SVP Client Services at BigID. BigID were shortlisted in the Best Cloud Data Management Solution category at the 2023/24 Cloud Awards.

A House lawmaker recently introduced a new AI bill that would require federal agencies to follow the National Institute of Standards and Technology’s (NIST) AI risk framework.

While this bill will go through many iterations before its adoption, the House’s AI bill is significant for several reasons. Here’s why organizations looking to adopt AI should pay attention.

US AI bill gains momentum

The House AI bill is significant because it complements the Senate version, and the presence of both House and Senate versions of a bill is a critical step in the U.S. legislative process. While the Senate version lays the groundwork, the House bill’s introduction signals broader legislative support and greatly increases the bill’s chances of becoming law. This dual approach ensures that both chambers of Congress are engaged in refining and endorsing the legislation, which can lead to more effective and comprehensive regulation.

By requiring federal agencies to follow the NIST framework, the government sets a strong example for private industry and academia. This can encourage the adoption of similar standards outside of federal agencies, leading to a more uniform approach to AI risk management across the country.

Why NIST?

The NIST AI risk framework is highly respected by security and AI industry professionals for its comprehensive approach to managing AI risks. It’s designed to standardize practices across different federal agencies, ensuring a consistent approach to AI deployment and risk mitigation. This standardization is crucial in a field as diverse and rapidly evolving as AI, where disparate approaches can lead to inefficiencies and increased risks.

The NIST framework in particular is considered a robust guideline for identifying and mitigating risks associated with AI technologies. Wider adoption could lead to more responsible AI use across the board, helping to more effectively address issues such as inherent model bias, data privacy, model transparency, and information security for both training data and AI model outputs.

Outside of AI, NIST is well known and respected for its technical expertise and long history of developing standards and guidelines in various technology domains. Its reputation lends credibility to the frameworks it develops. NIST frameworks are widely considered to have a thorough and systematic approach to addressing various technology risks. They cover a broad range of issues, including ethical considerations, privacy, security, and reliability, making them a comprehensive guide for technology and information security professionals.

The AI framework in particular is designed to be practical and applicable across various industries and government agencies, and it provides actionable guidance which organizations can realistically implement. NIST’s emphasis on risk management is especially relevant for AI applications, where risks can be complex and multifaceted. The NIST AI framework can help organizations identify, assess, and mitigate these risks effectively.

Finally, NIST frameworks are regularly updated to reflect new developments and challenges in technology, which is particularly important in a field evolving as rapidly as artificial intelligence.

Steps organizations can take toward NIST compliance

Regulating AI will remain a hot topic in the coming years, and this bill is a step in the right direction. To better prepare for the inevitable, organizations need to take steps toward NIST compliance now. Organizations looking to align with the NIST AI risk framework can take several concrete steps:

  1. Understand the framework: Organizations should start by becoming thoroughly familiar with the NIST framework’s overarching guidelines and principles. This involves reviewing the framework’s initial details, but also crucially staying up to date with any changes or revisions, since the guidelines are likely to evolve as the AI technologies rapidly develop, mature, and gain advanced capabilities.
  2. Engage in assessment and planning activities: Conduct an initial assessment in order to identify areas where current AI practices deviate from NIST guidelines. Based on this assessment, organizations can effectively develop a strategic plan to align practices with framework recommendations.
  3. Training and awareness: Ensuring employees are aware of, and well versed in, the principles and practices outlined in the NIST framework is crucial to attaining alignment. This includes regular training sessions and briefings on the framework’s inevitable updates and changes.
  4. Implementation and compliance: Organizations should implement necessary changes to their AI systems and processes in order to more effectively comply with the framework. This may involve technical revisions, programmatic updates, process redesigns, improvements and changes to model evaluation criteria, or introduction of new oversight mechanisms.
  5. Continuous monitoring and improvement: Compliance with the NIST framework is not a one-time effort. Organizations must continuously monitor their AI systems and practices against the framework and make necessary improvements over time.
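The assessment-and-planning step above can be sketched as a simple gap checklist. The four core functions (Govern, Map, Measure, Manage) come from NIST AI RMF 1.0; the specific controls and their statuses below are illustrative assumptions, not part of the framework itself:

```python
# Minimal gap-assessment sketch against the four NIST AI RMF core functions.
# The functions (Govern, Map, Measure, Manage) are from NIST AI RMF 1.0;
# the controls and statuses below are hypothetical placeholders an
# organization would replace with its own inventory.
from dataclasses import dataclass

@dataclass
class Control:
    function: str      # NIST AI RMF core function
    name: str          # organization-specific control (hypothetical)
    implemented: bool  # current compliance status

controls = [
    Control("Govern",  "AI use policy documented",         True),
    Control("Map",     "Model inventory maintained",       False),
    Control("Measure", "Bias metrics tracked per release", False),
    Control("Manage",  "Incident response plan covers AI", True),
]

def gap_report(controls):
    """Group unimplemented controls by core function to prioritize remediation."""
    gaps = {}
    for c in controls:
        if not c.implemented:
            gaps.setdefault(c.function, []).append(c.name)
    return gaps

print(gap_report(controls))
```

Re-running a report like this on a schedule, rather than once, is what the continuous-monitoring step amounts to in practice: the checklist becomes a living artifact that tracks each framework revision.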

What’s next?

A bill built around NIST frameworks is a step in the right direction. NIST draws on some of the best minds in the security industry, in both the public and private sectors. Over the last several years we have seen bills and executive orders full of ideas but lacking substantial frameworks to follow. This new AI bill will give organizations tactical guidelines to follow while enabling their AI utilization.

AI and machine learning will play an increasingly pivotal role in enhancing cybersecurity defenses. For example, AI-driven threat detection will continue to transform cybersecurity by offering increasingly advanced capabilities for identifying and neutralizing threats. These will allow cybersecurity professionals to keep pace with, and continue to neutralize, offensive capabilities deployed by threat actors, who are themselves augmenting their capabilities and sophistication by turning to generative AI and LLMs. The technology raises the stakes in the ongoing cat-and-mouse game, further fueling the arms race between information security professionals and threat actors.

Overall, the key trends for 2024 in cybersecurity revolve around the advanced use of AI and machine learning, increasing adoption of LLMs in various infosec areas, the continued need for proactive defense strategies against sophisticated threat actors, and of course old but persistent threats such as ransomware, malware, and spyware. Addressing all of these on an ongoing basis will require balancing robust security measures with a user-friendly, efficient, and non-disruptive end-user experience. Businesses of all sizes and in all industries must stay vigilant, informed, and agile, adapting their strategies to effectively navigate these emerging challenges.