
NIST Generative Artificial Intelligence Profile - It’s All In The “How”


To mark 270 days since President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the White House announced in late July 2024 new actions being taken by federal agencies in response to the Order, including releases from the Department of Commerce’s National Institute of Standards and Technology (NIST), and others.


The NIST releases include the Generative Artificial Intelligence Profile (NIST AI 600-1), which partially fulfills the Executive Order. NIST says this Profile “can help organizations identify unique risks posed by generative AI and propose actions for generative AI risk management that best aligns with their goals and priorities.”


The Profile is designed as a “companion resource” for users of the NIST AI Risk Management Framework 1.0 (AI RMF 1.0) and serves as a practical use-case and cross-sectoral Profile of the AI RMF.


Its purpose is to help organizations meet legal and regulatory requirements and to offer advice on best practices for managing risks specific to Generative AI (GAI). In particular, the Profile identifies twelve key risks unique to, or otherwise amplified by, the use of GAI. The identified risks are:


  • Chemical, biological, radiological, or nuclear (CBRN) weapons information or capabilities
  • Confabulation (e.g., “hallucinations” or “fabrications”)
  • Dangerous, violent, or hateful content
  • Data privacy, particularly biometric, health, location, personally identifiable, or other sensitive data
  • Environmental impacts due to resource utilization in training GAI models
  • Harmful bias and homogenization
  • Human-AI configurations (arrangement or interaction of humans and AI systems which may result in such things as “algorithmic aversion”, automation bias, or misalignment of goals)
  • Information integrity
  • Information security
  • Intellectual property
  • Obscene, degrading, and/or abusive content
  • Value chain and component integration (non-transparent or untraceable integration of upstream third-party components across the AI lifecycle, e.g., data acquisition and cleaning, supplier vetting)



The Profile further provides more than 200 actions to help govern, map, measure, and manage these specific risks.


Making It Happen


Organizations are encouraged to consider their GAI risk tolerance and to develop governance policies accordingly. They are further advised to map their business needs and objectives to appropriate systems and to follow through by measuring performance. It is in day-to-day operations, however, that the “how” of implementing GAI asserts itself in earnest. Organizational deployers of GAI systems should keep implementation at the forefront of their minds as they consider business goals, review team structures, and develop policies. Let’s review a small selection of the suggested actions, the implications of each, and some questions that come to mind:


Category: Human-AI Configuration. “Use feedback from internal and external AI Actors, users, individuals, and communities to assess the impact of AI-generated content.”

Question: How will organizational deployers collect this feedback efficiently and in a timely manner, so that incidents can be reported and, if necessary, use of a GAI system discontinued?
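
One conceivable answer is to treat feedback as structured data from the outset, so that severity-tagged reports can be routed automatically. The Python sketch below is illustrative only; every name in it (FeedbackRecord, Severity, triage) is hypothetical, and the Profile itself does not prescribe any particular mechanism.

```python
# A minimal sketch of structured feedback intake for a deployed GAI system.
# All names here are hypothetical; the NIST Profile does not prescribe an
# implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    INFO = 1       # general impressions, feature requests
    CONCERN = 2    # suspected bias, confabulation, quality issues
    INCIDENT = 3   # harmful output requiring incident reporting
    CRITICAL = 4   # grounds to suspend or discontinue the system


@dataclass
class FeedbackRecord:
    source: str        # "internal", "user", "community", etc.
    system_id: str     # which GAI deployment is affected
    description: str
    severity: Severity
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


def triage(record: FeedbackRecord) -> str:
    """Route feedback so incident-level reports reach a response team promptly."""
    if record.severity is Severity.CRITICAL:
        return "escalate: evaluate suspension of " + record.system_id
    if record.severity is Severity.INCIDENT:
        return "open incident ticket for " + record.system_id
    return "queue for periodic impact review"
```

Even a scheme this simple forces the key policy decision into the open: which categories of feedback trigger escalation, and who owns the decision to discontinue a system.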


Category: Information Integrity. “Use real-time auditing tools where they can be demonstrated to aid in the tracking and validation of the lineage and authenticity of AI-generated data.”

Question: How will these auditing tools be incorporated efficiently, supporting accuracy validation and explainability, as GAI is integrated into organizational workflows?
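
By way of illustration only, lineage tracking can begin as an append-only log capturing fingerprints of the model, prompt, and output at generation time. The sketch below assumes a hypothetical log_lineage() helper wrapped around whatever generation call an organization uses; production deployments would more likely rely on dedicated provenance tooling.

```python
# A minimal sketch of lineage recording around a generation call. The helper
# name and log format are assumptions, not tools named by the NIST Profile.
import hashlib
import json
from datetime import datetime, timezone


def log_lineage(model_id: str, prompt: str, output: str, log_path: str) -> dict:
    """Record who/what/when for a piece of AI-generated content."""
    entry = {
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only JSONL lineage log
    return entry
```

Hashing the prompt and output, rather than storing them verbatim, also sidesteps some of the data privacy risks the Profile identifies elsewhere.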


Category: Information Security. “Conduct after-action assessments for GAI system incidents to verify incident response and recovery processes are followed and effective, including to follow procedures for communicating incidents to relevant AI Actors and, where applicable, relevant legal and regulatory bodies.”

Question: How will audit trails be secured and made reliably and routinely available to meet transparency and explainability criteria?
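
One widely used pattern for tamper-evident audit trails is hash chaining, in which each log entry’s hash covers its predecessor, so any retroactive edit breaks the chain and is detectable. The Profile is technology-neutral and does not mandate this design; the sketch below (append_event and verify are hypothetical names) merely illustrates the idea.

```python
# A minimal sketch of a tamper-evident audit trail via hash chaining.
# This is one common approach, not a design required by the NIST Profile.
import hashlib
import json


def append_event(chain: list[dict], event: dict) -> dict:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry = {
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    }
    chain.append(entry)
    return entry


def verify(chain: list[dict]) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

A verifiable chain of this kind can then be shared with auditors or regulators on a routine schedule, which is precisely the kind of transparency the after-action assessments contemplate.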



These questions are by no means exhaustive; they are posed merely to “get the cogs whirring,” so to speak, on how the proposed standards might be realized. All of the steps outlined in the Profile are essential. Indeed, beyond the identified risks, we also need to “expect the unexpected.” That said, the greatest immediate challenge lies in the operational implementation of these systems. Ring-fencing GAI use to protect information integrity and security, recording data, and establishing information-sharing protocols are all critical to this process.

R. Scott Jones

About R. Scott Jones

I am a Partner at Generative Consulting, an attorney, and the CEO of Veritai. I write frequently on matters relating to Generative AI and its successful deployment, both from the user’s perspective and that of the wider community.

DISCLAIMER

The content here is for informational purposes only and does not constitute tax, business, legal, or investment advice. Protect your interests and consult your own advisors as necessary.