NIST AI Control Overlays Concept Paper


The National Institute of Standards and Technology (NIST) is developing control overlays for securing Artificial Intelligence (AI) systems. These overlays are designed to help organizations manage the cybersecurity risks associated with various AI use cases, including generative AI and predictive AI.

The NIST AI control overlays are part of a Cyber AI Profile being developed to guide organizations in managing the cybersecurity risks associated with artificial intelligence. This effort complements the NIST AI Risk Management Framework (AI RMF), which provides structured guidance throughout the AI lifecycle, from development through deployment and decommissioning.

Proposed AI Use Cases

The overlays are being developed in a structured approach to address the security risks associated with specific use cases. The concept paper proposes five use cases in this initial release.

Adapting and Using Generative AI – Assistant/Large Language Model (LLM)

This use case covers organizations using AI for content generation, in which content is created from user prompts and pattern recognition across large datasets. Outputs could include summaries and analyses of data produced by an on-premises or third-party LLM.
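
To make the use case concrete, the sketch below shows how an organization might send a document to an LLM endpoint for summarization. The endpoint URL, model name, and environment variable are hypothetical placeholders; an on-premises gateway or a commercial provider's SDK would follow the same basic pattern.

```python
import os
import requests

# Hypothetical LLM gateway endpoint; swap in your on-premises or
# third-party provider's actual API.
LLM_ENDPOINT = "https://llm-gateway.example.com/v1/chat/completions"

def summarize(document: str) -> str:
    """Ask the LLM to summarize a document supplied in the prompt."""
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": "example-model",  # placeholder model name
            "messages": [
                {"role": "system", "content": "Summarize the user's document."},
                {"role": "user", "content": document},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```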

Using and Fine-Tuning Predictive AI

Predictive AI analyzes historical data to predict future outcomes. Applications for this use case could include recommendation services, resume reviews, and credit underwriting. Organizations using these workflow automations should address risks in model training, deployment, and maintenance.
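
As a simple illustration of this use case, the sketch below trains a predictive model on historical outcomes and then scores a new record, roughly the pattern behind recommendation or credit-underwriting workflows. The feature names and data are invented for illustration and assume scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented historical data: [income, debt_ratio, years_employed] -> repaid (1) or defaulted (0)
X = np.array([[55_000, 0.30, 4], [32_000, 0.55, 1], [78_000, 0.20, 9],
              [41_000, 0.48, 2], [90_000, 0.15, 12], [28_000, 0.60, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new applicant; model training, deployment, and maintenance are
# the points where the overlay's controls would apply.
new_applicant = np.array([[47_000, 0.40, 3]])
print("Approval probability:", model.predict_proba(new_applicant)[0][1])
```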

Using AI Agent Systems (AI Agents) – Single Agent

Single-agent AI systems use one intelligent agent to perform tasks or make decisions independently. They can be used for focused work such as analyzing datasets, managing customer service inquiries, or performing repetitive tasks that do not require collaboration or complex decision-making. Other applications include providing contextual insights or coding assistance.
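
The sketch below illustrates the single-agent pattern with a minimal customer-service example: one agent receives an inquiry, decides which of its tools to use, and returns a result without coordinating with any other agent. The keyword-based routing and the tool stubs are purely illustrative; a real deployment would typically delegate that decision to an LLM and call live systems.

```python
from dataclasses import dataclass

@dataclass
class SupportAgent:
    """A single autonomous agent that handles customer inquiries end to end."""
    name: str

    def check_order_status(self, order_id: str) -> str:
        # Placeholder for a real order-management lookup.
        return f"Order {order_id} is in transit."

    def reset_password(self, account: str) -> str:
        # Placeholder for a real identity-management call.
        return f"Password reset link sent to {account}."

    def handle(self, inquiry: str) -> str:
        """Decide which tool to invoke; no other agents are involved."""
        text = inquiry.lower()
        if "order" in text:
            return self.check_order_status(order_id="A1234")
        if "password" in text:
            return self.reset_password(account="user@example.com")
        return "Escalating to a human representative."

agent = SupportAgent(name="support-bot")
print(agent.handle("Where is my order?"))
```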

Using AI Agent Systems (AI Agents) – Multi-Agent

Multi-agent systems are composed of multiple intelligent agents that interact and collaborate to achieve specific goals. Each agent operates autonomously but works with the others to solve complex problems that a single agent might struggle with. Typical applications could include processing expense reimbursements or optimizing production processes.
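
To contrast with the single-agent case, the sketch below strings together two illustrative agents that cooperate on an expense-reimbursement request: one checks policy, another issues the approval decision, and a simple coordinator passes the request between them. The agents, policy threshold, and message format are all invented for illustration.

```python
class PolicyAgent:
    """Checks an expense against a (hypothetical) reimbursement policy."""
    LIMIT = 500.00

    def review(self, expense: dict) -> dict:
        expense["within_policy"] = expense["amount"] <= self.LIMIT
        return expense

class ApprovalAgent:
    """Decides on reimbursement using the policy agent's findings."""
    def decide(self, expense: dict) -> str:
        if expense["within_policy"]:
            return f"Approved: reimburse ${expense['amount']:.2f}"
        return "Rejected: requires manager review"

def coordinator(expense: dict) -> str:
    """Routes the request through each agent in turn."""
    reviewed = PolicyAgent().review(expense)
    return ApprovalAgent().decide(reviewed)

print(coordinator({"employee": "j.doe", "amount": 240.00}))
print(coordinator({"employee": "j.doe", "amount": 820.00}))
```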

Security Controls for AI Developers

It is essential that AI developers implement security controls that mitigate risk through secure coding practices and regular risk assessments. This includes the best practices outlined in NIST SP 800-218, the Secure Software Development Framework (SSDF), which stresses core practices for developing and deploying secure code.
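
As one small, hedged example of the kind of secure coding practice SP 800-218 encourages (the framework describes practices, not specific code), the sketch below validates and bounds user input before it reaches a model rather than passing it through unchecked. The size limit and deny-list are illustrative assumptions, not prescribed values.

```python
MAX_PROMPT_CHARS = 4_000
BLOCKED_MARKERS = ("<script", "\x00")  # illustrative deny-list, not exhaustive

def sanitize_prompt(raw: str) -> str:
    """Reject or trim untrusted input before it reaches the model."""
    if not isinstance(raw, str) or not raw.strip():
        raise ValueError("Prompt must be a non-empty string.")
    lowered = raw.lower()
    if any(marker in lowered for marker in BLOCKED_MARKERS):
        raise ValueError("Prompt contains disallowed content.")
    # Bound the input size so oversized requests cannot exhaust resources.
    return raw[:MAX_PROMPT_CHARS]
```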

SP 800-53 Security and Privacy Controls

NIST proposes building the AI overlays on SP 800-53, Security and Privacy Controls for Information Systems and Organizations. This standard provides a comprehensive catalog of security and privacy controls for information systems. Initially designed for U.S. federal agencies, it has since been adapted for broader use across various sectors. NIST cites this organizational familiarity as justification for adopting the standard for securing AI.

Addressing Information Security Concerns

AI is transforming the business landscape by enhancing efficiency, productivity, and decision-making. However, the use of AI raises concerns about the security and ethical handling of sensitive data. AI systems often connect with various data sources, APIs, and devices, creating more opportunities for cybercriminals to exploit vulnerabilities. Additionally, AI can circumvent established security measures, making policies harder to enforce.

AI adds a new dimension to the already growing cybersecurity responsibilities of small and medium businesses, but given the rapid rate of adoption across most sectors, these security concerns must be addressed. Doing so can be a challenge given the complexity of the requirements and the limited supply of personnel qualified to address them.

CVG Strategy Information Security Management System Consultants

CVG Strategy can assist your organization in meeting the challenges of the CMMC final rule. We are dedicated to helping small businesses navigate federal regulations and contract requirements for Quality Management, Cybersecurity, Export Compliance, and Test and Evaluation. We can help you meet your information security management system goals. CVG Strategy QMS experts can provide the training required to understand and engage in an ISMS and make it meet your desired objectives.

Identify CUI Areas with CVG Strategy Signs

CVG Strategy provides signs to identify areas containing CUI and export controlled items. These signs should be posted at all facility entrances where products are being produced or services are being performed that are under the control of the U.S. Department of State Directorate of Defense Trade Controls (DDTC) and are subject to the International Traffic in Arms Regulations per title 22, Code of Federal Regulations (CFR), Parts 120-130.

Kevin Gholston
