The rapid proliferation of Artificial Intelligence (AI) promises significant value for industry, consumers, and broader society, but as with many technologies, the new risks that come with these advancements must be managed to realize AI's full potential. The NIST AI Risk Management Framework (AI RMF) was developed to manage the benefits and risks to individuals, organizations, and society associated with AI, and it covers a wide range of risks, from safety to a lack of transparency and accountability. For those of us at NIST working in cybersecurity, privacy, and AI, a key concern is how the broad adoption of AI may affect existing cybersecurity and privacy risks and risk management approaches, and how those approaches relate to each other at the enterprise level.
With respect to privacy, AI creates new re-identification risks, not only because of its analytic power across disparate datasets, but also because of potential data leakage from model training. AI's predictive capabilities could also reveal greater insights about people and amplify behavioral tracking and surveillance. On the positive side, the analytic scope and broad reach of AI could be used to power personal privacy assistants that help people better manage their privacy preferences across their online activities.
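To make the model-training leakage risk concrete, here is a minimal sketch of a loss-threshold membership-inference test, one common way to probe whether a trained model reveals who was in its training data. Everything in it (the synthetic records, the model, and the threshold) is an illustrative assumption rather than part of any NIST guidance:

```python
# Minimal loss-threshold membership-inference sketch: if records the model
# trained on have systematically lower loss than held-out records, an
# attacker can guess training-set membership better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "people" records: 20 features plus a binary label (assumed data).
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, y_train = X[:1000], y[:1000]   # members (in the training set)
X_out, y_out = X[1000:], y[1000:]       # non-members (held out)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each record under the trained model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_in = per_example_loss(model, X_train, y_train)
loss_out = per_example_loss(model, X_out, y_out)

# Flag a record as a suspected training member if its loss is below a
# threshold (here, simply the median of all observed losses).
threshold = np.median(np.concatenate([loss_in, loss_out]))
tpr = np.mean(loss_in < threshold)   # fraction of members correctly flagged
fpr = np.mean(loss_out < threshold)  # fraction of non-members wrongly flagged
print(f"members flagged: {tpr:.2f}, non-members flagged: {fpr:.2f}")
```

If members are flagged noticeably more often than non-members, the model is leaking membership information about the people in its training data.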
AI will create new opportunities and challenges for cybersecurity, such as AI-enabled cyber threats, the use of AI to improve cybersecurity tools and capabilities, and the need to protect AI systems from traditional and emerging cybersecurity threats. Using AI to improve cybersecurity threat hunting, for example, could increase detection rates but might also increase the number of false positives. As a result, cybersecurity practitioners and other personnel may need different cybersecurity skills, which highlights the importance of ensuring that any solution is explainable and interpretable. Furthermore, as business units across an organization incorporate AI technology into their solutions, there will be a need to better understand data dependencies across the organization. Those responsible for managing cybersecurity risks may need to reevaluate the relative importance of data assets, update data asset inventories, and account for new threats and risks. Finally, AI-enabled threats, such as an adversary's use of an AI voice generator, might require an organization to update its yearly anti-phishing training.
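As a rough illustration of that detection versus false-positive tradeoff, the toy sketch below sweeps an alert threshold over synthetic anomaly scores. The score distributions are made-up assumptions for illustration, not output from any real detection tool:

```python
# Toy detection-rate vs. false-positive tradeoff for AI-assisted threat
# hunting: lowering the alert threshold catches more attacks but also
# generates far more false alerts for analysts to triage.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical anomaly scores: benign events cluster low, malicious high.
benign = rng.normal(loc=0.3, scale=0.15, size=100_000)
malicious = rng.normal(loc=0.7, scale=0.15, size=100)

for threshold in (0.8, 0.6, 0.4):
    detection_rate = np.mean(malicious > threshold)
    false_positive_rate = np.mean(benign > threshold)
    false_alerts = false_positive_rate * len(benign)
    print(f"threshold={threshold:.1f}: detect {detection_rate:.0%} of attacks, "
          f"{false_alerts:,.0f} false alerts")
```

Even a small false-positive rate translates into thousands of alerts at enterprise scale, one reason explainability and interpretability matter for analyst triage.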
All of this highlights the critical need for standards, guidelines, tools, and practices to improve the management of cybersecurity and privacy in the age of AI, ensure the responsible adoption of AI for cybersecurity and privacy protection purposes, and identify important actions organizations must take to adapt their defensive response to AI-enabled offensive techniques. To meet this need and ensure a sustained focus on these important topics, NIST is establishing a program for the cybersecurity and privacy of AI and the use of AI for cybersecurity and privacy.
The program will build on existing NIST expertise, research, and publications such as:
- The Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile (NIST SP 800-218A);
- Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2);
- Enabling privacy in the age of AI: Draft Guidelines for Evaluating Differential Privacy Guarantees (NIST SP 800-226), illustrated by the sketch following this list;
- Recent introduction of Security of AI as a Competency Area as part of the NICE Workforce Framework for Cybersecurity (NICE Framework);
- Tools such as Dioptra – a test platform that aims to facilitate evaluations of machine learning algorithms, including security evaluations, under a diverse set of conditions;
- A PETs Testbed, which provides the capability to investigate privacy-enhancing technologies (PETs) and their suitability for specific use cases, including protecting machine learning models from privacy attacks.
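As a small taste of the kind of guarantee that NIST SP 800-226 discusses how to evaluate, the sketch below applies the Laplace mechanism, a textbook differential privacy building block, to a simple counting query. The dataset and epsilon values are illustrative assumptions:

```python
# Laplace mechanism sketch: answer a counting query with calibrated noise
# so that any one person's presence or absence has a bounded effect on
# the published result.
import numpy as np

rng = np.random.default_rng(2)

def dp_count(values, predicate, epsilon):
    """Differentially private count: true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 41, 56, 23, 38, 45, 62, 31, 27]  # assumed example data
for epsilon in (0.1, 1.0, 10.0):
    noisy = dp_count(ages, lambda a: a >= 40, epsilon)
    # Smaller epsilon -> stronger privacy guarantee but noisier answers.
    print(f"epsilon={epsilon}: noisy count of ages >= 40 is {noisy:.1f}")
```

The tradeoff is visible directly: smaller epsilon values give stronger privacy guarantees but noisier answers, the kind of guarantee-versus-utility balance practitioners must weigh when evaluating differential privacy deployments.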
The program aims to understand how advancements in AI may affect cybersecurity and privacy risks, identify needed adaptations for existing frameworks and guidance, and fill gaps in existing resources. The program will work with stakeholders across industry, government, and academia, and will play a leading role in U.S. and international efforts to secure the AI ecosystem. The program will coordinate with other NIST programs, federal agencies, and commercial entities as needed to ensure a holistic approach to addressing AI-related cybersecurity and privacy challenges and opportunities.
Working through the National Cybersecurity Center of Excellence (NCCoE), this program is kicking off a project to develop a community profile that adapts existing frameworks, starting with the Cybersecurity Framework, and to understand the impacts on other frameworks such as the Privacy Framework, the AI Risk Management Framework, and the NICE Framework. This effort builds on community profiles developed for other use cases and technologies. The AI community profile will start with a focus on three areas for managing AI-related cybersecurity and privacy risks:
- Securing organizations' use of AI, including AI systems, components, and machine learning infrastructures, and minimizing data leakage.
- Defending against AI-enabled attacks.
- Using AI to support organizations' cyber defense activities and to improve privacy protections.
We look forward to dialogue and collaboration on how to advance the program and help the cybersecurity and privacy community as it embraces the ubiquitous development and use of AI. Stay tuned and visit the new program website for more information. Please send any feedback or questions to AICyber@nist.gov.