Adam Groenhout

AI Security Policy: Exploring Threats to Workforce AI Tools and Building Policy


Intro

The use of artificial intelligence (AI) tools has become increasingly prevalent in the workplace. While these tools offer numerous benefits, they also introduce security risks that must be addressed. This post aims to provide a concise overview of the key security considerations associated with AI tool usage and includes an example security policy to outline how these risks should be mitigated.

On Policy

What does good security policy look like? What does it take to make security policy most effective? These are not easy questions to answer, but I will take a quick shot at them.

Good security policy informs and empowers readers to understand what appropriate security in a given context is supposed to look like, and to take action to manifest and maintain that picture. Security policy should illustrate the secure path for readers so that they understand which road(s) to follow and how to avoid deviation along the way.

Here are some attributes which should inform effective security policy. Note: these relate to policy content and not policy management and governance.

While the focus here is policy, the lines between this and other types of issuances (e.g. guidelines, standards, procedures) are blurred. For example, within this policy, there are guidelines that illuminate and reinforce more formal policy statements. Procedural elements are touched upon, while standards are intentionally left out.

On Threats

Many threats and risks apply to the use and misuse of AI tools. Policy must be informed by and crafted to broadly address these real-world concerns. Here is a simplified short list of examples.

These risks can materialize in many different ways. It’s important to understand what kinds of relevant threat actors exist, what conditions and weaknesses allow and enable their actions, and how those vulnerabilities are exploited.

AI Tools Attack Tree

Let’s take a close look at risks to data confidentiality, which is, after all, the primary focus of our example policy. We will examine this using an attack tree rooted in the compromise of data confidentiality through employee use of public AI tools; the branches cover the various ways in which data may be compromised in this context.

The two main branches are insider threats, originating from employee actions, and external threats, stemming from attacks on AI providers or vulnerabilities in AI systems themselves. Insider threats are further divided into unintentional actions like prompt engineering and output sharing errors, and intentional malicious actions such as data exfiltration and account exploitation. External threats consist of attacks on the AI provider's infrastructure and vulnerabilities inherent in the AI systems that could be exploited to expose data.

%%{init: {'theme': 'neutral'}}%%
graph LR
    A[Data Confidentiality Compromise] --> B{Employee Uses Public AI Tool};
    B --> E{Insider Threats};
    B --> F{External Threats};
    
%%{init: {'theme': 'neutral'}}%%
graph LR
    E{Insider Threats} --> E1[Unintentional Insider Actions];
    E1 --> E1a[Prompt Engineering Errors];
    E1 --> E1b[Output Sharing Errors];
    E1 --> E1c[Weak Account Security Practices];
    E1a --> E1aa[Lack of Data Handling Awareness];
    E1a --> E1ab[Insufficient AI Tool Training];
    E1a --> E1ac[Complex/Ambiguous Prompts];
    E1a --> E1ad[Oversharing in Prompts];
    E1ad --> E1ada[Misunderstanding Data Policies];
    E1ad --> E1adb[Unnecessary Context];
    E1ad --> E1adc[Convenience over Security];
    E1ad --> E1add[Pasting Full Documents/Emails];
    E1b --> E1ba[Shareable Link Creation];
    E1b --> E1bb[Insecure Link Settings];
    E1b --> E1bc[Link Mismanagement];
    E1ba --> E1baa[Accidental Link Generation];
    E1ba --> E1bab[Default Public Links];
    E1bb --> E1bba[Incorrect Permissions];
    E1bb --> E1bbb[No Password Protection];
    E1bb --> E1bbc[Policy Bypass via Links];
    E1bc --> E1bca[Accidental Broad Sharing];
    E1bc --> E1bcb[Prolonged Link Activity];
    E1bc --> E1bcc[Public Platform Sharing];
    E1c --> E1ca[Weak Passwords];
    E1c --> E1cb[Credential Reuse];
    E1c --> E1cc[Susceptible to Phishing];
    
%%{init: {'theme': 'neutral'}}%%
graph LR
    E{Insider Threats} --> E2[Intentional Malicious Insider Actions];
    E2 --> E2a[Direct Data Exfiltration via AI];
    E2 --> E2b[Intentional Risky Practices];
    E2 --> E2c[Account Exploitation - Intentional];
    E2 --> E2d[Provider-Side Abuse - Intentional];
    E2a --> E2aa[Login from Unmanaged Device];
    E2a --> E2ab[Copy-Paste to Personal Account];
    E2a --> E2ac[Download then Upload];
    E2a --> E2ad[Screenshot/Screen Recording];
    E2a --> E2ae[Share AI Output to Personal Channels];
    E2b --> E2ba[Malicious Prompting for Sensitive Output];
    E2b --> E2bb[Circumventing Security Controls];
    E2b --> E2bc[Ignoring Data Handling Guidelines];
    E2c --> E2ca[Sharing Account Credentials];
    E2c --> E2cb[Unauthorized Access for Data Exposure];
    E2d --> E2da[Exploiting Terms of Service for Data Leakage];
    E2d --> E2db[Intentionally Triggering Provider Data Misuse];
    
%%{init: {'theme': 'neutral'}}%%
graph LR
    F{External Threats} --> F1[External Attacks on AI Provider];
    F --> F2[AI Model Vulnerabilities];
    F1 --> F1a[AI Provider Breach];
    F1 --> F1b[Unauthorized Access to Provider Logs];
    F1b --> F1ba[Provider System Vulnerability];
    F1b --> F1bb[Provider System Compromise];
    F2 --> F2a[Data Leakage from AI Model];
    F2a --> F2aa[Direct Memorization Attacks];
    F2a --> F2ab[General Model Data Extraction Techniques];
    F2a --> F2ac[Bias-Leveraging Data Exploitation];
    

A key takeaway from this attack tree is that data confidentiality, when public AI tools are in use, depends on a complex interplay of human behavior, system security, and inherent AI model risks. It underscores the need for a multi-faceted approach to risk mitigation.

To effectively mitigate the security risks associated with employee AI tool use, organizations need layered defenses. Technical controls include data loss prevention (DLP) tools to detect and block the upload and subsequent exfiltration of sensitive data. Strong authentication mechanisms like multi-factor authentication (MFA) help prevent account compromise and data exposure. Vendor risk management and due diligence are essential when selecting AI providers. Business continuity plans must address potential AI tool outages. Finally, human review of critical AI-generated content is important to catch errors and potential legal issues.
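To make the DLP idea concrete, here is a minimal sketch, in Python, of a pre-submission check that scans prompt text for a few illustrative patterns before it is sent to a public AI tool. The pattern names and regular expressions are assumptions chosen for illustration only; they do not reference any particular DLP product or the specific controls required by this policy.

import re

# Minimal sketch of a DLP-style pre-submission check. Real DLP products use far
# richer detection (classifiers, document fingerprinting, exact-match dictionaries);
# the patterns below are hypothetical examples only.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def allow_submission(text: str) -> bool:
    """Block submission to a public AI tool if any sensitive pattern matches."""
    findings = screen_prompt(text)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True

if __name__ == "__main__":
    allow_submission("Summarize this contract marked CONFIDENTIAL for me.")      # blocked
    allow_submission("Draft a friendly reminder email about the team offsite.")  # allowed

In practice a check like this would sit in a browser extension, proxy, or gateway in front of the AI tool, but that placement is an implementation choice outside the scope of the policy itself.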

Building upon these technical and procedural controls, a cornerstone of the mitigation strategy is employee-facing policy. This policy serves as the foundation for shaping employee behavior and must be actively communicated and enforced. In effect, well-informed employees, operating under clear policy, are the first line of defense against threats.

Example Policy

This example policy is intentionally generic. A similar policy implemented within an organization should be tailored to that organization to be maximally relevant and effective. For example, if there are approved and sanctioned AI tools, employees should be directed to use them. Organization-specific information classifications should be referenced. Any AI governance bodies should be called out with descriptions of their function. Including organization-specific aspects like these makes the policy more engaging and applicable.


Workforce AI Security Policy

Purpose

Artificial intelligence (AI) tools, including AI agents, offer significant benefits for enhancing productivity and innovation. However, AI use introduces risks that require careful management. This policy covers the secure and responsible use of AI tools to protect against data breaches, operational disruptions, compliance failures, and legal liabilities. We are committed to protecting employee and customer data, and this policy outlines the key security measures that support that commitment.

Scope

This policy applies to all workforce members (employees, contractors, consultants, and other workers) using AI tools, including AI agents. Workforce AI tools are external, third-party AI applications, services, and autonomous AI agents (such as public large language models (LLMs), generative AI, AI-powered productivity tools, and intelligent assistants capable of autonomous tasks) used to enhance productivity or for work-related tasks.

This policy does NOT apply to:

This AI Security Policy is part of our broader security framework, which includes policies such as the Acceptable Use Policy and the Data Security Policy, along with relevant security controls. Familiarize yourself with these policies for a complete understanding of security expectations and requirements.

Contact the security team with any policy questions or concerns.

Key Risks

The following are key risks associated with AI tools addressed by this policy.

Data Security and Privacy Risks

Operational Risks

Legal and Compliance Risks

Protecting company information, operational integrity, legal standing, and regulatory compliance is crucial. This policy guides employees to use AI responsibly, balancing benefits with security, compliance, and operations.

AI Security Guidelines

Do the following to ensure security and use AI responsibly:

For AI Agents:

Avoid the following to protect company interests and ensure responsible AI usage:

Guidance for Determining Data Sensitivity

To use AI tools safely, carefully consider data sensitivity. A "yes" answer to any of the following questions indicates sensitive data that must not be used with public AI tools:

When unsure, assume data is sensitive and avoid its use in public AI tools.
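As a small illustration of that default, the sketch below encodes the decision rule generically in Python: any affirmative or uncertain answer to the checklist, or a checklist that was never completed, means the data is treated as sensitive. The checklist questions themselves are deliberately not encoded here, since they are defined by the policy above and are organization-specific.

from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    UNSURE = "unsure"

def treat_as_sensitive(answers: list[Answer]) -> bool:
    """Apply the policy's default: any 'yes' or 'unsure' answer, or an empty
    checklist, means the data must be kept out of public AI tools."""
    if not answers:
        return True  # no assessment performed, so assume sensitive
    return any(a in (Answer.YES, Answer.UNSURE) for a in answers)

# One unresolved question is enough to keep the data out of public AI tools.
print(treat_as_sensitive([Answer.NO, Answer.UNSURE, Answer.NO]))  # True
print(treat_as_sensitive([Answer.NO, Answer.NO]))                 # False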

Conclusion

This policy promotes safe and responsible AI tool use, balancing risks and benefits. AI technology and the associated security risks are evolving quickly, so this policy must be reviewed and updated regularly. By adhering to this policy, every employee will help protect company information, ensure operational continuity, and maintain legal compliance in the age of AI and increasingly autonomous systems.

Contact the security team for policy questions, concerns, feedback, or to report suspected AI-related security incidents. Your vigilance and proactive communication are critical for secure and productive AI use.


Wrapping Up

By implementing and adhering to a good AI security policy, organizations can effectively manage the security risks associated with their workforce using AI tools. As AI technology continues to advance and proliferate, it’s crucial to adapt security policy to keep pace. By staying informed and proactive, organizations can harness the power of AI while safeguarding their interests.