Adam Groenhout

AI and Threat Modeling: Augmentation, Not Automation (Yet)



Intro

Threat modeling is essential for security, but who feels they have the space and time for it, much less to do it well? Security and development teams are stretched thin, and thorough analysis often falls by the wayside. Generative AI offers a tempting solution - automated, fast threat modeling. Can it truly deliver, or is it just another tool creating more noise? This post explores the reality of AI in threat modeling today, what its limitations are, and how to navigate them.

Before diving in, let's briefly define threat modeling for those who are unfamiliar and to level-set generally. Threat modeling is typically a structured process for understanding a given system, identifying applicable security threats and vulnerabilities, determining their likelihood and potential impact, and identifying ways to mitigate them. The primary output is a set of actions to take to improve the security posture of the target system. For additional scoping, let’s use the Threat Modeling Manifesto. The central Threat Modeling Capability here is the “Tool Assisted Process” within the broader Creating Threat Models process area. AI can aid in other process areas to be sure, but our focus is on core threat modeling activities.
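To make that definition concrete, here is a minimal sketch of the artifacts a threat model produces. The fields and the simple likelihood-times-impact scoring are illustrative assumptions for this post, not a prescribed methodology or standard schema.

```python
# Illustrative sketch of threat model artifacts. The fields and the
# simple likelihood x impact scoring are assumptions for clarity,
# not a prescribed methodology.
from dataclasses import dataclass, field


@dataclass
class Threat:
    component: str          # part of the system under analysis
    description: str        # what could go wrong
    likelihood: int         # 1 (rare) .. 5 (near certain)
    impact: int             # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def risk(self) -> int:
        """Simple score used to prioritize remediation."""
        return self.likelihood * self.impact


threats = [
    Threat("login API", "credential stuffing against /login", 4, 3,
           ["rate limiting", "MFA"]),
    Threat("object storage", "public bucket exposes PII", 2, 5,
           ["block public access", "bucket policy audit"]),
]

# The primary output: a prioritized set of actions to improve posture.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"[risk {t.risk}] {t.component}: {t.description} -> {t.mitigations}")
```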

The Promise: AI as the Tireless Assistant

AI offers compelling advantages in threat modeling, acting as a powerful assistant by rapidly processing large amounts of information. While reliable end-to-end automation remains an alluring prospect for the future, AI can perform specific tactical actions now that significantly augment the threat modeling process. Effective use of these capabilities hinges on understanding their limitations. It also requires selecting appropriate use cases, crafting precise prompts, prioritizing high-quality input, and, critically, maintaining rigorous human oversight with a focus on augmenting rather than replacing human judgment.

Key Benefits and Tactical Applications of AI in Threat Modeling:

AI can assist with various parts of the threat modeling process, providing several key benefits. For any application, however, the quality of the input provided to the AI is crucial: clear, detailed, and accurate descriptions yield better results. Similarly, the specific AI model and tool used, and the way requests are designed (i.e., prompt engineering), significantly influence the quality and relevance of the response.
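As a concrete illustration of how input quality and prompt structure shape the response, here is a minimal sketch using the OpenAI Python client. The model name, the architecture details, and the STRIDE framing are assumptions chosen for the example; any comparably capable model or tool could stand in.

```python
# Illustrative sketch: a structured threat-modeling prompt.
# The model name and the system details are placeholder assumptions;
# adapt both to your own environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_DESCRIPTION = """
Architecture: React SPA -> API gateway -> Python (FastAPI) services -> PostgreSQL.
Auth: OIDC via corporate IdP; service-to-service calls use mTLS.
Data: customer PII (names, emails); no payment data.
Exposure: internet-facing SPA and API; database in a private subnet.
"""

prompt = f"""You are assisting with a threat model.

System description:
{SYSTEM_DESCRIPTION}

Task: Using STRIDE, list the most significant threats for each component.
For each threat, give: affected component, STRIDE category, attack
scenario, and a candidate mitigation. Flag any assumption you make
about the system that is not stated above.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Note how the prompt supplies architecture, data sensitivity, and exposure up front and asks the model to surface its own assumptions - the detail in that description does more for output quality than any clever phrasing.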

Attempting "End-to-End" Generation (with Major Caveats):

While the tactical uses are practical today, attempting to generate a complete threat model report solely from a high-level system description highlights AI's current limitations most starkly. The resulting output will likely be quite generic, based heavily on common patterns associated with the given technologies. It will almost certainly miss critical context-specific threats, inaccurately assess risk (especially business impact), and propose generic, potentially unsuitable mitigations. STRIDE GPT is a good example of a tool meant to produce complete threat modeling artifacts; it does this work as well as any other I’ve observed, although the limitations discussed here very much apply. This one-shot approach does not reliably produce a complete threat model, but it can serve as a starting point to build on - provided it receives comprehensive review, validation, and augmentation by human experts. Such an initial AI-generated model cannot replace a thorough human-led process, but it might offer basic awareness in resource-constrained situations where no threat modeling would otherwise occur, so long as the risks of its inherent limitations are acknowledged.

The Reality Check: Where AI Falls Short

Several critical, systemic limitations stand in the way of AI working independently, reliably, and without close human control.

The Shift: The Evolving Role of the Human Expert

AI isn't replacing people who threat model; it's changing their role. Instead of handling the entire process manually, from initial documentation to final analysis, the emphasis moves toward partnership - guiding AI, validating its outputs, and providing the critical context and judgment AI lacks.

Threat modelers are now responsible for curating and validating AI outputs, which means critically evaluating the information and filtering out noise and inaccuracies. They must inject context by providing the business, operational, and technical understanding that AI systems lack, assess real-world impacts, and prioritize threats based on that nuanced understanding. They should focus on novelty by brainstorming creative attack scenarios and identifying unique misuse cases that AI might overlook. Ultimately, as was always the case, they are responsible for the final judgment: weighing trade-offs, making decisions about risk acceptance, and tailoring appropriate mitigations. It is imperative that AI output is not simply accepted as is.

To make all of this a success, threat modelers need new skills and an understanding of generative AI fundamentals: effective prompting, critical analysis of AI suggestions and overall output, and the ability to seamlessly blend AI-driven insights with human expertise.
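As a hedged illustration of that curation step, the sketch below treats every AI-suggested threat as a draft that a named human reviewer must explicitly disposition before it enters the final model. The statuses and field names are assumptions for the example, not a standard schema.

```python
# Illustrative human-in-the-loop gate for AI-suggested threats.
# Statuses and field names are assumptions, not a standard schema.
from dataclasses import dataclass
from typing import Optional

VALID_DISPOSITIONS = {"accepted", "rejected", "reworked"}


@dataclass
class AISuggestedThreat:
    description: str
    suggested_mitigation: str
    disposition: Optional[str] = None  # set only by a human reviewer
    reviewer: Optional[str] = None
    context_notes: str = ""            # business/operational context the AI lacked


def finalize(threats: list[AISuggestedThreat]) -> list[AISuggestedThreat]:
    """Only threats a human has explicitly dispositioned reach the final model."""
    unreviewed = [t for t in threats if t.disposition not in VALID_DISPOSITIONS]
    if unreviewed:
        raise ValueError(f"{len(unreviewed)} AI suggestions still need human review")
    return [t for t in threats if t.disposition == "accepted"]
```

The design point is the hard failure: the pipeline refuses to emit a final model while any AI suggestion remains unreviewed, which operationalizes "AI output is not simply accepted as is."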

The Horizon: What Needs to Happen for Full Automation?

For AI to truly automate threat modeling, significant advancements are needed.

Conclusion: Embrace the Assistant, Value the Expert

AI is already proving to be a valuable tool in the threat modeling toolkit. It can accelerate parts of the process, handle some drudgery, find more initial threats, and support prioritization of effort. That said, it is not magic, and it's certainly not (yet) a replacement for human expertise. The most effective approach today is a human-AI partnership. Leverage AI for speed and breadth on common tasks, but rely on human intelligence for depth, context, critical thinking, and the ultimate responsibility for securing systems. The foreseeable future of threat modeling isn't full automation; it's intelligent augmentation, empowering human experts to be more effective than ever before. Start experimenting now. Identify repetitive tasks where AI can assist, learn its limitations through practice, and focus your invaluable human expertise where it truly counts - on context, creativity, and critical judgment.