In 2022, the focus was on talent shortages in cybersecurity. Now, attention has shifted to rapidly developing generative Artificial Intelligence (AI) technology as a way to supplement that workforce. The adoption of Large Language Models (LLMs) has led to the widespread use of AI chatbots and aids designed to assist human users. Over time, these advancements may enable non-cyber professionals to take on cybersecurity roles, provided hiring managers are open to it. More importantly, AI will allow cybersecurity practitioners to concentrate on more advanced attack vectors and detection techniques.
We have previously discussed this in “How Automated AI Code Analysis Can Scale Application Security.” It’s now crucial to step back and reflect on the broader implications, the impact, and the remaining challenges for AI in this field.
SDLC Alignment
The Software Development Lifecycle (SDLC) has always been the foundation for our AppSec program. The diagram above illustrates how AI capabilities align with the SDLC. While integrating AppSec into the early stages of the SDLC is widely accepted, we recommend a balanced approach. If resources are limited, splitting time between automation that catches issues early and manual processes that find issues later may be the right formula. Ultimately, the best approach depends on the available resources and team structure.
Pro Tip: As AI products and features become more prevalent, they start to resemble Swarm Intelligence. We’re interested in how this concept will develop over time.
Design & Plan
Although it’s impossible to find security vulnerabilities when things are just shapes on a whiteboard, this stage is ideal for collaborating with developers to prevent future issues.
AI Threat Modeling:
Product teams are integrating AI with diagram tools, project planning tools, documentation products, source code managers (SCM), and work trackers. These integrations will enable LLMs to automatically build threat models, generate security requirements, and create security test cases. Open source projects to watch: stride-gpt and TaaC-AI.
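To make this concrete, here is a minimal sketch of the DIY version: feed a design document to an OpenAI-compatible chat API and ask for a STRIDE-style threat model. The model name, prompt, and design.md input are illustrative assumptions, not how stride-gpt or TaaC-AI actually work.

```python
# Hypothetical sketch: generate a STRIDE-style threat model from a design doc
# using an OpenAI-compatible chat API. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

design_doc = open("design.md").read()  # e.g., exported from your diagram tool

prompt = (
    "You are an application security engineer. Given this system design, "
    "produce a STRIDE threat model: list each component, the threats per "
    "STRIDE category, and a candidate security requirement for each threat.\n\n"
    + design_doc
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same pattern extends naturally to generating security test cases once the threat model exists.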
Pro Tip: As these products mature, they could help scope penetration tests. While there’s still a missing link for accurate pentest scoping via lines of code (LOC), internal pentest functions can benefit from AI-generated threat models, security requirements, and security test cases.
Build & Test
This phase is where the hypothetical becomes reality. One of the biggest challenges is seamlessly embedding security checks into developers' coding and build workflows, then returning accurate findings with practical remediation.
AI IDE Code & Test Case Generation:
Tools like GitHub’s Copilot have become the norm; as of early 2023, GitHub reported that Copilot was generating an average of 46% of the code in files where it is enabled. While test case development and documentation creation are practical use cases, code creation has faced issues with accuracy and validation. Improvements in LLM accuracy and human validation processes are expected over the next few years.
Note: This isn’t exclusively a security risk but an operational one, highlighting the need for improvement.
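Test case generation is the lower-risk place to start. Below is a minimal sketch, again assuming an OpenAI-compatible API, of asking an LLM to draft pytest cases for a small function; the function under test and model name are illustrative.

```python
# Hypothetical sketch: ask an LLM to draft pytest test cases for existing code.
# The function under test and the model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

source = '''
def normalize_username(name: str) -> str:
    return name.strip().lower()
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write pytest test cases for this function, including "
                   "edge cases (empty string, unicode, surrounding spaces). "
                   "Return only the test code.\n\n" + source,
    }],
)
# Per the note above, generated tests still need human validation.
print(response.choices[0].message.content)
```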
MR/PR Analysis:
LLMs are well-suited for identifying risky commits. Products like CodeRabbit offer this for free, but building your own with specific prompts can also provide valuable insights.
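For the build-your-own route, a sketch might look like the following: pull the raw diff for a pull request from the GitHub API, then ask an LLM to flag risky hunks. The repo, PR number, and prompt are assumptions, and this is not CodeRabbit’s implementation.

```python
# Hypothetical sketch: flag risky changes in a GitHub pull request.
# Repo/PR values and the risk prompt are assumptions; needs a GitHub token.
import os
import requests
from openai import OpenAI

OWNER, REPO, PR_NUMBER = "acme", "webapp", 123  # illustrative values

diff = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github.diff",  # return the raw diff
    },
    timeout=30,
).text

client = OpenAI()
review = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Review this diff as an AppSec engineer. Flag changes that "
                   "touch auth, crypto, input handling, or secrets, and explain "
                   "why each flagged hunk is risky.\n\n" + diff,
    }],
)
print(review.choices[0].message.content)
```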
AI-Driven Security Finding Triage:
AI can filter down large pools of security data into smaller, actionable sets. This helps AppSec Engineers confirm the validity of findings.
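Much of this filtering doesn’t even require an LLM. Here is a minimal sketch that deduplicates a scanner’s JSON export and surfaces only the highest-severity instances for engineer confirmation; the field names (severity, rule_id, path) are assumptions about the scanner’s output format.

```python
# Hypothetical sketch: shrink a large SAST export into an actionable set
# before human review. Field names are assumptions about the scanner's JSON.
import json
from collections import defaultdict

findings = json.load(open("sast_findings.json"))

# Deduplicate by rule and file, keeping the highest-severity instance of each.
SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}
buckets = defaultdict(list)
for f in findings:
    buckets[(f["rule_id"], f["path"])].append(f)

actionable = [
    max(group, key=lambda f: SEVERITY_RANK.get(f["severity"], 0))
    for group in buckets.values()
]

# Surface only high-and-above for engineer confirmation; the remainder can
# be batched for LLM-assisted review later.
for f in sorted(actionable, key=lambda f: -SEVERITY_RANK.get(f["severity"], 0)):
    if SEVERITY_RANK.get(f["severity"], 0) >= 2:
        print(f'{f["severity"].upper():8} {f["rule_id"]} {f["path"]}')
```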
AI/LLM Remediation Guidance:
AI can effectively generate vulnerability remediation guidance and write-ups, a task often seen as undesirable but essential for AppSec Engineers.
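A minimal sketch of that workflow, with an illustrative finding and model name:

```python
# Hypothetical sketch: draft a remediation write-up for a confirmed finding.
# The finding dict and model name are illustrative assumptions.
from openai import OpenAI

finding = {
    "rule": "SQL Injection",
    "file": "app/db/users.py",
    "snippet": 'cursor.execute("SELECT * FROM users WHERE id = " + user_id)',
}

client = OpenAI()
guidance = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Write developer-facing remediation guidance for this "
                   f"finding, with a corrected code example:\n{finding}",
    }],
)
print(guidance.choices[0].message.content)  # review before sharing with teams
```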
Pro Tip: AI-generated code still requires SAST for independent checks, just as if it were created by a human.
Release & Operate
There’s a significant market opportunity in knowing where vulnerable code is deployed and assessing its risk. Projects like CrashAppSec Chalk are addressing this, but there’s room for more tools.
AI Control Management:
Layered cyber defense is crucial. AI, specifically ML, can help manage IAM permissions, removing excessive and unused permissions and addressing uncommon usage of common permissions.
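As a starting point, AWS already exposes the last-accessed data such a system would learn from. The sketch below uses boto3 to list services a role hasn’t touched in 90 days; the role ARN and threshold are assumptions, and flagged permissions should feed a review queue rather than automatic removal.

```python
# Hypothetical sketch: find IAM services a role hasn't used, as input to an
# ML-driven permission right-sizing workflow. Role ARN and the 90-day
# threshold are assumptions.
import time
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
role_arn = "arn:aws:iam::123456789012:role/example-role"  # illustrative

# Ask AWS to compile last-accessed data, then poll until the job finishes.
job_id = iam.generate_service_last_accessed_details(Arn=role_arn)["JobId"]
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

cutoff = datetime.now(timezone.utc) - timedelta(days=90)
for svc in details["ServicesLastAccessed"]:
    last = svc.get("LastAuthenticated")  # absent if the service was never used
    if last is None or last < cutoff:
        print(f'candidate to remove: {svc["ServiceNamespace"]}')
```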
Pro Tip: Similar outcomes are expected with WAF/Firewall Rule management and optimizations.
AI Data Analysis:
AI advancements are enhancing automated detection capabilities for sensitive data across enterprises, leading to better and faster categorization of data types.
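A first-pass categorizer can be as simple as pattern matching, with an LLM reserved for ambiguous samples. The patterns and categories below are illustrative assumptions, not a complete PII taxonomy.

```python
# Hypothetical sketch: a first-pass sensitive-data categorizer. Patterns and
# category names are illustrative assumptions, not a complete PII taxonomy.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def categorize(text: str) -> list[str]:
    """Return the sensitive-data categories detected in a text sample."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(categorize("Contact: jane@example.com, SSN 123-45-6789"))
# -> ['email', 'ssn']; uncategorized samples could be routed to an LLM next.
```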
Monitor & Respond
Security Operations (SecOps) programs play a vital role in identifying vulnerabilities and responding to threats. AI can assist with summarizing investigation details and creating new SOAR workflows.
AI-ify-ing the SOC Workflows:
AI can help with alert triage and risk-based alerting, getting the right alerts in front of humans and building logic for AI Threat Hunters.
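Risk-based alerting can start as a simple scoring function before any LLM is involved. The weights and alert fields below are assumptions to be tuned against your SOC’s historical triage outcomes.

```python
# Hypothetical sketch: risk-based alert scoring so only the highest-risk
# alerts reach a human queue. Weights and fields are tuning assumptions.
ASSET_WEIGHT = {"prod": 3, "staging": 2, "dev": 1}

def risk_score(alert: dict) -> int:
    score = alert["base_severity"]              # from the detection rule
    score *= ASSET_WEIGHT.get(alert["env"], 1)  # where the alert fired
    if alert.get("user_is_admin"):
        score += 5                              # privileged identity involved
    return score

alerts = [
    {"id": "a1", "base_severity": 4, "env": "prod", "user_is_admin": True},
    {"id": "a2", "base_severity": 4, "env": "dev"},
]
for a in sorted(alerts, key=risk_score, reverse=True):
    print(a["id"], risk_score(a))  # a1 outranks a2 despite equal severity
```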
Pro Tip: Non-AI options for investigation note-taking include making documentation part of team culture, hiring technical writers, or adopting simple documentation templates.