How AI-Generated Code Is Changing Secure Code Review

Published: 24 Feb 2025

I’ve been thinking a lot about AI-generated code lately—and the impact it has and will continue to have on security code reviews. With AI, developers are pushing code faster than ever. But what does that mean for application security teams trying to keep everything secure?

The (Mostly) Good News

AI does a pretty decent job of churning out code without the typical “low-hanging fruit” vulnerabilities. Basic SQL injection? Simple stuff. Vanilla cross-site scripting? That’s well documented all over the internet. Of course AI has that covered—these vulnerabilities appear in countless posts, Q&As, and tutorials, making them easy for AI to learn and detect.
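To make the "low-hanging fruit" concrete, here's a minimal sketch (using an in-memory SQLite table of my own invention) of the classic SQL injection pattern next to the parameterized version AI assistants now reliably produce:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Classic SQL injection: user input concatenated straight into the query.
    # This is exactly the well-documented bug AI models have seen thousands of times.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input purely as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"
# The unsafe version matches every row; the safe one matches none.
```

Ask almost any current model to review the first function and it will flag it instantly, for the reason given above: this pattern saturates its training data.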

But here’s the catch: AI can only look for what it knows to look for. If the training data doesn’t include that sneaky, sophisticated vulnerability, AI isn’t going to magically spot it. And let’s face it: there’s a mountain of discussion around the “easy” bugs, but detailed analyses of complex or rare exploits are far less common in public resources.

Why AI Struggles with Complex Bugs
1. Sparse Training Data

AI "learns" from existing content—articles, code repositories, vulnerability databases. The deeper, lesser-known exploits might not be widely documented, so the AI has never been exposed to them in the first place. That means new or rare vulnerabilities can slip by undetected.

2. Lack of Context

Writing secure software is all about the details. AI-generated code depends heavily on the prompts given and the context it has access to. If your prompt is incomplete, or if crucial security considerations aren’t spelled out, AI might produce code that looks fine but has hidden flaws.
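As an illustration of "looks fine but has hidden flaws", consider a hypothetical prompt like "write a function that checks a password-reset token". Unless the prompt spells out the security requirement, the obvious answer is the first function below, a plain string comparison that short-circuits on the first mismatched byte and can leak information through response timing:

```python
import hmac

def verify_token_naive(supplied: str, stored: str) -> bool:
    # Looks correct, and a prompt that never mentions timing attacks
    # will often get exactly this: == stops at the first differing
    # byte, so comparison time depends on how much of the token matches.
    return supplied == stored

def verify_token_safe(supplied: str, stored: str) -> bool:
    # Constant-time comparison from the standard library.
    return hmac.compare_digest(supplied.encode(), stored.encode())
```

Both functions return identical results for identical inputs; the flaw lives entirely in a detail the prompt never mentioned.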

3. Code Repetition

Developers using AI code generators often end up duplicating code. A telltale sign of AI-generated code is a lack of DRY (Don't Repeat Yourself). That's partly due to how AI models structure responses and partly because they have limited context windows. Repeated code can hide vulnerabilities in multiple places if not carefully reviewed. Two almost identical functions may have very different security properties.
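Here's a made-up example of that last point: two near-identical rendering functions, where the duplicated copy silently lost the escaping on one field and re-introduced cross-site scripting:

```python
import html

def render_comment(author: str, body: str) -> str:
    # Original: both user-controlled fields are escaped before
    # being placed into the HTML response.
    return f"<p><b>{html.escape(author)}</b>: {html.escape(body)}</p>"

def render_reply(author: str, body: str) -> str:
    # Near-duplicate generated later for a different endpoint: the
    # author field lost its escaping, so only this copy is vulnerable.
    return f"<p><b>{author}</b>: {html.escape(body)}</p>"
```

A reviewer who checks one function and skims its twin, assuming they behave the same, misses the bug. That's exactly why duplicated code multiplies review effort instead of halving it.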

4. Developer Detachment

When you’re not writing every line of code yourself, you’re less intimately familiar with your application’s inner workings. AI can churn out pages of code in seconds, but if you only skim through it, you might miss deeper architectural or logic issues. In security, a simple oversight in logic can create a serious vulnerability. Secure code rests on developers having a deep understanding of the code base; if they didn’t write the code, they’re far less likely to fully understand it.
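To show how easy such a logic oversight is to skim past, here's a hypothetical access check (paths and directory names are my own invention): the prefix test reads like a containment check, but it never normalizes the path, so a `..` sequence walks right through it.

```python
import os

BASE_DIR = "/var/app/uploads"

def is_allowed_naive(path: str) -> bool:
    # Skim-reading this, the prefix check looks like "path is inside
    # BASE_DIR", but "/var/app/uploads/../../etc/passwd" passes it.
    return path.startswith(BASE_DIR)

def is_allowed_safe(path: str) -> bool:
    # Normalize first, then compare against the base directory,
    # anchoring on the path separator to avoid prefix tricks
    # like "/var/app/uploads_backup".
    resolved = os.path.realpath(path)
    return resolved == BASE_DIR or resolved.startswith(BASE_DIR + os.sep)
```

A developer who wrote the naive check by hand would probably remember the traversal risk; one who accepted it from an AI suggestion may never have thought about it at all.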

The Ongoing Challenge

It’s likely that low-hanging fruit will become less common thanks to AI-generated code and AI security scanners. That’s good news. But the harder bugs—subtle logic flaws, novel exploits, and context-specific vulnerabilities—are here to stay (and may even multiply).

AI is undeniably reshaping how we approach coding. We’re seeing fewer trivial vulnerabilities surface in AI-generated code, which is a major plus. But it also means we have to be even more vigilant about the subtler (and arguably more dangerous) issues lurking under the hood.

Deepen Your Knowledge of Complex Vulnerabilities

If you’d like to dive further into the intricate side of secure code review—where subtle logic flaws and advanced exploitation techniques come into play—check out my Code Review Training. In this course, I go beyond the basics and teach you how to spot and address the sophisticated bugs that AI tools (and many developers) often overlook.

Final Thoughts

AI can handle the grunt work of scanning for common bugs and writing boilerplate code, freeing security teams to tackle the more intricate threats. However, no AI model can replace the creativity, domain knowledge, and intuition of a seasoned security professional—at least not yet.

So by all means, let AI expedite development and basic security checks. Just remember to roll up your sleeves and dig into the critical details. That’s where you’ll catch the complex bugs that no AI has dreamed of yet—because they don’t exist in any training data. And that’s how we stay a step ahead in an ever-evolving security landscape.

Written by Louis Nyffenegger
Founder and CEO @PentesterLab
