When Code Is Cheap, What Happens to AppSec?

Published: 10 Feb 2026

With Anthropic's Opus 4.5, the Ralph Wiggum Loop, and GastOwn, a few people on the bleeding edge of AI-based software development are going to bed and waking up to fully working applications: not toy demos, but real features wired to a database, an auth flow, and something that actually runs.

Because of that, more and more people are discussing the death of the software developer, the person who writes code. Not because code disappears, but because producing it is getting faster, more automated, and easier to delegate. The idea is that only software engineers will remain: the people who architect software, give direction, and review outcomes. In other words, less code production, more direction and review.

I am not fully sold on the "developers will disappear" narrative. But regardless of where you land, you have to admit that the way we write code is evolving quickly, and AppSec will have to evolve with it.

Questions for AppSec Enthusiasts

This raises a few questions for AppSec enthusiasts:

  • If the people shipping features are not typing most of the code anymore, what's the point of secure coding training? Do we shift more of that effort to "secure reviewing" and specification, the prompts, the constraints, and the tests that guard the behavior?
  • If code is produced faster than humans can read it, what does code review become? Do we move from line-by-line review to reviewing diffs with stronger guardrails, threat models, and automated checks that we actually trust?
  • What is the future of pentesting in that world? If your job is mostly running tools, what happens when the tools run themselves, and the output is already summarized for the engineer? What is the pentesting version of a software developer? What is the pentesting version of a software engineer? Is it tool runner vs investigator?
  • What happens to vulnerability management in dependencies? If AI writes most of the code, AI can probably keep dependencies patched to the latest versions while engineers are sleeping or getting coffee. And then why would you care about reachability? You do not ask yourself if or when you should patch; AI just patches regardless. Breaking changes? Can the same loop handle them too? And if you patch all the time, does that make breaking changes less scary over time, since you never fall behind?
  • How do we scale AppSec? Are building blocks more important than ever? And how do we ensure coding agents always use those building blocks? Should AppSec engineers spend more time tweaking AGENTS.md to include the right security guardrails, or reviewing source code?
  • What is the impact on buy vs build? Should you still buy security products, or are you better off leveraging coding agents to build something closer to your actual need, and accept the maintenance and responsibility that comes with it?
  • What happens to people who cannot use those coding agents, for development and for security reviews? The gap between open source capabilities and closed capabilities seems to be widening at the moment.
  • And the big one: how do we ensure AI-generated code is safe, and stays safe over time? Not just on day one, but after refactors, dependency updates, and "one small change" that reopens the same class of bug.

Not Theoretical

This is not theoretical. I recently experimented with Claude Code skills to review JWT libraries. I found a few interesting issues: a few signature bypasses, a lot of non-constant-time comparisons, and a few libraries supporting the None algorithm (including this vulnerability I reported: GHSA-88q6-jcjg-hvmw).
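To illustrate the two most common bug classes mentioned above, here is a minimal, self-contained sketch of a naive HS256 JWT verifier. It is not code from any of the reviewed libraries; the key and function names are hypothetical, and it only exists to show what a None-algorithm bypass and a non-constant-time comparison look like side by side with their fixes.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical demo key, never hardcode real secrets


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def sign(payload: dict) -> str:
    signing_input = f'{b64url(json.dumps({"alg": "HS256"}).encode())}.{b64url(json.dumps(payload).encode())}'
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


def verify_vulnerable(token: str) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # BUG 1: trusting the attacker-controlled "alg" header; "none" skips verification
    if header.get("alg") == "none":
        return json.loads(b64url_decode(payload_b64))
    expected = hmac.new(SECRET, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    # BUG 2: `==` on a signature is not constant-time, leaking timing information
    if b64url_decode(sig_b64) == expected:
        return json.loads(b64url_decode(payload_b64))
    raise ValueError("bad signature")


def verify_fixed(token: str) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # Fix 1: pin the expected algorithm instead of trusting the token header
    if header.get("alg") != "HS256":
        raise ValueError("unexpected algorithm")
    expected = hmac.new(SECRET, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    # Fix 2: constant-time comparison of the signatures
    if not hmac.compare_digest(b64url_decode(sig_b64), expected):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload_b64))


# A forged token with alg=none and an empty signature: the vulnerable verifier
# accepts it, the fixed one rejects it.
forged = f'{b64url(json.dumps({"alg": "none"}).encode())}.{b64url(json.dumps({"admin": True}).encode())}.'
```

The fix is the same advice secure coding training has given for years: pin the algorithm server-side and use `hmac.compare_digest`. The open question is how you guarantee a coding agent applies it every time.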

Conclusion

We are living in exciting times for AppSec, and even for humanity. Regardless of your opinion, you have to ask yourself these questions.

Written by Louis Nyffenegger
Founder and CEO @PentesterLab