Need for Speed: AI, Security, and Productivity

Published: 03 Feb 2026

Security and privacy have always been about doing the right thing while staying productive. People do not work in air-gapped environments for fun. They do it for security reasons, and that level of control usually costs real time and real money.

We have seen this trade-off before. Most teams did not move to the cloud "because cloud is cool". They moved because it made them faster, and security was not always thrilled ("There is no cloud, it's just someone else's computer"). The same tension showed up with Agile, or, as a security practitioner might call it, "FRagile". It has always been about balancing risk (security and privacy) with reward (productivity).

AI: The Same Problem, New Stakes

With AI, we are back to the same problem: productivity versus security. Things are moving faster than ever. Agents work day and night to improve your productivity. Agents with access to your calendar, root access to your servers, your emails, maybe even your bank accounts. Once again, it is about balancing productivity with security. The people willing to sacrifice the most security often get the biggest boost in productivity, until they don't.

Security Gets the Boost Too

What's new this time is that security is getting the productivity boost too. If you have been paying attention, you saw that Trail of Bits, one of the leading security consultancies worldwide, is a heavy user of Claude Code:

"Nearly all of @trailofbits' 140 employees use Claude Code daily, and most in YOLO mode" (source)

Trail of Bits even released part of their Claude skills: https://github.com/trailofbits/skills. A lot of big names in security are also publishing about AI-assisted research: On the Coming Industrialisation of Exploit Generation with LLMs, Ask your LLM for receipts, and many more. One thing is sure: we never saw this level of enthusiasm from the security community for cloud or Agile.

What Happens If You Don't Use AI?

It raises a question: what happens to security people who are not using AI? In the past, you could avoid cloud or Agile, but this time, not using AI can make you less productive at security work itself. Are you going to be left behind?

Some may say you can run your own model locally. You can use Ollama, even Claude Code with Ollama (docs). But what is the impact of not using the best available model for the job, just because the best model forces you to share data with a third party?

Also, if the developers of the application you are auditing are already sharing their source code with a third party, surely security auditors should be able to do the same; that third party already has the data, after all. But if you are an AppSec consulting firm, do you have to swap models and tooling for every client, based on each client's risk appetite and their developers' usage?

Conclusion

Before, security could sit on the side and watch the world burn. This time it feels different.

For once, the people trying to move fast and break things might be in security.

Written by Louis Nyffenegger
Founder and CEO @PentesterLab