When reviewing code, you often uncover problematic patterns or weaknesses. Unfortunately, discovering something concerning doesn't automatically mean you have found an exploitable vulnerability. It merely indicates that something isn't right. This scenario often leads to a situation I've summarized as:
"Two minutes to find the bug, two days to prove exploitability."
This raises a critical question: should you always strive to prove exploitability? If you work on an internal application security team, especially one where your AppSec ratio (the number of application security engineers per software engineer) is very low, the answer isn't always straightforward.
Does it always need to be this complicated?
Consider this: Should you really spend two days of valuable AppSec engineer time meticulously proving exploitability for each identified issue? Or would it be more effective to spend four hours of a software engineer's time applying a simple patch?
For example, issues like insufficient randomness in session tokens, a directory traversal flaw that is currently blocked only by the application's routing logic, or missing HTML encoding in a redirect page clearly indicate insecure practices. Sometimes it's easier and more practical for AppSec engineers to provide a straightforward patch than to dedicate extensive effort to proving exploitability.
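The insufficient-randomness case is a good illustration of how small such a patch can be: it usually comes down to swapping a predictable RNG for a cryptographically secure one. Here is a minimal Java sketch; the class and method names (`SessionTokens`, `newToken`) are illustrative, not taken from any particular codebase:

```java
import java.security.SecureRandom;
import java.util.Base64;

public final class SessionTokens {
    // java.util.Random (or Math.random()) is predictable: its internal state
    // can be recovered from a handful of outputs, so tokens derived from it
    // are guessable. SecureRandom draws from a CSPRNG, which is the boring,
    // four-hour fix rather than the two-day exploitability exercise.
    private static final SecureRandom RNG = new SecureRandom();

    public static String newToken() {
        byte[] bytes = new byte[32];   // 256 bits of entropy
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(newToken()); // e.g. a 43-character URL-safe token
    }
}
```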
I discussed this topic further in a video: "Just send a PR".
It gets worse when people rely on a Web Application Firewall (WAF) or rate limiting as adequate controls. Statements like, "It's here, but it's not exploitable thanks to our WAF," often lead to complacency.
People also tend to confuse "We don't know how to exploit it" with "It is not exploitable." Knowledge about the exploitability of weaknesses continually evolves. You never truly know if something is permanently non-exploitable—you only know you couldn't exploit it right now, given your current knowledge, skills, and publicly available information within the security community.
Deserialization vulnerabilities are an excellent example of this evolving exploitability. Issues around deserialization in languages like Java and Ruby were long considered theoretical, until practical exploits and gadget-chain techniques emerged and significantly changed their perceived risk and urgency.
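For the Java case, the weakness and the quick patch both fit in a few lines. Below is a minimal sketch, assuming the application reads serialized objects from an untrusted source; the class and method names (`DeserializationPatch`, `unsafeRead`, `saferRead`) and the allow-list contents are illustrative:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class DeserializationPatch {

    // Unsafe: readObject() on untrusted bytes lets the sender pick any
    // Serializable class on the classpath. Publicly documented "gadget
    // chains" are what turned this from a theoretical weakness into a
    // practical one over the years.
    static Object unsafeRead(byte[] untrusted) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            return in.readObject();
        }
    }

    // The quick patch (Java 9+, JEP 290): allow-list the classes you actually
    // expect instead of arguing about which gadget chains are reachable today.
    static Object saferRead(byte[] untrusted) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                    "java.util.*;java.lang.*;!*")); // reject everything not listed
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes;
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new ArrayList<>(List.of("hello")));
            out.flush();
            bytes = bos.toByteArray();
        }
        System.out.println(saferRead(bytes)); // prints [hello]
    }
}
```

The point of the filter is exactly the argument above: you don't need to prove which gadget chain is exploitable in your dependency tree today, because the patch removes the whole class of future discoveries at once.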
Obviously, if your job is to sell vulnerabilities, exploitability is crucial. But what if your primary role is to improve application security and make applications more resilient?
Even in the world of bug bounties, researchers often sit on chains of critical weaknesses because they're missing a single piece, such as an open redirect, to make the entire chain exploitable. Is this obsessive pursuit of exploitability genuinely beneficial, or is it counterproductive? Should bug bounty programs offer a "free part in the chain", treating an open redirect or CSP bypass as a freebie when it completes a chain?
Perhaps it's time to rethink our approach. Maybe not every weakness needs a full proof-of-exploit. Perhaps investing resources into quick remediation is more pragmatic, cost-effective, and better aligned with real-world risk management.