Vulnerability Research

Point: Bruce Schneier / Counterpoint: Marcus Ranum

One of the vulnerabilities exploited by the Morris worm was a buffer overflow in BSD fingerd(8). You argue that searching out vulnerabilities and exposing them is going to help improve the quality of software, but it obviously has not – the last 20 years of software development (don’t call it "engineering," please!) absolutely refutes your position. Not only do we still have buffer overflows, but I think it’s safe to say that not a single category of vulnerability has been definitively eradicated. That’s where proponents of vulnerability "research" make a basic mistake: if you want to improve things, you need to search for cures for categories of problems, not individual instances. In general, the state of vulnerability "research" has remained stuck at "look for one more bug in an important piece of software so I can collect my 15 seconds of fame, a ‘thank you’ note extorted from a vendor, and my cash bounty from the vulnerability market." That’s not "research"; that’s just plain "search" – maybe we could call it "vulnerability mining" or something else more indicative of its intellectual status. But let’s not: real mining is hard, dangerous, important work.
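
To keep the category in front of us, here is a minimal sketch of the fingerd class of flaw: a fixed-size stack buffer filled by a read with no length limit. (The real fingerd used gets(), which has since been removed from the C standard precisely because it can never be used safely; the sketch uses the equally unbounded scanf("%s") so it still compiles. It is an illustration of the bug pattern, not the original BSD code.)

    #include <stdio.h>

    /* Illustration of the fingerd class of flaw; not the original BSD code. */
    int main(void)
    {
        char line[512];              /* fixed-size stack buffer                */

        if (scanf("%s", line) != 1)  /* no length limit: any input longer than */
            return 1;                /* 511 bytes overruns 'line' and corrupts */
                                     /* the stack                              */
        printf("request: %s\n", line);
        return 0;
    }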

If you look at buffer overflows as an example of a category of software flaws, you can see the tremendous progress we’ve made in eradicating them: zero. There are development and runtime tools intended to help detect or mitigate that category of flaw, but I’d estimate that fewer than one in a thousand full-time programmers has ever used one. So what’s the value of "researching" a new buffer overflow in some browser or other? That’s akin to making the momentous discovery that you’ve got ants in your kitchen. Nobody in their right mind worries about every single ant in their kitchen – they search for broadly applicable responses to insect invasion as a category of problem. That’s the right way to do things. But, of course, the vulnerability "researchers" aren’t going to do anything like that because they’re perfectly happy to keep pointing out individual ants as long as they make $2,000 to $10,000 per ant.
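
And the category-level countermeasures aren’t exotic, which is what makes the situation so galling. A bounds-checked read removes the overflow outright, and the kinds of tools I mean (gcc’s -fstack-protector, or a memory-error detector such as AddressSanitizer, to name two) will flag the unbounded version during testing, assuming anyone bothers to turn them on. A minimal sketch of the same reader with the category removed:

    #include <stdio.h>

    /* Same request reader, with the overflow removed as a category:       */
    /* fgets() will never write more than sizeof line bytes. Building the  */
    /* unbounded version with -fstack-protector or AddressSanitizer would  */
    /* have caught the overrun at test time.                               */
    int main(void)
    {
        char line[512];

        if (fgets(line, sizeof line, stdin) == NULL)
            return 1;                /* fail safely on missing input */
        printf("request: %s", line);
        return 0;
    }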

The economics of the vulnerability game don’t include "making software better" as one of the options. They do, however, include "making software more expensive." Whenever a software giant like Microsoft or Oracle has to force-bump its QA process to rush out a patch, it costs a lot of money. Where does that money come from? From us, of course. When I started in the software business, 10% annual maintenance was considered egregious, but now companies are demanding 20% and sometimes 25%. Why? The vulnerability game has given vendors a fantastic new way to "lock in" customers: if you stop buying maintenance and get off the upgrade hamster-wheel, you’re guaranteed to get reamed by some hack-robot within six months of your software falling out of date. I’ve always found it ironic that the vulnerability "researchers" like to sport the colors of the software counter-culture, when they’re really a tool of the big software companies whose ankles they nip at.

One place where we agree is on the theory that you need to think in terms of failure modes in order to build something failure-resistant. Or, as you put it, "think like an attacker." But, really, it’s just a matter of understanding failure modes – whether it’s an error from a hacking attempt or just a fumble-fingered user, software needs to be able to do the correct thing. That’s Programming 101: check inputs, fail safely, don’t expect the user to read the manual, etc. But we don’t need thousands of people who know how to think like bad guys – we need dozens of them at most. Those are the guys who can look out for new categories of errors. New categories of errors don’t come along very often – the last big one I remember was Kocher’s CPU timing attacks against public-key exponents. Once he published that, the cryptography community added that category of problem to its list of things to worry about and moved on. Why doesn’t software development react the same way? The industry’s response has been ineffective: rather than trying to solve, for example, buffer overflows as a category of problem, we’ve got software giants like Microsoft spending millions of dollars trying to track down and eradicate individual buffer overflows in their code. The vulnerability game, as it’s being played right now, only encourages the eradication of individual ants, which is not an effective strategy if you’re out in a field having a picnic.
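
For what it’s worth, here is what the cryptographers’ category-level response looks like in code. After Kocher, the blanket rule became: don’t let the time a sensitive operation takes depend on the secret data it touches. The comparison routine below is a generic constant-time idiom, not Kocher’s specific fix for modular exponentiation, but it shows the shape of "solving the category": once something like this is in the library, nobody has to hunt for individual timing leaks in comparison code again.

    #include <stddef.h>

    /* Data-independent comparison: the running time depends only on 'len',  */
    /* never on where the first mismatch occurs, so timing reveals nothing   */
    /* about the secret being compared. A generic idiom, offered here only   */
    /* as an illustration of fixing a category rather than an instance.      */
    int constant_time_equal(const unsigned char *a,
                            const unsigned char *b, size_t len)
    {
        unsigned char diff = 0;
        size_t i;

        for (i = 0; i < len; i++)
            diff |= (unsigned char)(a[i] ^ b[i]);  /* accumulate, never branch */
        return diff == 0;
    }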

The biggest mistake people make about the vulnerability game is that they fall for the ideology that "exposing the problem will help." I can prove to you how wrong that is, simply by pointing at "Web 2.0" as an example. Has any of what we’ve learned about writing software over the last 20 years been expressed in the design of "Web 2.0"? Of course not! It can’t even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, "Web 2.0" would not be happening. Trust model? What’s that? The vulnerability "researchers" are already sharpening their knives for the coming feast. If we were really interested in making software more secure, we’d be trying to get software development environments to facilitate writing safer code: fix entire categories of bugs at the point of maximum leverage.
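
To be concrete about "the point of maximum leverage": if the development environment only handed programmers buffers that know their own capacity, the overflow category would disappear without anyone hunting down a single instance of it. The toy sketch below is purely illustrative; the bounded_buf type and buf_append name are mine, not anything a real library ships.

    #include <string.h>

    /* Toy illustration of fixing a category at the API level: a buffer    */
    /* that carries its own capacity, so every write is bounds-checked by  */
    /* construction instead of by programmer vigilance.                    */
    struct bounded_buf {
        char   data[512];
        size_t used;
    };

    int buf_append(struct bounded_buf *b, const char *s)
    {
        size_t n = strlen(s);

        if (n >= sizeof b->data - b->used)
            return -1;                        /* refuse rather than overflow */
        memcpy(b->data + b->used, s, n + 1);  /* copy string plus its NUL    */
        b->used += n;
        return 0;
    }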

Your argument that vulnerability "research" helps teach us how to make better software would carry some weight if software were actually getting better rather than just more expensive and more complex. In fact, the opposite is happening – and it scares me.