Vulnerability Research

Point: Bruce Schneier / Counterpoint: Marcus Ranum

One of the vulnerabilities exploited by the Morris worm was a buffer overflow in BSD fingerd(8). You argue that searching out vulnerabilities and exposing them is going to help improve the quality of software, but it obviously has not: the last 20 years of software development (don't call it "engineering," please!) absolutely refutes your position. Not only do we still have buffer overflows, but I think it's safe to say that not a single category of vulnerability has been definitively eradicated. That's where proponents of vulnerability "research" make a basic mistake: if you want to improve things, you need to search for cures for categories of problems, not individual instances. In general, the state of vulnerability "research" has remained stuck at "look for one more bug in an important piece of software so I can collect my 15 seconds of fame, a 'thank you' note extorted from a vendor, and my cash bounty from the vulnerability market." That's not "research"; that's just plain "search." Maybe we could call it "vulnerability mining" or something more indicative of its intellectual status. But let's not: real mining is hard, dangerous, important work.
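To make the category concrete, here's a minimal C sketch of the flaw class fingerd fell into (the historical bug was an unbounded gets() into a fixed buffer; this is an illustration of the pattern, not the actual fingerd source):

    #include <stdio.h>
    #include <string.h>

    /* The whole bug category in one line: copying attacker-controlled input
     * into a fixed-size stack buffer with no bound, so an oversized request
     * overwrites adjacent memory, including the saved return address. */
    static void handle_request_unsafe(const char *request)
    {
        char line[512];
        strcpy(line, request);                      /* no bound check */
    }

    /* The boring, category-level fix: check the input, fail safely. */
    static void handle_request_bounded(const char *request)
    {
        char line[512];
        if (strlen(request) >= sizeof line)
            return;                                 /* reject oversized input */
        strcpy(line, request);
    }

    int main(void)
    {
        handle_request_bounded("finger user@host");
        handle_request_unsafe("finger user@host");  /* harmless here, but nothing enforces it */
        return 0;
    }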

If you look at buffer overflows as an example of a category of software flaws, you can see the tremendous progress we've made in eradicating them: zero. There are development and runtime tools intended to help detect or mitigate that category of flaw, but I'd estimate that fewer than one in a thousand full-time programmers has ever used one. So what's the value of "researching" a new buffer overflow in some browser or other? That's akin to making the momentous discovery that you've got ants in your kitchen. Nobody in their right mind worries about every single ant in their kitchen; they search for broadly applicable responses to insect invasion as a category of problem. That's the right way to do things. But, of course, the vulnerability "researchers" aren't going to do anything like that, because they're perfectly happy to keep pointing out individual ants as long as they make $2,000 to $10,000 per ant.
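Those category-level tools do exist, for what that's worth. A minimal sketch, assuming gcc or clang: compile-time and runtime instrumentation that flags any out-of-bounds write, rather than one particular bug (the build line is illustrative; the flags are standard in both compilers):

    /*
     *   cc -g -fstack-protector-strong -fsanitize=address overflow.c
     *
     * With AddressSanitizer enabled, the out-of-bounds write below aborts
     * the program with a stack-buffer-overflow report instead of silently
     * corrupting memory.
     */
    #include <string.h>

    int main(void)
    {
        char buf[8];
        memset(buf, 'A', 16);   /* 16 bytes into an 8-byte buffer */
        return 0;
    }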

The economics of the vulnerability game don't include "making software better" as one of the options. They do, however, include "making software more expensive." Whenever a software giant like Microsoft or Oracle has to force-bump its QA process to rush out a patch, it costs a lot of money. Where does that money come from? Us, of course. When I started in the software business, 10% annual maintenance was considered egregious; now companies are demanding 20% and sometimes 25%. Why? The vulnerability game has given vendors a fantastic new way to "lock in" customers: if you stop buying maintenance and get off the upgrade hamster-wheel, you're guaranteed to get reamed by some hack-robot within 6 months of your software getting out of date. I've always found it ironic that the vulnerability "researchers" like to sport the colors of the software counter-culture, but they're really a tool of the big software companies whose ankles they feed upon.

One place where we agree is on the theory that you need to think in terms of failure modes in order to build something failure-resistant. Or, as you put it, "think like an attacker." But, really, it's just a matter of understanding failure modes: whether it's an error from a hacking attempt or just a fumble-fingered user, software needs to be able to do the correct thing. That's Programming 101: check inputs, fail safely, don't expect the user to read the manual, etc. But we don't need thousands of people who know how to think like bad guys; we need dozens of them at most. Those are the guys who can look out for new categories of errors. New categories of errors don't come along very often; the last big one I remember was Kocher's CPU/timing attacks against public-key exponents. Once he published that, the cryptography community added that category of problem to its list of things to worry about and moved on. Why is it that software development doesn't react similarly? The industry's response has been ineffective! Rather than trying to solve, for example, buffer overflows as a category of problem, we've got software giants like Microsoft spending millions of dollars trying to track down and eradicate individual buffer overflows in their code. The vulnerability game, as it's being played right now, only encourages the eradication of individual ants, which is not an effective strategy if you're out in a field having a picnic.
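For a sense of what a category-level response looks like, here's a minimal C sketch of the constant-time comparison idiom the cryptography community settled on after Kocher's timing attacks (one common idiom, not any particular library's code). A naive memcmp() returns as soon as bytes differ, leaking how much of a secret matched; this version touches every byte regardless:

    #include <stddef.h>
    #include <stdint.h>

    /* Compare two buffers in time independent of where they differ. */
    int constant_time_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];    /* accumulate differences, no early exit */
        return diff == 0;           /* 1 if equal, 0 otherwise */
    }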

The biggest mistake people make about the vulnerability game is that they fall for the ideology that "exposing the problem will help." I can prove to you how wrong that is simply by pointing at "Web 2.0" as an example. Has the last 20 years of what we've learned about writing software been expressed in the design of "Web 2.0"? Of course not! It can't even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, "Web 2.0" would not be happening. Trust model? What's that? The vulnerability "researchers" are already sharpening their knives for the coming feast. If we were really interested in making software more secure, we'd be trying to get software development environments to facilitate the development of safer code: fix entire categories of bugs at the point of maximum leverage.
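Here's one way to read "fix categories at the point of maximum leverage," sketched in C: hand developers an API whose contract makes the overflow impossible, so nobody has to audit individual call sites one ant at a time. (copy_str() is a hypothetical helper for illustration, not a standard function.)

    #include <stdio.h>
    #include <string.h>

    /* Always NUL-terminates and never writes past dst[dst_size - 1].
     * Returns the length of src so callers can detect truncation. */
    static size_t copy_str(char *dst, size_t dst_size, const char *src)
    {
        size_t n = strlen(src);
        if (dst_size == 0)
            return n;
        size_t to_copy = (n < dst_size - 1) ? n : dst_size - 1;
        memcpy(dst, src, to_copy);
        dst[to_copy] = '\0';
        return n;
    }

    int main(void)
    {
        char name[16];
        copy_str(name, sizeof name, "input far longer than sixteen bytes");
        printf("%s\n", name);       /* safely truncated, no overflow */
        return 0;
    }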

If your argument is that vulnerability "research" helps teach us how to make better software, it would carry some weight if software were getting better rather than more expensive and complex. In fact, the opposite is happening, and it scares me.