"In order to know how to defend, you must first know how to attack." "Learning the tools and techniques of the bad guys helps you defend against them!" Blah, blah, blah - conference brochures, popular books, trade press, and websites all enthusiastically support the notion that to be a good security practitioner, you need to know the tricks of the bad guys. Kevin Mitnick's "The art of deception" (Wiley and Sons) sells well to executives and industry practitioners alike; it's a disingenous amalgam of tall tales and commonsense - but - is it valuable?
At a certain point, you don't need to know the infinite details of possible attacks; it's enough simply to know the broad categories of attack. Having someone stand up and say, "here are 400 different variations on a buffer overrun" is not interesting. Any coder who is writing mission-critical software (especially on network operating systems that don't enforce any kind of permissions model) ought to know about buffer overruns. If they don't, they are not a good programmer. It's as simple as that. But having a programmer who knows how to safely handle network I/O and application input means that, until some entirely new attack paradigm is invented, avoiding buffer overruns is a mere "implementation detail." New attack paradigms are invented only rarely. For example, we've known about buffer overruns since at least the mid-1980s; the Morris internet worm of '88 used a buffer overrun in fingerd. On the other hand, private-key recovery via CPU timing attacks (used against public-key cryptography) was a brilliant new attack paradigm when it was first disclosed in the mid-1990s. Today's cryptographers are expected to know about that type of attack and design around it, just as today's programmers are expected to know about buffer overruns and code around them. Fundamental breakthroughs in offensive (or defensive) techniques just don't happen very often. When they do, people catch on to them pretty quickly, and life goes on.
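To make that concrete - a sketch of my own, not anyone's canonical fix, with the function name invented for illustration - the discipline that demotes buffer overruns to an "implementation detail" is nothing more exotic than refusing to copy network input anywhere without checking lengths first:

/* A minimal, hypothetical sketch of bounded network input handling.
 * Reads up to buflen-1 bytes or until newline; always NUL-terminates.
 * Returns bytes stored, or -1 on error/EOF. */
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t read_line_bounded(int fd, char *buf, size_t buflen)
{
    size_t used = 0;

    if (buflen == 0)
        return -1;

    while (used < buflen - 1) {
        char c;
        ssize_t n = recv(fd, &c, 1, 0);

        if (n <= 0)
            return -1;      /* peer closed, or error */
        if (c == '\n')
            break;          /* end of line */
        buf[used++] = c;
    }
    buf[used] = '\0';       /* the step gets() famously skipped */
    return (ssize_t)used;
}

Boring code, and that's the point: write every read that way and the whole class of bug stops being news.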
So what does that have to do with the value of a hacker's skills? Well, first off, it means that it's completely pointless for a security practitioner to waste his time cataloguing all the different buffer overruns being exploited at any given time. That catalogue, however, is exactly what the hackers value: they want the "hot list" of active techniques. Try offering a hacker the fact that there's a buffer overrun in SunOS 3.2 fingerd and he's going to laugh at you (assuming he even knows what SunOS was). The knowledge-base of the hacker goes stale over time - fairly quickly, in fact - but the knowledge-base of a security practitioner stays relevant for years. Put another way: I've known about buffer overruns for 14 years and have tried to code around them - I don't need to worry about buffer overruns in someone else's code. I can just assume they're there. :) And they'll assume that there are buffer overruns in my code, and we'll both try to design our systems to be secure in spite of the buffer overruns. That's layered design, and that's another thing they don't teach you in hacker school. Hackers don't have to learn about layered design because they are searching for a single flaw or magic bullet that lets them get the job done. Someone who knows how to build a well-architected layered system with interlocking security functions will always be much more valuable than the guy who finds a flaw in a single piece of code or a single design.
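To give "secure in spite of the buffer overruns" a concrete shape - this is an illustrative sketch, with the user and directory names made up, not a prescription - one classic layer is to have a network daemon throw away its privileges and lock itself into an empty directory before it ever parses untrusted input. Then an overrun in the parser subverts a process that has nothing left to give away:

/* Illustrative layering (hypothetical sketch; error handling terse):
 * by the time attacker-controlled data gets parsed, the process has
 * already confined itself. Must start as root for chroot/setuid. */
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>

static void confine(const char *user, const char *jail)
{
    struct passwd *pw = getpwnam(user);  /* look up before chroot */

    if (pw == NULL || chroot(jail) != 0 || chdir("/") != 0)
        exit(1);                         /* refuse to run unconfined */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
        exit(1);                         /* gid first, then uid */
}

int main(void)
{
    confine("nobody", "/var/empty");
    /* ...only now accept connections and parse untrusted input... */
    return 0;
}

No single layer is the trick; the point is that the layers interlock, so the flaw a hacker finds in one of them doesn't hand him the whole system.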
In its ultimate expression, hacking is also a design discipline. The high end of hacker skill uses knowledge of system design to predict where flaws are likely to manifest themselves in an architecture. Security analysts do this when they are reviewing or "red teaming" software - basically, you use the inverse of the design principles to infer where there might be errors. For example, if you have a piece of software that listens on a network port, it is probably expecting to read or write data with other systems. That process of reading/writing, as we have seen, is dangerous - a security analyst would start by examining the routines that do the network I/O, searching for buffer overruns or places where the network input might be used to compose a command or configuration option. When hackers attack a piece of code, if they have the source, the first thing they do is grep out all the instances of read(), bind(), and recv(). Of course, a decent programmer will have already been there and done that. I know a code-wizard who puts cheerful taunts in his source code for the hackers to find:
/* Yes, hacker-punk, I thought of that. -bob */
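What that grep-driven audit usually turns up - a deliberately simplified, hypothetical fragment, not code from any real daemon - is a fixed-size buffer being filled under the control of whoever is at the other end of the socket:

char buf[128];

/* BAD: lets the peer write up to 1024 bytes into a 128-byte buffer,
 * and recv() doesn't NUL-terminate - a textbook overrun setup. */
recv(fd, buf, 1024, 0);

/* BETTER: bound the read by the buffer and leave room for the NUL. */
ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
if (n < 0) {
    /* handle the error; don't touch buf */
} else {
    buf[n] = '\0';   /* terminate before treating it as a string */
}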
I think we're at the end of the beginning of the computer security era. As with many new disciplines, it had its "wild and woolly" phase, but now it's going to mature. My guess is that the hacker is going to become less interesting at about the same rate as the security analyst - it's not that we've solved all the problems, but rather that the problems are becoming less interesting to solve. Many people have realized that software development needs to become an engineering discipline and, as such, it will adopt practices and processes that reduce the likelihood of uninteresting mistakes. The hackers who thrive on those uninteresting mistakes will dry up and blow away. The remaining handful - the ones skilled enough to discover new classes of mistakes - will mostly be funded academics or industry researchers. I'll be glad when it's over, because it'll mean we've moved out of the stage where niggling little details were press-worthy and into a stage where we can go back to doing a bad job of dealing with the big picture, instead.
mjr.
In the air over Ohio, November, 2004