Intellectual Honesty

In this episode I'd like to talk a little bit about intellectual honesty and integrity.

What does that have to do with computer security? Well, frankly, I think it has a lot to do with computer security, because it's conspicuously lacking in many parts of the field. To me, intellectual honesty and integrity are the fuel that makes science work - and, as many of you know, I've been arguing for years that this is a field that desperately needs a bit more scientific rigor.

There are two case-studies of intellectual dishonesty that I'd like to present, today.

The first one is how security surveys and polls are generally performed. I feel I had to throw the word "generally" in there because there may, someplace, be a security-related survey that isn't completely bogus - but I haven't run across it, yet. But, perhaps, like the Yeti or Sasquatch, there is actually a decent security-related survey out there so I'll say "generally."

One of the first things they teach you in Statistics 101 is how bias can creep into a sample. There are lots of ways, but one of the most newbie, amateurish ways you can bias your data is to do what's called a "self-selected sample." A self-selected sample is one in which the people providing the data in your sample are the ones who wanted to provide it. My favorite example of a self-selected sample comes from a number of years ago, when a certain website ran a poll regarding something-or-other. I can't even remember what. When they started looking at the statistics, "the numbers came out all wrong," and when they looked more closely they realized that what they had was a survey whose responses correlated highly with the respondent being unemployed. Because that was who tended to have time to read through that particular site and be bored/have enough free time to take the survey. OOPS! And that story carries another example of how to be intellectually dishonest with a survey - "the numbers came out all wrong." In other words, the people running the survey already knew the results they expected, and opened an investigation when their expected results didn't dutifully appear. Does it make you wonder whether they would have investigated their statistics if the survey had turned out the way they expected it to?
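If you want to see just how badly self-selection can skew a number, here's a minimal sketch in Python. The population size, unemployment rate, and response probabilities are all made up purely for illustration - the only point is that when one group is twenty times more likely to bother answering, that group dominates the results even though it's a small slice of the population.

    import random
    random.seed(1)

    POPULATION = 100_000
    UNEMPLOYMENT_RATE = 0.05        # assume 5% of the population is unemployed (made-up number)
    P_RESPOND_IF_UNEMPLOYED = 0.20  # bored, lots of free time: 20% bother to answer (made-up)
    P_RESPOND_IF_EMPLOYED = 0.01    # busy: only 1% bother to answer (made-up)

    respondents = []
    for _ in range(POPULATION):
        unemployed = random.random() < UNEMPLOYMENT_RATE
        p = P_RESPOND_IF_UNEMPLOYED if unemployed else P_RESPOND_IF_EMPLOYED
        if random.random() < p:
            respondents.append(unemployed)

    # The sample reflects who chose to answer, not what the population looks like.
    print(f"unemployed in the population:  {UNEMPLOYMENT_RATE:.0%}")
    print(f"unemployed among respondents:  {sum(respondents) / len(respondents):.0%}")

Run it and roughly half of the respondents come out unemployed, even though only one in twenty people in the population is. The poll measured who had free time that afternoon, not what the population thinks.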

There are several surveys every year that come in regarding computer security. Some of them are one-offs and others are annual affairs. Some of them are sponsored by conferences, others by vendors (a sure sign that there are going to be some really wonderful methodological flaws!) and others are earnest but misguided attempts by the trade press to get an idea of what's going on in the industry. CIO Magazine did one survey about expected security expenditures and technology focus - what they did was have a form you could fill out if you felt like it and send in. On the form you could mark down how much security expenditure you had discretionary control over, what your job title was, etc. Uh, sure, of course I'm CEO. Of course I am going to spend $4 million on antivirus next week. So the survey is based on - well, we don't know what. That's where the intellectual honesty aspect comes in. It would at least be honest if the survey results said, "90% of respondents who claimed to be CIOs claimed that they were going to spend 40% of their IT budget on antivirus." If you're going to do a survey that appears to have something to do with what "IT executives" are currently thinking, then you need to make sure that the respondents are actual IT executives - not just the bored ones who didn't have anything to do that day and decided to give the survey to their secretary to fill out between rounds of golf. Remember: when you open a survey so that anyone can take it, you can't call it a "survey of IT executives." It's perfectly possible that a bunch of survey-loving chimpanzees decided to indicate that they were spending all their IT dollars on bananas next year. The point is: nobody knows.

Whenever I saw one of these bogus surveys, I used to write letters to the editors or conference organizers complaining about bad science. And - on the rare occasions that I got a response - it invariably was something like, "Lighten up, you pedant. Sure, there were some methodological flaws in our survey, but the numbers are still useful - maybe they are wrong but they are better than nothing." What!?!? When did wrong become better than nothing? Well, I guess wrong is better than nothing as long as your results are what you wanted them to be. But if that's the level of intellectual honesty some of these people are aiming for, they may as well just make up a bunch of numbers and be done with it. It's nearly the same thing, just with a thin veneer of pseudo-science layered on top. Ultimately, these surveys become self-serving BS tantamount to asking the membership of the National Rifle Association whether they favor private gun ownership. Because that's what's going on here - they want to produce a bogo-number so that IT managers can go to their bosses and say, "Look! 90% of the people who claim to be CIOs in this survey claim to be prepared to spend 40% of their IT budget on bananas!" Oh, uh, I mean, antivirus.

In other words, what I think is going on here is that people use these bogus surveys to manipulate their management. Granted, sometimes senior management are too stupid about technology to understand that money must be invested in security - but the right way to do it is not to lie to them: present your case without resorting to cooked statistics and if they blow you off, then they blow you off.

You cannot turn a clueless senior executive into a good one by blowing smoke up his standard input stream.

Another huge bit of intellectual dishonesty that drives me nuts is the way that some parts of the security industry deliberately manipulate the media (and therefore the public) by taking advantage of their basic ignorance. I've ranted about this before in a lot of contexts - most specifically surrounding the disclosure debate (which I don't want to get into today) - but the trade press is so ignorant about, well, almost everything, that they pretty much print what they're told. Even when it's patently ridiculous. And the problem is getting worse because, now, thanks to internet blogs and podcasts (like this one!), you've got even fewer filters to weed out mistakes or deliberate disinformation. Sometimes I can't tell, to be honest with you, whether these people are just dumb/wrong or lying/spin-controlling. It's hard. Let's look at one recent example.

The German government has passed a law against hacking tools.

…and everyone in the security community starts hopping up and down about it. Eeek! They are taking my hacking tools away! (For some typical reactions, see Bruce Schneier's blog.) "Security researchers"* are very upset. OK. Now, for the next piece of this I need you to listen carefully, because I'm going to get a little bit subtle.

Regardless of whether the law is well-drafted or not, or is stupid or not, the people who are jumping up and down about it are ignoring a huge piece of what's actually going on. When someone passes a law against something, it does not somehow automatically result in everyone who is on the fringes of that law getting smacked down. When you read all the mouth-foaming blog entries on this topic, they make it sound as if the SWAT teams are going to start kicking down doors and rounding people up, tomorrow. But if you think for a second about how law enforcement works (especially with something ambiguous like "hacker tools") you'll realize that your perception of the situation is being manipulated.

For simplicity I'm going to frame this in terms of how things work in the US - in Germany a few of the details change, but they're basically the same. In fact most societies under the rule of law work pretty much the same, because most societies under the rule of law have internalized the deep awareness that laws are always open to interpretation, and there need to be checks and balances built into the process of enforcing the law.

Before the police get a warrant to kick my door in so they can arrest me for possessing a copy of Nessus, they've got to convince a judge that a warrant is justified. And the cops aren't just going to go kicking in the doors of Nessus users at random - a district attorney is going to look at the case and decide whether or not he/she is going to gamble their career on it. It's as simple as that. A magistrate or district attorney who brings a bunch of stupid cases in front of a judge is going to wind up in a new line of work - "do you want schnitzels with that?" - pretty quickly. Let me put that another way - there isn't a district attorney in the US who, if "hacking tools" were illegal, would try to charge me with possession of a hacking tool because I have Nessus. For exactly the same reason that there isn't a DA in the US who'd try to charge someone who designs locks for Mosler with possession of burglar tools.

Remember - what I'm talking about here is subtle - I don't want to get into an argument about whether or not Nessus is a hacking tool, or what's a "good tool" or what's a "bad tool." A magistrate in Germany or a DA in the US knows that the argument about what is a "hacking tool" or not is going to take place in the courtroom if they bring a case against someone. They're not going to bring that case in front of a judge unless they are fairly sure they will win it. By the same token, if they arrest some guy who is selling the "l33t-0-matic spyware writer's t00lkit" for having hacking tools, they might well think they've got a good enough case to give it a shot. Justice may be blind, but the underlying system has a basic presumption that it's being enforced by rational people. And, it pretty much is!

So, when the "security researchers" started hopping up and down about how awful it was that Germany made hacking tools illegal, what they were doing was being intellectually dishonest. They were trying to manipulate our collective perceptions of the situation by omitting a very important part of it - namely, that enforcement of the law is not blind. This bothers me a great deal because it makes me suspect their motives. Frankly, if I were producing a "dual use" technology, I'd be a little concerned about its misuse. And I'd feel morally obligated to try to minimize the risks of its being used inappropriately. First, I'd do it because it's the right thing to do; second, I'd take some comfort in the awareness that, if someone tried to come after me legally in a case where my tool had been abused, I'd be able to argue that the abuser had to take additional steps to defeat or bypass the safety controls I'd put in place. If you think about it for a second, you'll see that examples of this kind of thing crop up all over the place. It's why chainsaw manufacturers put chain-brakes on their products: 1) it makes the saws safer, and 2) if some idiot disables the chain-brake, he's not going to get much sympathy if he complains that the saw was unsafe. This case with the "hacking tools" law is no different.

Challenge people's motives. (Even mine.)

I'm not trying to encourage everyone to become a horrible, cynical nihilist like myself, but when someone starts telling you something is terrible, especially on TV or the web, you're abandoning your intellect if you simply accept what they're saying without examining it critically. I probably don't need to say this, but: I doubt that many of the people making a fuss about this issue are sincere, intellectually honest, or have the public interest in mind.

Intellectual honesty is presenting, fairly, all of the sides of a problem that you understand.** It's being willing to say, "We can't perform scientific surveys of industry trends, so instead of a broad survey that presents questionable numbers, we're going to fall back on tight analysis of a few trend-setters." It's being willing to say, "The Germans are trying to grapple with a complex problem, and it's important for us to consider all the social controls that are in place."

Like most other new fields, computer security has its share of charlatans. And, like most charlatans, our charlatans are better at manipulating the media and spewing plausible-sounding nonsense than our more level-headed practitioners. To me, this appears to be human nature. While scientists and orderly thinkers are busy working on understanding the problem, the goofballs and carpetbaggers have plenty of time to mug for the cameras and give the journalists cool-sounding sound bites. If you think back over the last 10 years of computer security, there has been a tremendous amount of this kind of behavior. How can we fight it? I don't know - but maybe asking people to just "hang on, take a deep breath, and consider this problem from all the angles…" will help keep the mental playing field a bit more level.

Thanks for listening!


* Somehow the term "security researcher" has come into vogue as a term to describe the plethora of lamers out there who sit single-stepping through a debugger so they can find a new buffer overrun in a product and claim their 15 seconds of fame by disclosing it. Where I come from, "security researchers" are people like Bill Cheswick, Steve Bellovin, and Peter Neumann - each of whom has contributed a tremendous amount of original thought to the field of computer security. Calling the bug-hunters and vulnerability pimps "security researchers" is an insult to real security researchers.

** Fairly does not mean equal time - which is another mistake the media often makes. If you spend 5 minutes interviewing a Nobel prize-winning physicist about the Big Bang you should not feel obligated to spend 5 minutes - or even 5 seconds - examining the opposing position of the superstitious, or all of the other conflicting crackpottery that's out there.