Episode 1: Welcome

Welcome. I'm Marcus Ranum. I'd like to greet you and invite you to join us for the inaugural edition of the "Rear Guard" Security Podcast.

What is the purpose of the "Rear Guard" Security Podcast? The idea of this is to present a series of interviews and opinion pieces about the state of computer security. The philosophy of guarding computer security. The past of computer security - and to look at some of the industry trends, where things are going, but largely where they came from. What ideas are new, what ideas are good, what ideas are old, what ideas are good but are being ignored -- looking at the state of the art, the state of inventions, and attempting to debunk a little bit of hype.

I've got a lot of interesting friends in the industry. Some of them are old school practitioners who have been around for a very long time. Some of my friends in this industry are people who started working with computer security the year that I was born. I'm 44 years old now. So we're talking people who have been around for a very long time and who have had a lot of time to think about these problems. I'm really looking forward to sitting down with them and getting a chance to interview them and cast some of their past experiences in terms of the problems that we're dealing with today. I think that should be useful and interesting to all of us. I'm certainly looking forward to it.

A couple of production details as we go forward. The website, www.rearguardsecurity.com, is the primary interface for getting to anything related to the "Rear Guard" Podcast. If you want to email me, I'm mjr@ranum.com, or mjr@rearguardsecurity.com. I'm going to try to keep these sessions down to about a half an hour, give or take. We're not doing advertising, so you don't have to worry about being interrupted with advertising jingles or any of that kind of nonsense, unless I slip an advertising jingle in just as a way of amusing myself someplace in the process.

I'm not sponsored by anyone to do this. You'll notice I've listed sponsors on the site, and basically those are people who've agreed to help pay for the bandwidth. In return for paying for the bandwidth, they're getting nothing except a sponsorship icon on the site. I'm not going to talk about specific products or technologies unless I'm doing a review or a critique of something that I've worked with. We've got absolutely zero agenda here, other than to talk about security and, hopefully, be thought provoking and interesting.

Since this is being produced as an amateur effort in my copious free time, I don't have a fancy audio editing team or a web team or any of that kind of stuff. I'm doing all my own production, and I'm hoping the audio is not going to sound too raw. But I'll warn you in advance not to be surprised if occasionally it does sound a little bit raw. I'm not going to edit this down and slick it up. The plan is to treat this like a series of extended conversations.

I'm extremely psyched about some of the guests that I've got lined up for the future. Some of the people who have already agreed to play: Peter Neumann, Marv Schaefer, Bob Abbott, Dan Geer, Avi Rubin, Bruce Schneier. Folks in the industry who have "been there and done that," and I'm looking forward to grilling them. Maybe poking them, and in the best of all possible worlds having a really good fight with them about topics that are near and dear to us.

So I'd like to start off with a topic that's near and dear to me. And that's the question of whether there are any underlying rules of computer security. One of the things that comes up a whole lot in the industry is people come along and release products that essentially promise to accomplish what I think is the impossible. And a lot of people buy them. The current "hot topics" that I'm seeing getting a great deal of play right now in the industry are data leakage prevention and de-perimeterization. I'm hoping we'll have a session on those two topics eventually. They're a great example of what I consider promising the impossible. But people read magazine articles and they get very excited about these things. And some people like myself sit back and go, "this is a bad idea and here's why." But it's very difficult to get people to understand the underlying rules of computer security so that they can get a feeling for why, when someone claims to do these things, it is in fact impossible.

So what I thought I would do with this session is lay out what I think are the underlying rules of security, the rules that you just can't break.

When you talk about physical laws, you've got things like momentum -- the bigger they come, the harder they hit. Everything gets colder over time, and so forth. Those kinds of rules. If somebody comes along to an engineer and says that they have a perpetual motion machine, the first thing the engineer is going to do is start asking about friction and stuff like that. They'll know right away that it's pure B.S. because perpetual motion violates the laws of how the universe works. Whereas in computer security, people come along and say, "I've got a 100% reliable, signature-less anomaly detection system that never generates false positives or misses an attack." That's B.S. too, right?

Tell a physicist about your perpetual motion machine and they'll think you're talking B.S., but a computer security practitioner is going to ask, "ooh, how much does it cost?" So what I'd like to do is get you to the point where you understand what can and can't be done. When people come to you with these perpetual motion machines of computer security, I want you to poke them in the eye and tell them it's from me.

I spend too much time reading the writings of scientists. I've always been interested in science. I've always felt that computer security was an area that could use a great deal of sciencing up. I think that would help a lot.

Internet security has got very little to do with science. We are an art form in which we try to encourage people to spend large amounts of money to do what we think they should do. But when we actually try to justify why we think they should do it, our answers usually fall down to hand waving. I'd like to talk a little bit about that, as well, in context.

One of the questions is, what is a science? How do you do science, and what would we do if we were treating computer security as if it was a science? Science is all about method. The scientific method is related specifically to repeatability and measurability. Those are the two values of something that you're going to bring underneath the rubric of a science. You want to be able to say, I did this thing and it created the following effects. In fact, every time I did this thing, it created the following effect.

The next step from that is to be able to measure the effect. Then you can say, well, I varied how hard I did this thing, and it varied how hard the thing was affected, and you can start to figure things out from there. In computer security, in computer science, we don't seem to do a great deal of that. There is some measuring. But if we were treating computer security as if it was a science, we'd be able to answer questions like, if I have a firewall, I am 30 percent more secure than if I don't. Obviously we can tell right away that that is a ridiculous premise with the way people are doing computer security, because I'd have to say, well, the firewall is configured in the following way. My network's running the following operating systems. Blah, blah, blah. I'd have to throw 50 or 100 pages worth of conditionals on there. And that 30% number that I just pulled out of thin air and threw at you is completely meaningless. So we need to get that back in. I think science in computer security would be nice. At this point, we really have none. Most of the industry is enamored with attractive-sounding b.s., where somebody pulls a number out of thin air and throws it out, and then other people use it and use it ad infinitum.

We've all probably heard the attractive-sounding b.s. that 80% of attacks come from the inside. You've heard that. I've heard that. Hell, I've even said it. Well, one of the things to think about there is that if we decompose that, as if we were going to actually try to measure 80% of attacks coming from the inside, it falls apart pretty quickly. How do you define an attack? 80% of what comes from the inside? Is it successful attacks or is it unsuccessful attacks? Is it attacks against critical systems? Is a port scan an attack? Is a telnet that gets to a login prompt an attack?

Suddenly we discover that these questions, which would allow us to actually pin some kind of a reality on the 80% figure, those questions aren't answered. So where did that 80% of attacks come from the inside number come from? Well, someone pulled it out of his butt at a conference and threw it over a microphone, and people said, ooh, that sounds really good. And it started getting quoted. And it's gotten quoted, and it's become part of the oral tradition. It is completely bogus.

It would be nice to try to be able to do something about that. I don't think there's anything we can do without being able to quantify things a whole lot more. And in order to quantify things a whole lot more, we would have to be able to standardize them. But this notion of being able to throw out 80% is very attractive to pointy haired suits. And it spawned this entire ideology that I hope to do an entire couple of sessions on, called risk management. The idea of risk management is that you're going to try to play some sort of probability game so that you can go to your bosses and you can say that, "Well, we've identified that having a firewall reduces the likelihood of a blah, blah, blah failure happening by 30%. The firewall costs $10,000 and the thing at risk costs $150 million and, therefore, it's cost justified. Give me my money."
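To see what that spreadsheet math actually looks like, here's a minimal sketch in Python. Every number in it -- the asset value, the failure probability, the 30% reduction -- is invented for illustration, which is exactly the problem.

```python
# A minimal sketch of the back-of-the-envelope "risk management" arithmetic
# described above. Every input is a guess pulled out of thin air -- the
# numbers below are invented purely for illustration.

asset_value = 150_000_000          # what's at risk (a guess)
annual_failure_probability = 0.01  # odds of the "blah, blah, blah failure" per year (a guess)
reduction_from_firewall = 0.30     # claimed reduction in that likelihood (a guess)
firewall_cost = 10_000

expected_annual_loss = asset_value * annual_failure_probability
claimed_annual_savings = expected_annual_loss * reduction_from_firewall

print(f"Expected annual loss without the firewall: ${expected_annual_loss:,.0f}")
print(f"Claimed annual savings with the firewall:  ${claimed_annual_savings:,.0f}")
print(f"Firewall cost:                             ${firewall_cost:,}")
print("Cost justified!" if claimed_annual_savings > firewall_cost else "Not justified.")
```

Garbage in, garbage out: the spreadsheet happily cost-justifies anything you feed it.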

Really what's going on with this risk management notion is that you're taking these wild guesses that sound really good, and you're sticking them on a spreadsheet, or more often you're pulling them out of your butt and pinning them onto a spreadsheet. It's giving you this complete misinformation. We've all heard about spreadsheet math and GIGO -- garbage in, garbage out -- that's basically what we're talking about. The real truth about what's going on with this risk management stuff is that we security practitioners have done such a bad job of being able to quantify the effectiveness of systems that we have to go to managers and essentially give them attractive sounding lies in order to get budget to do the things that ought to have been done in the first place. So how do we do that? Lots and lots of fudge factors!

But if you're talking about anything where you've got probabilities, and you have an unknown, your probabilities become meaningless. The whole notion of risk management is that you're going to take these probabilities and you're going to try to somehow factor them into something that makes sense. Really, the value of the exercise of risk management is that it forces you to sit down and build a list of what's important to you, what are you concerned about, what are you afraid of happening, and then what things should you fix in approximate order. If it makes you feel emotionally comfortable to associate some kind of numbers with that, just so that you can order things, that's fine. But don't b.s. yourself into thinking it's science. We'll come back to risk management someday in the future.

Another problem, another piece of attractive sounding b.s., that's going on in the industry is this notion of penetration testing.

Now I don't intend to attack penetration testing head on in this session. We'll get to penetration testing later on. Hopefully I can do a debate with a penetration tester in a later edition of this. But the idea of penetration testing is to try to determine the quality and the quantity of an unknown quantity, using a constantly varying set of conditions. So basically what you're doing is you're saying, I don't really know anything about the actual security of my network. So what I'm going to do is I'm going to hire someone to do the kinds of things that a hacker would do. Now the person that I'm going to hire is also an unknown quantity. I think he's smart. But I have no way of knowing if he knows as much as the hackers do. I have no way of knowing if he knows as much about my network as I do. I don't really know what tools he's going to use. And we're going to take this unknown person, and we're going to have them do an unknown series of things against an unknown network. And the result is going to be that they're going to break into, or try to break into, an unknown set of machines. From that we're going to determine what? I don't know. That's the problem. And you don't know either.

Gary McGraw likes to refer to this as the "badness meter." I prefer to call it the "suckometer." The suckometer is what you get back from the pen test. It's a scale that goes from, on one side, "your network sucks -- we were able to trivially walk in and break into everything," to, on the other side, "you don't know." So between "you don't know" and "everything sucks," you're left with what? Nothing particularly valuable.

The value of being able to say "everything sucks" is that you can go into a manager who's got his head in the sand and say, "pull your head out of the sand. Our penetration testers were able to walk into our critical systems and a hacker could too. So it's time to do something about it."

But the converse is not true. If the pen testers are not able to trivially walk into the systems, it doesn't mean your network is secure. It simply means that those pen testers weren't able to walk into your systems trivially. The other problem with pen testing is that it puts you in an adversarial condition against your own pen testers. You've paid these guys a lot of money on the assumption that they know how to hack into systems. If they fail to hack into your systems, they don't look good. So they have to succeed, even if it means that they have to cheat in horrible ways. And if you read the annals of pen testing literature, you'll find lots and lots of instances where pen testers have bumped up against security at some client site and had to resort to cheating tactics like leaving USB key fob drives on the sidewalk outside of the office with an autorun program configured to install a rootkit. That, I suppose, is a penetration test. It shows that people will plug drives into things that they probably shouldn't. But for that matter, where does that stop? If you want to prove that people can be fooled eventually, why don't you simply show up with a team of people with machine guns and demand passwords? It proves a certain point: ooh, our penetration test succeeded.

So I think penetration testing has some serious problems.

Let's talk about another serious problem. The other problem is security surveys. Now, my degree is in psychology, and as part of graduating, I had to sit through several semesters of statistics. It's something that's absolutely necessary as part of your psychology program at most major universities. One of the first things that we learned in statistics was this notion of what is called a self selected sample. A self selected sample is a circumstance in which you push out a poll. And you say, if you're stuck on a bus and you've got nothing else to do, then fill out this poll. We'd love to know what your thoughts are. And by the way, when you're filling this poll out, you can tell us whether you're a CEO or a janitor or whatever your job title is. Then we take all these results and we pull them together, and we get well, something. We don't know what we got.

The most egregious examples of self selected samples that I've seen have been things like when the National Rifle Association runs a poll asking, "Do you support gun control?" They run the poll in the NRA magazine and are shocked -- shocked, I tell you -- to find that a significant percentage of NRA members are against gun control. Self selected samples are a big problem. If you read the literature around statistical experiments you'll find that there are any number of fascinating cases where people have produced studies that wound up measuring something completely different from what they thought they were actually measuring. Probably the most famous example I can think of was a study that was done early in the days of the web, in which they discovered that a significant percentage of the people who responded to the sample were unemployed housewives who didn't have anything else to do. So they decided to take this survey. Of course, the survey was about IT executive management, and what they were getting was housewives who were just simply checking the "I am the chief security officer" or "I am the CTO" slot.

The other problem with the current state of affairs in doing these security surveys is that they're almost always sponsored by somebody who has got an agenda. You've got things like the CSI/FBI survey. Obviously, CSI is an organization that promotes computer security and has an agenda in promoting computer security. Then you've got the FBI, which is paying for it, which has an agenda in promoting the FBI's budget on cyber security. They want to be able to say this is a big problem so they can get more agents to work on cyber security. It's a jobs program. So this is a big problem. There have been even more egregious examples of these kinds of slanted surveys. And I think if we're going to bring science into computer security, if we're going to really get some reality in this field, we need to be a little bit more careful about these kinds of surveys and what they mean. The other thing to be very aware of is what happens when somebody like me pokes at these surveys and says, "that's a self selected sample. It's got absolutely no validity." The usual response from the survey practitioners is, "Yeah, we know we've got sampling bias to worry about. But these numbers are better than nothing." Now if someone ever tells you that b.s. is better than nothing, be careful. OK?

Internet security statistics are bogus. We need to get people to think a little bit more carefully. Because we security practitioners, when we throw out and we accept these bogus statistics, what we're doing is we're playing voodoo. We're being witch doctors. And when we start being witch doctors, we have the potential for being unmasked as such. If we're doing science, if we're really being careful about our numbers and we're actually trying to ground them in reasonable metrics, and someone comes along and says, "I noticed an experimental flaw in how you did that survey."

The right thing to do is to go, "oops. I screwed up." Publish a retraction and try to get it fixed. Not to go, "well, sure, yeah, it's b.s., but we knew it was b.s. But we got our budget for the quarter increased by 10%, which is what we really blew this b.s. at our CEOs for anyway. So let's just not talk about it."

Where I stand is that computer security right now looks more like a cargo cult than anything else. We practitioners have been playing this game for much too long where we say, you know, you need a firewall. And people put a firewall in, and they plug it in backwards, and they leave the thing with all these open rules. But they've got a firewall, so they've accomplished that piece of the mission. Then, well, you need an intrusion detection system. So they put the intrusion detection system on the network and they don't look at the results from it. And then, ooh, you need a penetration test. So the pen tester comes and charges a lot of money and runs a Nessus scan and writes a report. The intrusion detection system detects the Nessus scan running and logs a bunch of stuff to its console that nobody looks at. The firewall blocked some of it. And all this stuff just goes in the toilet. It's a complete waste of time. We'll be talking about that. That's going to be the underlying theme for at least the first year, if we make it for a year, of this podcast.

So what do we do? What I'd like to do is outline the basic underlying laws of computer security. They're subjective. But what I'd like to try to do is frame them in terms of defining how the properties of systems behave at the macro level. So let's go on.

Trustworthiness and Trust are Not Connected

The first rule is that trustworthiness and trust are not connected. What I mean by that is that the amount of trust that you place in a system may, or may not, have anything to do with whether or not that system is worthy of that trust. So people will talk about, "oh, that's a trusted operating system." I like to be very careful about the use of the word trust and the use of the word trustworthiness.

For example, Windows is a trusted operating system. Lots of people trust it. Is it trustworthy? Enough said about that.

Now I'm not here to bash on Windows. All the operating systems that are currently being fielded have serious flaws. But the definition I'm using is that a system is "insecure" if it's not worthy of the trust that's placed in it. So if I place an operating system in a position of trust, and it turns out not to have been trustworthy, then it is insecure. Now, conversely, when someone comes along and asks, "is this secure?", the answer is that we don't know, unless we can prove that it's insecure. If a system is insecure, we can prove that it's insecure. We can't prove that it is secure, because proving that it's secure is the equivalent of proving that it's not insecure.

This is one of the problems with pen testing. Pen testing can prove that a system is insecure. You can pen test until hell freezes over. You cannot prove that a system is trustworthy.

Transitive Trust

Second law. Transitive trust is always a property of trust. Transitive trust, in computer security, is the elephant that's been standing in the middle of the room that everyone has been ignoring for a very long time.

Transitive trust is a huge problem, it's a very difficult problem, and the law of transitive trust basically says this:
If A trusts B, and B trusts C, A trusts C. Usually A doesn't know it. Sometimes A knows it.

If I trust you to use my computer, and you trust my computer to use amazon.com, you may not realize it, but you trust me to use your amazon.com account. That would be a loose example of a transitive trust relationship. By extension, as the number of trusted parties involved increases, the trustworthiness of the entire system goes down in relation to the total amount of trusting going on.

That's an important point. Because this principle has profound implications for complexity, and has profound implications for how we build virtually everything in computer security. Because if I'm trusting A, and A is trusting B, I am trusting B. And the more of that kind of trust that is going on, sooner or later you get to the point where everybody winds up trusting B. If you've got ten entities and they all wind up trusting one other entity, all the bad guy has to do is find that one entity that everything trusts. Then once they've taken that over, they've taken over everything.
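To make that concrete, here's a toy sketch in Python -- the parties and the trust relationships are made up -- that computes the transitive closure of a handful of direct trust relationships and prints out the trust that nobody explicitly signed up for.

```python
# A toy model of the transitive trust law: if A trusts B and B trusts C,
# then A effectively trusts C, whether A knows it or not. The parties and
# relationships below are invented for illustration.
from itertools import product

direct_trust = {
    ("alice", "laptop"),       # alice trusts her laptop
    ("laptop", "amazon.com"),  # the laptop trusts amazon.com with stored credentials
    ("bob", "alice"),          # bob lets alice use his account
}

def effective_trust(edges):
    """Transitive closure of the direct trust relationships."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# Print only the trust relationships nobody agreed to directly.
for who, whom in sorted(effective_trust(direct_trust) - direct_trust):
    print(f"{who} trusts {whom} -- whether {who} realizes it or not")
```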

Real examples of this are occurring all over the place in the Internet. A simple example would be DNS, right? Everything trusts DNS to some degree or another. And by the way, if you know anything about DNS, I think you'll agree with me that DNS is not trustworthy. So from a very simple formal standpoint, you could say the Internet is insecure inherently, period.

Security and Convenience are Opposed

Security and convenience are opposed. There are any number of times that you will run into somebody who says, we have a security system that's convenient for the end user. And I submit to you that if someone says that, they're either after your money or they don't understand the problem.

The reason for that is this: convenience means the delegation of trust. When I make something convenient, what I mean is that the computer is going to do something on my behalf, which is a delegation of trust. A simple example is that I'm trusting my login access rights to a .rhosts file. Or I'm trusting my password to an SSH client, or I'm trusting that amazon.com will correctly store my credit card, which, by the way, in all the years I've been buying books on amazon.com, it appears to do. So my trust appears to be well placed. But every place at which you are benefiting from the convenience of an automated system, you're placing your trust in the fact that the automated system is going to work correctly. At the point where someone is able to abuse the trust inherent in the automated system, they are going to be able to do something wrong, like be you or be me or be 10,000 people simultaneously. So by extension, the more convenient the system is, the more trust you are placing in it to act automatically on your behalf. If your browser gives you an icon that you can click that will cause you to go to a web page, that's much more convenient than having to open a browser and type in the correct URL.

It's much, much more convenient, and in return you get phishing scams.

Complexity and Security are Opposed

Complexity and security are opposed. This one's a tough one. This one is going to hurt.

Complexity is an implementation problem. The bigger and more complicated your code is, the less likely that it's going to be secure. And the reason that the bigger and more complicated your code is, the less likely it is to be secure, is because of trust relationships. Subroutines in pieces of software trust each other. Higher level modules in pieces of software trust each other. So if I'm trusting that somebody is storing my data on a network database, that's a trust relationship. The more complex my implementation is in terms of the components that are trusting each other, the more likely it is that one of those components is going to, A, be crucial, and B, be untrustworthy. And then the whole thing becomes insecure at that point.
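Here's a deliberately crude way to see why that matters, as a Python sketch. Assume -- purely for illustration -- that every component the system depends on is independently trustworthy with the same probability, and that the whole is only trustworthy if every trusted component behaves.

```python
# A deliberately crude sketch of why complexity and security are opposed.
# Assumption (for illustration only): each trusted component behaves
# correctly with independent probability p, and the system is trustworthy
# only if every one of its n trusted components behaves.

def system_trustworthiness(p_component: float, n_components: int) -> float:
    return p_component ** n_components

for n in (5, 50, 500):
    whole = system_trustworthiness(0.999, n)
    print(f"{n:4d} components at 99.9% each -> {whole:.1%} for the whole system")
```

The numbers are invented; the shape of the curve is the point. Every component you add, every piece of code you mash in, is another factor multiplying your trustworthiness downward.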

This is one of the reasons why a lot of us old school security practitioners are quite concerned with the new Web 2.0 model of building what they call -- a charming term -- "mash ups," where you take code from all over the place and you mash it together, and the result is an application which is not very well understood by its author. I think the word "train wreck" might be better. This is going to have some serious implications down the road. Complexity in implementation is always going to be likely to make your system less trustworthy over time because of transitive trust.

The implications of this are extreme. One of them is that if you are trying to make your system more secure by adding more mechanisms to it -- trying to make it more trustworthy by adding more mechanisms -- you're doing the equivalent of putting out a fire with gasoline. If you want to make your system more trustworthy, what you actually need to do is remove complexity from it. You need to identify the minimum set of mechanisms that allow the system to do exactly what it is supposed to do and nothing more. At the point where you're trying to build complex systems that have trustworthiness properties, you need to be able to reason about how the trust properties of these components compose together to build a trustworthy whole. That, by the way, is a very difficult problem. A lot of people have spent a lot of time working on it.

Positive Action is More Trust Efficient than Negative Action

Positive action is more trust efficient than negative action. What I mean by trust efficient is that you don't have to get as much stuff right in order to have it work correctly. So positive action is enumerating the things you trust. Negative action is enumerating the things you don't trust.

An example of positive action would be if I listed the 15 applications that are on my laptop. Like PowerPoint, Eudora, Adobe Photoshop, Opera 8, my Macromedia Dreamweaver, blah, blah, blah, blah. It wouldn't be a very long list. I think it would be about 15 or 20 applications of the things that I actually use on my laptop. Negative action would be enumerating all of the 175,000 different viruses and pieces of malcode that I don't want to have run on my laptop. From a standpoint of personal efficiency, I can delegate to an antivirus company to maintain that list of the 150,000 or 200,000 pieces of malware, and I don't have to build that list myself. So from a standpoint of my personal investment in time, it might make more sense to do that. But if what I'm really trying to do is keep my system secure and not have malware, the best thing to do would be to be able to enumerate to my system, this is exactly what I want to have run. If anything else tries to run, please kill it or stop it or pause it, and ask me whether I really want to run that. If we did things that way, a tremendous number of problems would actually disappear. In return for that, we would have a little bit more inconvenience.

One of the things that we need to be able to do in security is think about that tradeoff between convenience and inconvenience that we make constantly, or that we let other people make for us without really thinking about it. By extension of the positive action being more trust efficient, default deny is actually going to always be more effective than default permit. Default deny is the doctrine of, "if I haven't told my firewall to let that thing through, I'm not going to let it through." Default permit is "if I haven't told my firewall that thing is bad, let it through and we'll sort out the consequences on the back end." And of course, the consequences on the back end means that the first new form of attack that I didn't know to tell my firewall to not permit, is going to succeed.
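To make the default deny versus default permit distinction concrete, here's a minimal sketch in Python. The application names are just stand-ins for the short list I described above, and both functions are hypothetical.

```python
# A minimal sketch of positive versus negative action. The allowed list is
# a stand-in for the 15 or 20 applications I actually use; the blocklist is
# the 175,000 signatures somebody else maintains for me.

ALLOWED_APPLICATIONS = {
    "powerpoint.exe",
    "eudora.exe",
    "photoshop.exe",
    "opera.exe",
    "dreamweaver.exe",
}

def default_deny(program: str) -> bool:
    """Positive action: run it only if I've explicitly said I trust it."""
    return program.lower() in ALLOWED_APPLICATIONS

def default_permit(program: str, known_bad: set) -> bool:
    """Negative action: run it unless somebody already told me it's bad."""
    return program.lower() not in known_bad

# The first new piece of malware that nobody has a signature for yet:
print(default_deny("shiny-new-malware.exe"))                     # False -- blocked
print(default_permit("shiny-new-malware.exe", known_bad=set()))  # True -- oops
```

Default deny has to get one short list right. Default permit has to get an ever-growing list right, forever.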

OK. So that's as far as I can get with the first couple of these overall physical laws of computer security.

In order to go further down the path of trying to treat security as something more scientific rather than as an art, we need to start measuring the effectiveness of techniques. In order to measure the effectiveness of techniques, we need to start doing demographic studies, I think. We need to be able to make statements which nobody, to my knowledge, is really trying to make, along the lines of: this control group used an antivirus technology. That control group used white listing technology. They both used them under static configurations that they did not change. They just used a default install or whatever. At the end of a year, this organization had this many spyware incidents and that organization had that many spyware incidents. The effectiveness of this product is X versus Y. The problem is the only people who would have any interest in performing that kind of study at this time would be the vendors of one or the other of those solutions. We really can't trust them to keep a level headed approach. We can't trust them to tell the truth, is what I'm saying. So if you think someone is trying to offer a security solution that violates one of these five laws that I've just offered, they're either ignorant or they're lying, or both.
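Here's the shape of the measurement I'm asking for, sketched in Python. The incident counts are invented -- nobody has actually run this study, which is the complaint.

```python
# A sketch of the kind of controlled comparison described above. The
# counts are invented; the point is the shape of the measurement.

def incident_rate(incidents: int, machines: int) -> float:
    return incidents / machines

antivirus_group = incident_rate(incidents=240, machines=1000)  # made-up numbers
allowlist_group = incident_rate(incidents=30, machines=1000)   # made-up numbers

relative_reduction = 1 - (allowlist_group / antivirus_group)
print(f"Antivirus group: {antivirus_group:.1%} of machines had a spyware incident")
print(f"Allowlist group: {allowlist_group:.1%} of machines had a spyware incident")
print(f"Relative reduction: {relative_reduction:.0%}")
```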

I've actually seen both -- the ignorant and the lying -- in the industry. I've seen people who go, oh, you know, this is using a signature-less intrusion detection system that never has any false positives and never has any false negatives. Well, you can tell right away that that person is either stupid or lying or both. OK. So this is a start. We have these generations of smart people coming along, and they are trying to build increasingly complex systems out of increasingly complex software. And if you accept my arguments about transitive trust being a property of the complexity of your implementation, the more complex your software gets, over time, the worse it's going to be. One of the arguments that I am making implicitly here is that the next generations of software are going to be worse. They'll be cooler. They'll have better 3D graphics and blinky lights and maybe a prettier user interface. But from a security standpoint, they're going to be worse than the current generation. The current generation is pretty damn bad, and the generation before that was not so hot, and the generation before that wasn't great either. So we've got a big problem. We need to take this seriously.

One of the things I like to joke about is the infocalypse. Now, I'll state for the record that these are bogus numbers. I made these up. But in 2020, the population growth rate of humanity crosses the growth curve of Windows system administration.

Give or take 2020. It might happen in 2019. It might happen in 2021. That's the time at which every man, woman, and child on earth over the age of six becomes a Windows systems administrator. Unless we can solve the problem of general purpose system administration, we've got a really serious issue. And I think the issue of general purpose system administration is solving itself slowly. One of the ways we're seeing that is the upsurge in smart appliances and dedicated devices. Things like iPods, which are essentially embedded music players. Cell phones, which are embedded telephony devices that might also have a calendar in them, but don't have a copy of Windows whose software you have to worry about constantly upgrading.

So I'd like to summarize and wrap up here. I think we're past the early stage of computer security. The computer security industry and philosophy has been around for not very long. Since the 1960s or so. We've graduated to the point where we understand what we're doing, but we're not doing it right. The dynamics have made it such that in the security market the money is in offering people these simple, ready-made solutions that don't actually work. It's a lot easier to sell somebody the 175,000 antivirus signatures than it is to tell somebody, "Oh, just pull down this drop list and click off the 15 things that you usually want to run. Then you won't have any spyware problems." A lot of the things I'm talking about here are antithetical to the growth, from a financial standpoint or from an importance standpoint, of the computer security industry.

So that's what I had to say today. Thank you!