Note: I have had several people contact me to point out that execution control does not solve the entirety of Windows' run-time security. I know that. (See here for a balanced discussion of the issues.) The only thing that could "fix" Windows would be for it to not be Windows anymore. Execution control such as is described in this article will not protect against executables that are subverted through downloaded code (buffer overruns, etc.) or scripted attacks running in one of Windows' many too-powerful interpreters. In fact, there are so many effective attack vectors into Windows that it may be pointless to try; on the other hand - the idea of preventing new stuff from downloading and executing has some value. There is a very good resource here comparing various host-based prevent/block whitelist/blacklist agents.
For a number of years - about twenty - I've been saying that "default permit" security is stupid. Basically, you're adopting the approach that "everything is allowed" and then trying to identify the things that are known to be dangerous, in order to block them. We've seen this approach used in virtually every area of computer security, and it has been a failure every time. Before firewalls became popular, a lot of organizations used router filters at the edges of their networks to block "bad" applications like rsh - but eventually that approach gave way to firewalls with a "default deny" policy: there were too many "bad" applications to enumerate. Intrusion "prevention" systems take the "default permit" approach, as well. The final bell hasn't rung for them (yet) but if we look a little bit into the future it doesn't take a rocket scientist to see what the end-game looks like. It looks like antivirus. Antivirus is the absolute pinnacle of "default permit" in action and it's been sliding down a slippery slope of disaster since the 1980s.
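The asymmetry between the two policies is easy to see in code. Here's a minimal sketch (the program names and lists are invented for illustration): under "default permit" an attack you've never seen sails straight through, while under "default deny" it fails by default.

```python
# Hypothetical sketch contrasting "default permit" and "default deny".
def default_permit(program, blacklist):
    # Everything is allowed unless we already know it's bad --
    # a novel attack is, by definition, not on the list.
    return program not in blacklist

def default_deny(program, whitelist):
    # Nothing runs unless it was explicitly approved --
    # a novel attack fails by default.
    return program in whitelist

blacklist = {"rsh", "known-worm.exe"}
whitelist = {"outlook.exe", "excel.exe", "photoshop.exe"}

assert default_permit("novel-malware.exe", blacklist) is True   # gets through
assert default_deny("novel-malware.exe", whitelist) is False    # blocked
```

Note that the blacklist has to grow forever (183,817,283 bad things and counting), while the whitelist stays at "the 13 or so things I actually run."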
The security model of execution on Windows has never been conducive to security or reliability. "I know!" they say, "let's let anyone run anything that they can be tricked into putting a mouse-pointer on!" and - to make matters worse - "How's this for a cool idea? Let's allow any process to load a device driver into the operating system, where it has complete access to, well, everything." And people find it surprising that such an execution environment is full of hackers, spyware, viruses, and trojan horses? Frankly, it's a miracle that it works at all.

Enter antivirus. The antivirus security model is to let everyone run anything they like, while attempting to quickly identify every known virus, trojan, or whatever. The obvious problem with this is hardly worth mentioning - a custom attack aimed at a single victim will always get through(1) - but there are more subtle problems with the idea. I had dinner a few months ago with Gene Kaspersky (of Kaspersky Labs, an antivirus product company) and he was telling some amazing stories about how the malware writers are developing "noise" viruses to release before they release the "real" ones - in hopes of overloading the antivirus companies' ability to identify them and develop signatures. Now, I'm sure Kaspersky's staff are smart and fast, but there's no way that meat, bone, and neurons can work as fast as a computer. They are doomed to lose. The antivirus writers have hundreds of thousands of signatures in their databases, which must be checked before Windows can be allowed to begin executing your program. Why not just take all that expensive processing power you paid for and do something stupid with it like look through a database every time you start a program? That's going to make your machine feel really snappy!
Back in the "old days" of the mainframes, the execution model was a lot different. If you had a piece of code you wanted to run on the system, you got permission from the administrators to install and run it. Or, if you were considered a reasonably clueful user, you installed it yourself in your own area of the system and ran it there - where you could only hurt your own stuff. Not surprisingly, they didn't have problems with viruses. The administrators of production systems just installed the applications that were needed, and authorized, and it wasn't possible for Joe in Sales to install some spyware-infested file-sharing application.
For the last couple of years I've been looking for a neat solution to the whole problem. Some kind of execution control that would take me back to the heady days of the mainframe: I want to be the only person who can authorize what is or is not allowed to run on my computer. It seems pretty simple, really. Rather than listing the 183,817,283 bad things I don't want to run on my system, I want to list the 13 or so things I want to allow. How hard is that?
Then, in 2005, something happened that made me decide it was time to get serious. Sony released music CDs containing a trojan horse program to enforce their DRM.(2) That was stupid enough, but what was unforgivable was the antivirus companies' reactions. Basically, they didn't want to do their job and defend my system - they pimped my computer out to Sony because, well, Sony is bigger and more important than their customers. In 2005 I let my subscription to Norton Antivirus run out, and I've been experimenting with alternatives ever since. Besides, I really don't like the way the antivirus guys used to have their hand in my pocket every year. It's part of the changing model of software sales - and it's only going to get worse - but I like the idea of being able to set my computer up and have it pretty much work without my having to throw money at it until I'm good and ready. In a mood of extreme curmudgeonliness I swore a mighty oath that I was done with antivirus for good!
Because Windows is such a crufty little toy operating system, I wind up reinstalling it about every year. On my laptop, it's sometimes as often as every 6 months, depending on what networks I plug it into, what buggy code "auto-patches" itself onto my machine, etc. Now, whenever I reinstall my laptop, I use it as a testing platform for execution control techniques.
So far, I have tried three things, and the first two were complete wash-outs.
Much to my surprise, Windows XP Pro has execution control capabilities built into it! I dished out my money for a copy of "Pro" instead of "Home" and eagerly installed it. Before I went to the trouble of installing all my applications, I started playing with the control capabilities.
If you open Control Panel, then stumble down to "Administrative tools" and (where else) "Local Security Settings" there is a sub-menu hidden in there for execution control.
Just the amount of time I had to spend digging around to find it was enough to make the hair on the back of my neck stand up. After about a half hour of fiddling around, I managed to (I thought) tell Windows that it was allowed to execute anything from C:\Windows and C:\Program Files\Photoshop and everything else was supposed to be denied.
Makes you long for the days when Microsoft used to steal user interface ideas from Apple, huh? That's a "deny" rule that I just installed on E: - my "temp" directory, to block execution of files from that directory.
I was a bit heartened to see that you can "export" the additional rules list into a comma-delimited file. "Aha!" thinks I - I will just write a perl script that makes a list of all the executables on my system, then I'll review the list and punch it in. Um. No. The retards at Microsoft give you an "export" option, but no "import" option.
Finally I just bit the bullet and turned on a "deny all" rule for everything except C:\Windows and places I knew I'd installed software. Then, when I tried to open Photoshop, it failed anyhow. When I tried to open Control Panel to fix the problem, that failed also. I booted my machine with OpenBSD and zeroized the hard drive with 'dd(1)' and reinstalled Windows. Maybe someone with the patience of a saint could get this mass of bollocks to work correctly. That's not me.
Conclusion: Microsoft must have let a couple of summer interns write their security interface and execution control system. It makes sense; security never has been important in Windows.
I was bemoaning my Windows execution control experience when someone mentioned two packages intended to manage white/blacklisting: PrevX and Faronics' AntiExecutable. I priced them both out and thought that, for something so simple, Faronics was over-priced. I bought PrevX and installed it.
For a year, I had PrevX on my laptop, and it appeared to function flawlessly. Whenever I installed an application I usually got a popup or two asking if it was OK to modify thus-and-such registry entry or whatever. I liked the fact that it was unobtrusive and appeared to be working. One aspect I particularly didn't like was that it periodically needed to update its rules. "Huh?" Why would a program whitelister need rules? It seems that PrevX keeps a database of good checksums for well-known applications, and takes a white/black/grey-list approach. So far, so good.
I would normally have a nice screenshot of PrevX's user interface here, to keep you from nodding off while you were reading. It's a very pretty user interface. Unfortunately, PrevX has been banished from my computers. Why?
Suddenly, about 2 months ago, something rather odd happened. PrevX stopped working properly. Whenever it tried to update, it failed and gave me a cryptic error suggesting I contact support. When I emailed them I got back a terse response that I should purchase the most recent version of the software. Their site had a link to purchase the new version, but I discovered to my horror that it's now being sold as an annual subscription. In other words, the folks at PrevX have realized that their business model won't work unless they've got their hand in your pocket every year, just like Symantec, and McAfee, and the other antivirus companies. That was when I got a wave of paranoia, anyhow. What would keep PrevX from pushing out a white-list entry for Sony's next DRM trojan horse, if Sony asked them nicely? And, if so, how would I know?
The icing on the cake came last week. I was plugged into a public network for a number of hours and somehow my machine got taken over by something. I started getting weird errors out of lsass and extreme flakies where the operating system would hang when I took it out of hibernation. I booted my machine with OpenBSD and zeroized the hard drive with 'dd(1)' and reinstalled Windows.
Conclusion: I hate software companies that sell me something, then make it stop working and try to charge me more money to fix it. PrevX: you suck. I want my $19 back!
Last week, I reloaded my laptop and went googling for "free execution control windows xp whitelist" and invested a few hours in researching. I finally decided on ExeLockdown from Horizon DataSys, Inc. From all appearances, it is perfect for what I want to accomplish. It's an extremely small application, it doesn't rely on a knowledge-base from anyplace else, and it's got a simple user interface. I spent an hour or two testing it and immediately installed it on all of the Windows machines on my network.
(That's it! The entire user interface! Real programmers put everything on one panel.)
There are a few things about this program that immediately made me fall in love with it. First off, when you're running as an unprivileged user, you can't update its rules base by just clicking "OK" to something. But, as an administrator, you can choose to "allow" or "allow and add to permitted program list". When you try to execute something that has been blacklisted it gives you a pop-up:
And if you choose to allow the execution to continue, it asks for its management password and gives you the choice of adding the executable to the whitelist.
This is wonderful because it gives me the capability I have been looking for since I started this process: I can configure the machine and tell it "default deny everything" and then, as I run applications, I can dynamically add them to the authorized list or allow them to run once. That nirvana does not appear to be 100% practical - when I tried to ask Exe Lockdown to deny C:\Windows it warned me that Windows would "most likely not start correctly." I suspect that's a nice way of saying "crash and burn." I have, however, been able to blacklist C:\Windows\temp (Praise be!) and my other temporary directories. A good way of identifying where temporary files get downloaded is to ask Windows to do a search for all files that are newer than whenever you installed your machine, on a system that has been up and running for about a week.
Obviously, there is a potential problem if I download a piece of code from someplace, and authorize it, and it contains spyware. I will not be able to prevent the embedded spyware from running, though I'll presumably find out about it if the spyware starts inviting any of its buddies into the party. If my instance of Windows gets compromised by a worm or a network attack (could that actually happen through Windows' firewall?! Say it isn't so!) the attack will be able to do bad things to my system as long as it is executing code from within the authorized Windows directories.
There's an option on Exe Lockdown that I have not yet played with extensively but it's very promising. You are allowed to 'scan the hard disk for executables' and it will automatically add them to the white list. At that point, you can go through and delete anything that looks undesirable. I allowed ExeLockdown to populate the application list but the result was a bit mind-blowing. Does Windows really need all that crap in order to run? Probably. Yeech! It's horrifying. I cannot give Exe Lockdown a 100% rating because the allow list and deny list are not exportable for edit/save/restore with an external program. It merely scores 99/100.
Conclusion: So far, I am thrilled. I expect that next time I put my laptop in harm's way I'll get a chance to see how this thing works. I've already clicked on a few naughty Email attachments and - oh, joy! They didn't work!
Update: The producers of exe-lockdown have decided to pull the old version and replace it with a new version that includes "enterprise support" (i.e., it costs money). So... so much for that. I've talked to a couple of Windows gurus who say that they have successfully used Windows' execution control and used enterprise management tools to push out settings, etc. - and are very happy with the results. I may give the Windows approach another shot. Meanwhile, I'm still hoping for someone to offer a decent desktop operating system that has a ring execution model and a runtime model that wasn't designed by summer interns.
I've always thought that a distributed execution control tool for businesses would be extremely useful. Every time I've brought it up, the push-back has been immediate and forceful: "It would be unmanageable!" "Our users would get around it!" etc. I find that fascinating, frankly. Manageability for such a system wouldn't be a hard problem; simply group systems into classes and allow various load-outs on various classes of systems, then set up a feedback loop in which the console either proactively or retroactively approves various executables for a given class. None of this is rocket science - in fact the old Sygate firewall did pretty much exactly the same thing with its policy tables and it was a very nice conceptual design.

I think the source of the push-back is, frankly, that a lot of people treat their corporate-supplied computers as personal systems: they want to be able to install whatever junk they think would be fun to play with instead of working, keep their local porn-cache, etc. The idea that the corporation might want detailed control over what they are doing is anathema. From a corporate standpoint, I look at most large enterprises and think that their CIO/CTOs aren't doing their jobs: millions of dollars are being spent on IT and there's no accounting for how it's being used? There is ample evidence that a large number of security incidents occur because someone installed something that they shouldn't have. The simple answer is, "Hey, if your job is to run Outlook, Excel and Powerpoint - here's your computer that does that. Go run Diablo II on your home computer, on your own time."
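The class-based scheme described above really is that simple. Here's a toy sketch (the class names, program names, and the "pending approvals" queue are all invented for illustration): each machine class gets a load-out, and anything outside the load-out is denied and queued for the console operator to approve - retroactively - for the whole class.

```python
# Toy policy table: machine classes mapped to their allowed load-outs.
POLICY = {
    "sales-desktop": {"outlook.exe", "excel.exe", "powerpoint.exe"},
    "developer":     {"outlook.exe", "excel.exe", "cl.exe", "devenv.exe"},
}

def is_authorized(machine_class, program, pending_approvals):
    """Default deny: unknown programs are blocked and queued for review."""
    allowed = POLICY.get(machine_class, set())
    if program in allowed:
        return True
    # The feedback loop: log the attempt so the console operator can
    # approve (or refuse) the program for this entire class of machines.
    pending_approvals.append((machine_class, program))
    return False
```

One approval at the console, and every "sales-desktop" in the company gets the new entry - which is why "it would be unmanageable" never struck me as a serious objection.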
My suspicion is that someday that's where it's all going to end up. If we actually want secure systems, that's the most obvious route. The promise of inexpensive computing at the desktop is what caused computing to move away from the mainframe-and-attached terminal model and toward distributed computing. Really, what that amounted to was a complete loss of control over system administration and the runtime environment. Taking that control back is an obvious and necessary step. Does it bother me? No! Because I can buy my own computer and do whatever I want with it on my own time.
Obviously, this is not a 'complete' solution. Someone could write an exploit that pops a buffer overflow, then modifies an executable image that is authorized by the execution control. To really fix the problem, we'd need all kinds of sensible things that Windows simply doesn't have - like stack/data separation, read-only execute directories, etc. You know, stuff like the real manly operating systems have had since the 1970's...
mjr.
It's misty and rainy and beautiful here at: Bellwether Farm, Morrisdale, PA
Jan 5, 2007
(1) The July, 2006 attacks against the US State Department, which resulted in massive and undetected penetration, were carried out through a custom version of an exploit of a Microsoft Office flaw. In spite of the fact that the State Department computers were running antivirus software, the attack succeeded. Worse, since the attack was transmitting data out through the firewall using a novel method wrapped in Secure Sockets Layer (SSL), the State Department's Intrusion Detection Systems (IDS) couldn't detect it, either. A custom attack is like having a bullet shot through your head by someone with a sniper rifle. You're dead before you have a chance to update your security posture.
(2) One of the best discussions of this issue, from the security perspective, is Bruce Schneier's. I'm not going to bother repeating any of it here. Read it yourself.