The Six Dumbest Ideas in Computer Security
Site: ranum.com

There's lots of innovation going on in security - we're inundated with a steady stream of new stuff and it all sounds like it works just great. Every couple of months I'm invited to a new computer security conference, or I'm asked to write a foreword for a new computer security book. And, thanks to the fact that it's a topic of public concern and a "safe issue" for politicians, we can expect a flood of computer security-related legislation from lawmakers. So: computer security is definitely still a "hot topic." But why are we spending all this time and money and still having problems?

Let me introduce you to the six dumbest ideas in computer security. What are they? They're the anti-good ideas. They're the braindamage that makes your $100,000 ASIC-based turbo-stateful packet-mulching firewall transparent to hackers. Where do anti-good ideas come from? They come from misguided attempts to do the impossible - which is another way of saying "trying to ignore reality." Frequently those misguided attempts are sincere efforts by well-meaning people or companies who just don't fully understand the situation, but other times it's just a bunch of savvy entrepreneurs with a well-marketed piece of junk they're selling to make a fast buck. In either case, these dumb ideas are the fundamental reason(s) why all that money you spend on information security is going to be wasted, unless you somehow manage to avoid them.

For your convenience, I've listed the dumb ideas in descending order from the most-frequently-seen. If you can avoid falling into the trap of the first three, you're among the few true computer security elite.

#1) Default Permit

This dumb idea crops up in a lot of different forms; it's incredibly persistent and difficult to eradicate. Why? Because it's so attractive. Systems based on "Default Permit" are the computer security equivalent of empty calories: tasty, yet fattening.

The most recognizable form in which the "Default Permit" dumb idea manifests itself is in firewall rules. Back in the very early days of computer security, network managers would set up an internet connection and decide to secure it by turning off incoming telnet, incoming rlogin, and incoming FTP. Everything else was allowed through, hence the name "Default Permit." This put the security practitioner in an endless arms race with the hackers. Suppose a new vulnerability is found in a service that is not blocked - now the administrators need to decide whether to deny it or not, hopefully before they get hacked. A lot of organizations adopted "Default Permit" in the early 1990's and convinced themselves it was OK because "hackers will never bother to come after us." The 1990's, with the advent of worms, should have killed off "Default Permit" forever, but it didn't. In fact, most networks today are still built around the notion of an open core with no segmentation. That's "Default Permit."
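To make the contrast concrete, here is a minimal sketch of what a "Default Deny" policy decision amounts to - written in Python for illustration rather than in any real firewall's rule language, with a hypothetical service list that, in practice, comes from knowing what your network actually offers:

    # "Default Deny" in miniature: enumerate the services you intend to
    # offer; everything else is dropped. The entries are hypothetical.
    ALLOWED_SERVICES = {
        ("tcp", 25),    # mail to the mail gateway
        ("tcp", 80),    # the public web server
        ("tcp", 443),   # the public web server, over TLS
    }

    def policy(proto: str, dst_port: int) -> str:
        """Accept only traffic that was explicitly permitted."""
        if (proto, dst_port) in ALLOWED_SERVICES:
            return "accept"
        return "drop"  # the default: what you didn't think about is denied

    assert policy("tcp", 443) == "accept"
    assert policy("tcp", 6667) == "drop"  # unknown service: denied by default

A "Default Permit" policy inverts those last two lines - it enumerates what to block and accepts everything else - and that inversion is exactly where the arms race begins.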
Another place where "Default Permit" crops up is in how we typically approach code execution on our systems. The default is to permit anything on your machine to execute if you click on it, unless its execution is denied by something like an antivirus program or a spyware blocker. If you think about that for a few seconds, you'll realize what a dumb idea that is. On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don't understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me. That's "Default Permit."

A few years ago I worked on analyzing a website's security posture as part of an E-banking security project. The website had a load-balancer in front of it that was capable of re-vectoring traffic by URL, and my client wanted to use the load-balancer to deflect worms and hackers by re-vectoring attacks to a black hole address. Re-vectoring attacks would have meant adopting a policy of "Default Permit" (i.e.: if it's not a known attack, let it through), but instead I talked them into adopting the opposite approach. The load-balancer was configured to re-vector any traffic not matching a complete list of correctly-structured URLs to a server that serves up image data and 404 pages and runs a special locked-down configuration. Not surprisingly, that site has withstood the test of time quite well.

One clear symptom that you've got a case of "Default Permit" is when you find yourself in an arms race with the hackers. It means that you've put yourself in a situation where what you don't know can hurt you, and you'll be doomed to playing keep-ahead/catch-up.

The opposite of "Default Permit" is "Default Deny," and it is a really good idea. It takes dedication, thought, and understanding to implement a "Default Deny" policy, which is why it is so seldom done. It's not that much harder to do than "Default Permit," but you'll sleep much better at night.

#2) Enumerating Badness

Back in the early days of computer security, there were only a relatively small number of well-known security holes. That had a lot to do with the widespread adoption of "Default Permit" because, when there were only 15 well-known ways to hack into a network, it was possible to individually examine and think about those 15 attack vectors and block them. So security practitioners got into the habit of "Enumerating Badness" - listing all the bad things that we know about. Once you list all the badness, then you can put things in place to detect it, or block it.

[Figure 1: The "Badness Gap"]

Why is "Enumerating Badness" a dumb idea? It's a dumb idea because sometime around 1992 the amount of Badness on the Internet began to vastly outweigh the amount of Goodness. For every harmless, legitimate application, there are dozens or hundreds of pieces of malware, worm tests, exploits, or viral code. Examine a typical antivirus package and you'll see it knows about 75,000+ viruses that might infect your machine. Compare that to the legitimate 30 or so apps that I've installed on my machine, and you can see it's rather dumb to try to track 75,000 pieces of Badness when even a simpleton could track 30 pieces of Goodness.
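The E-banking load-balancer policy described above can be sketched the same way. This is an illustration of the idea, not the actual product configuration; the URL pattern and server names are invented:

    import re

    # Hypothetical allowlist of correctly-structured URLs; the real list
    # would be built by enumerating the application's actual pages.
    GOOD_URL = re.compile(r"^/(index\.html|login|account/\d+/statement)$")

    def route(url: str) -> str:
        """Known-good URLs reach the application; everything else is
        re-vectored to the locked-down image-and-404 server."""
        if GOOD_URL.match(url):
            return "app-server"
        return "blackhole-server"

    assert route("/login") == "app-server"
    assert route("/cgi-bin/../../etc/passwd") == "blackhole-server"

Note that the only question the code ever asks is "do I recognize this as good?" - never "do I recognize this as an attack?"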
In fact, if I were to simply track the 30 pieces of Goodness on my machine, and allow nothing else to run, I would have simultaneously solved the following problems:

- Spyware
- Viruses
- Remote Control Trojans
- Exploits that involve executing pre-installed code that you don't use regularly

Thanks to all the marketing hype around disclosing and announcing vulnerabilities, there are (according to some industry analysts) between 200 and 700 new pieces of Badness hitting the Internet every month. Not only is "Enumerating Badness" a dumb idea, it's gotten dumber during the few minutes of your time you've bequeathed me by reading this article.

Now, your typical IT executive, when I discuss this concept with him or her, will stand up and say something like, "That sounds great, but our enterprise network is really complicated. Knowing about all the different apps that we rely on would be impossible! What you're saying sounds reasonable until you think about it and realize how absurd it is!" To which I respond, "How can you call yourself a 'Chief Technology Officer' if you have no idea what your technology is doing?" A CTO isn't going to know detail about every application on the network, but if you haven't got at least a vague idea of what's going on, it's impossible to do capacity planning, disaster planning, security planning, or virtually any of the things in a CTO's charter.

In 1994 I wrote a firewall product that needed some system log analysis routines that would alert the administrator in case some kind of unexpected condition was detected. The first version used "Enumerating Badness" (I've been dumb, too) but the second version used what I termed "Artificial Ignorance" - a process whereby you throw away the log entries you know aren't interesting. If there's anything left after you've thrown away the stuff you know isn't interesting, then the leftovers must be interesting. This approach worked amazingly well, and detected a number of very interesting operational conditions and errors that it simply never would have occurred to me to look for.

"Enumerating Badness" is the idea behind a huge number of security products and systems, from anti-virus to intrusion detection, intrusion prevention, application security, and "deep packet inspection" firewalls. What these programs and devices do is outsource your process of knowing what's good. Instead of you taking the time to list the 30 or so legitimate things you need to do, it's easier to pay $29.95/year to someone else who will try to maintain an exhaustive list of all the evil in the world. Except, unfortunately, your badness expert will get $29.95/year for the antivirus list, another $29.95/year for the spyware list, and you'll buy a $19.95 "personal firewall" that has application control for network applications. By the time you're done paying other people to enumerate all the malware your system could come in contact with, you'll have more than doubled the cost of your "inexpensive" desktop operating system.

One clear symptom that you have a case of "Enumerating Badness" is that you've got a system or software that needs signature updates on a regular basis, or a system that lets past a new worm that it hasn't seen before. The cure for "Enumerating Badness" is, of course, "Enumerating Goodness." Amazingly, there is virtually no support in operating systems for such software-level controls. I've tried using Windows XP Pro's Program Execution Control, but it's oriented toward "Enumerating Badness" and is, itself, a dumb implementation of a dumb idea.
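What a non-dumb implementation of execution control would look like is easy to sketch. This is a hypothetical illustration - as noted above, operating systems offer virtually no such control - and the digest set stands in for the 30 or so programs you actually use:

    import hashlib

    # "Enumerating Goodness": SHA-256 digests of the programs you have
    # decided to trust. The entry below is a placeholder.
    KNOWN_GOOD = {
        "placeholder-digest-of-a-program-you-actually-use",
    }

    def may_execute(path: str) -> bool:
        """Default Deny for code: run it only if it is on the good list."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in KNOWN_GOOD  # anything unrecognized is refused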
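Similarly, the "Artificial Ignorance" log analysis described above is only a few lines of code. The patterns here are placeholder examples; the real work is accumulating, over time, the list of entries you know are uninteresting on your own systems:

    import re
    import sys

    # Patterns for log lines we KNOW are boring. The list grows as you
    # inspect the leftovers and decide more of them are routine.
    BORING = [
        re.compile(r"sendmail\[\d+\]: .* stat=Sent"),
        re.compile(r"CRON\[\d+\]: .* session (opened|closed)"),
    ]

    for line in sys.stdin:
        if not any(p.search(line) for p in BORING):
            print(line, end="")  # whatever is left over must be interesting

Run it as a filter over your logs; whatever it prints is, by construction, something you haven't yet decided is uninteresting.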
In a sense, "Enumerating Badness" is a special dumb-case of "Default Permit" - our #1 dumb computer security idea. But it's so prevalent that it's in a class by itself. #3) Penetrate and Patch There's an old saying, "You cannot make a silk purse out of a sow's ear." It's pretty much true, unless you wind up using so much silk to patch the sow's ear that eventually the sow's ear is completely replaced with silk. Unfortunately, when buggy software is fixed it is almost always fixed through the addition of new code, rather than the removal of old bits of sow's ear. "Penetrate and Patch" is a dumb idea best expressed in the BASIC programming language: 10 GOSUB LOOK_FOR_HOLES 20 IF HOLE_FOUND = FALSE THEN GOTO 50 30 GOSUB FIX_HOLE 40 GOTO 10 50 GOSUB CONGRATULATE_SELF 60 GOSUB GET_HACKED_EVENTUALLY_ANYWAY 70 GOTO 10 In other words, you attack your firewall/software/website/whatever from the outside, identify a flaw in it, fix the flaw, and then go back to looking. One of my programmer buddies refers to this process as "turd polishing" because, as he says, it doesn't make your code any less smelly in the long run but management might enjoy its improved, shiny, appearance in the short term. In other words, the problem with "Penetrate and Patch" is not that it makes your code/implementation/system better by design , rather it merely makes it toughened by trial and error . Richard Feynman's " Personal Observations on the Reliability of the Space Shuttle " used to be required reading for the software engineers that I hired. It contains some profound thoughts on expectation of reliability and how it is achieved in complex systems. In a nutshell its meaning to programmers is: "Unless your system was supposed to be hackable then it shouldn't be hackable." "Penetrate and Patch" crops up all over the place, and is the primary dumb idea behind the current fad (which has been going on for about 10 years) of vulnerability disclosure and patch updates. The premise of the "vulnerability researchers" is that they are helping the community by finding holes in software and getting them fixed before the hackers find them and exploit them. The premise of the vendors is that they are doing the right thing by pushing out patches to fix the bugs before the hackers and worm-writers can act upon them. Both parties, in this scenario, are being dumb because if the vendors were writing code that had been designed to be secure and reliable then vulnerability discovery would be a tedious and unrewarding game, indeed