Hi. Someone recently passed along your essay (I don't subscribe to the firewalls list). There were a couple of comments I wanted to make.
1) You quoted Ian Goldberg's 1995 article, in which he stated that buffer overflows were "pretty new" in 1988. This is not true. Buffer overflows were used to compromise security on systems in the 1960s and 70s. An early paper explained how Robert Morris's father broke into an early version of Unix by overflowing the password buffer in the login program, many years before 1988 (I'm sure the younger Robert was familiar with that paper, too). Many earlier papers also described buffer overflows. Unfortunately, we have a lot of people working in security, with various levels of claimed expertise, who have little or no knowledge of the history or underlying principles of what they are doing. (And no, that is not intended as any suggestion about Mr. Goldberg -- I do not know him, nor do I know his background. I'm reacting to the quote and to my knowledge of other "experts" in the field.)

2) The comments I wrote in 1988 applied to the Internet arena of 1988. There were no significant viruses, worms, rootkits, or the like. There was no WWW. There was no history of widespread computer abuse. The majority of systems were running a Unix variant. Pretty much every system administrator of the time had a college degree, usually in computing or a related science. That was the context of my comments at the time that it was not appropriate to blame the administrators for what happened. I still believe that, in that context. I don't believe it was appropriate to blame the OS authors, either, although they bore some responsibility for their sloppy coding.

Now fast-forward to today's computing arena. There are about 65,000 viruses and worms (over 95% of them for Microsoft products). There are literally hundreds of rootkits, DoS kits, and break-in tools available on the net. The WWW reaches hundreds of millions of people. We have a decade-plus history of significant, public break-ins.
The majority of systems in the world are running a very buggy, bloated OS descended from a standalone PC monitor program. Typical system administrators (and many security administrators) have no training in computing, let alone security. If the Morris worm were to occur today -- and, as you noted, variants have been occurring in the guise of CodeRed, et al. -- I would place a large amount of blame on the vendors for doing a shoddy job of producing safer software, and a significant amount of blame on the administrators of the affected sites for not taking better precautions in a known dangerous environment. But in both cases, the primary blame goes to the people who produce and employ malware. There is no excuse for doing this, and they are quite obviously the primary cause of the damage.

However, I agree with you that we need to re-evaluate the culpability of the software authors, the vendors, and the administrators. I have been making exactly this point in presentations and classes for at least the last half-dozen years. It hasn't been well received in many venues until very recently.

3) Your example of the arson victim isn't quite right. In most cases, an arson victim is not criminally liable unless she did something stupid and criminal to deserve it (e.g., she chained some fire escapes shut). Instead, the victim may not get full payment from an insurance policy, and that is the penalty for not keeping current with the necessary protections. This is similar to what happens when your car is stolen -- you are not charged in criminal court if you left the key in the ignition, but you may not get full payment for the car from your insurance company, or your future premiums could be doubled.

Imagine Joe Clueless is running a Windows box with no patches and no firewall, has no training in security, and still hooks his system up to the network. If his system is hacked (and it will be, perhaps in a matter of hours), he is still a victim.
Whoever breaks into his system, or whoever authored the virus that corrupts his disk, is the person who committed the crime and should be prosecuted. But is Joe blameless? Under the law in most Western nations, he is probably not criminally liable. He may be stupid, but that isn't a crime. He may be naive, but that isn't a crime either. If he has insurance, he may not get a full (or any) payout. Or if he has no insurance, he pays another kind of penalty -- he loses his data. So he does pay a price. And if Joe has a good lawyer who is persistent and can convince a jury that the vendor was negligent, then maybe the vendor will pay, too.

A better scenario would be for "hack" insurance to become a standard business practice. Once the actuarial data comes in, the companies set a standard premium. They may give a 30% discount if there is a firewall, a 15% discount if the system is based on FreeBSD+Apache, and a 75% discount if the security administrator has a CS degree from Purdue. :-) Meanwhile, the same company may set a 25% penalty (extra premium) if the system is Windows-based, a 200% penalty if it is running IIS, and a clause that there is no payout unless there is evidence that all patches were present and timely.

Under this kind of scenario, market pressures would tend to lead to better practices by the vendors *and* the users. That would be a better solution than the government regulation you suggest, although I am not hopeful it will happen any time soon.

You might find this of interest: <http://www.cstb.org/web/pub_cybersecurity>. And here are some comments I have made before Congress about the shortage of security professionals: <http://www.cerias.purdue.edu/homes/spaf/misc/edu.pdf> (1996) and <http://www.cerias.purdue.edu/homes/spaf/house01.pdf> (2001).

Cheers,
--spaf

_______________________________________________
Firewalls mailing list
[EMAIL PROTECTED]
http://lists.gnac.net/mailman/listinfo/firewalls