Re: The Morris worm to Nimda, how little we've learned or gained (fwd)

2002-01-18 Thread Paul Robertson

From: Paul D. Robertson [EMAIL PROTECTED]
To: Ron DuFresne [EMAIL PROTECTED]
Cc: Gene Spafford [EMAIL PROTECTED],
[EMAIL PROTECTED]
Subject: Re: The Morris worm to Nimda, how little we've learned or gained (fwd)

(resending due to firewalls list breakage)

On Tue, 15 Jan 2002, Ron DuFresne wrote:

[snip]

 Yet the fact that overflows were fairly common knowledge, as were the
 attack vectors the worm used to enter systems and spread from one
 system/network to another, does not genuinely allow any of the admins of
 the time to hide behind stupidity or cluelessness, especially in light of
 their education and training.  Sadly, it appears it took an event of this
 significance to raise eyebrows in the systems security realm.

While overflows are a genuinely solvable problem through tools or
education, I think it's only fair to point out that physical security
vectors have gone unchanged for far longer than computers have been
around, and the same old attacks still work there too.  Risk tolerance is
both a good and a bad thing.

  If the Morris worm were to occur today -- and, as you noted, variants
  have been occurring in the guise of CodeRed, et al. -- I would place
  a large amount of blame with the vendors for doing a shoddy job of
  producing safer software, and a significant amount of blame on the
  administrators of the affected sites for not taking better
  precautions in a known dangerous environment.

I'm pretty sure that none of the major worm outbreaks of the last few
years have attacked a vector for which the vendor hadn't already produced
a patch.  Certainly not NIMDA, Code Red, Adore, 1i0n, Ramen, Poisonbox or
the sadmind worm.  I'm also pretty sure that the BIND-based ones had the
shortest time between patch release and widespread infection.

While there is certainly some culpability for having produced
crappy software in the first place, more administrators need to be
hammered for not keeping up to date, and more managers need to be hammered
for choosing products that need to be updated so often.

I've always been curious about the rationale behind keeping MS Office as a
product during the time that something like 85% of the malcode written
targeted it.  Heuristic scanners have fixed that risk pretty well, but
there was a time when it was a significant issue.

  But in both cases, the primary blame goes to the people who produce
  and employ malware.   There is no excuse for doing this, and they are
  quite obviously the primary cause of the damage.

I think the thing that most annoyed me in this space was when the Mayor of
Sneek in the Netherlands offered a job to the author of Anna Kournikova,
a kit-generated virus which cost businesses worldwide quite a bit.

 Agreed, as did Marcus Ranum on the firewall wizards list, where this paper
 generated quite a bit of discussion, and a number of side threads
 acknowledged there is plenty of blame and responsibility to go around.
 The point of my paper, sadly, was to highlight the fact that for the most
 part, security in the IT industry has never really progressed beyond the
 point of raised eyebrows, thus the constant circular nature of attack
 and reattack of the same weaknesses and vectors that existed at least as
 far back as 1988.

I've been looking quite a bit recently at parallels to physical security.
The Pentagon, in the spot hit by the airliner, was recently upgraded at a
high monetary cost, with $600 windows and ballistic cloth embedded in the
walls.  It's taken this long to get to stronger walls that still don't
mitigate today's threat completely, but do an astonishingly large amount
to limit collateral damage.  Walls have pretty much always been a
defensive technology, but I don't see many building administrators running
to upgrade.  One of the major IT costs these days is upgrading, and
often the cure is worse than the risk of the disease.  How we go about
solving that problem, I'm not at all sure.

I guess the point I'd like to make is that we should all hardly be
surprised by the lack of progress.  Historically, it's never been a strong
point.  It mostly requires the very thing we all don't want: government
intervention to get things to a better point (building codes, auto safety,
etc. were all driven by regulation and the concern of a few, not the many).

 It's interesting to note that today, again, TruSecure Corp.'s weekly
 SECURITY WIRE DIGEST (VOL. 4, NO. 3, JANUARY 14, 2002) noted how
 Microsoft is attempting to stumble forward in some regard on the poor
 coding issues prevalent in today's top desktop applications and their OSes:

The way I understand it, they're pushing developer education as well as
using tools to try to detect security flaws.  While I'm no MS fan, I'm not
sure that this could be equated to stumbling, and I'm not sure that there's
much else they can do other than fixing the overflow problem in the
compilers and going out of spec for the languages.

Re: The Morris worm to Nimda, how little we've learned or gained (fwd)

2002-01-15 Thread Ron DuFresne


 Hi.   Someone recently passed along your essay (I don't subscribe to 
 the firewalls list).  There were a couple of comments I wanted to 
 make.


Howdy, Mr. Spafford.  I appreciate your comments and acknowledge your
expertise and experience in these matters.  Thank you; you honor me, sir.


 
 1) You quoted Ian Goldberg's 1995 article where he stated that buffer 
 overflows were pretty new in 1988.   This is not true.   Buffer 
 overflows were used to compromise security on systems in the 1960s 
 and 70s.   An early paper explained how Robert Morris's father  broke 
 into an early version of Unix by overflowing the password buffer in 
 the login program many years before 1988 (I'm sure the younger Robert 
 was familiar with that paper, too).   Many earlier papers also 
 described buffer overflows.

Actually, I cited the October 2001 issue of TruSecure's Information
Security magazine, specifically the article titled "Chief Cypherpunk,"
whose interviewer led Mr. Ian Goldberg into citing his paper.  I'm sorry I
did not make this clear in my paper.  Yet I do not dispute that by 1988,
and well before it, buffer overflows were a fairly well-known area of
concern in host-level system security and an avenue of privilege
elevation.

 
 Unfortunately, we have a lot of people who are working in security 
 with various levels of claimed expertise who have little or no 
 knowledge of the history  or underlying principles of what they are 
 doing.  (And no, that is not intended to make any suggestion about 
 Mr. Goldberg -- I do not know him, nor do I know his background. 
 I'm reacting to the quote and my knowledge of other experts in the 
 field.)
 

Certainly, I also wish to convey that my intention was not to demean your
expertise, history, or knowledge in the security realm.  If offense was
taken, I do apologize.

 2) The comments I wrote in 1988 applied to the Internet arena of 
 1988.  There were no significant viruses, worms, root kits, or the
 like.   There was no WWW.   There was no history of widespread 
 computer abuse.  The majority of systems were running a Unix variant. 
 Pretty much every system administrator of the time had a college 
 degree, usually in computing or a related science.
 
 That was the context of my comments at that time that it was not 
 appropriate to blame the administrators for what happened.   I still 
 believe that, in that context.  I don't believe it was appropriate to 
 blame the OS authors, either, although there was some responsibility 
 that they bore for their sloppy coding.
 

Yet the fact that overflows were fairly common knowledge, as were the
attack vectors the worm used to enter systems and spread from one
system/network to another, does not genuinely allow any of the admins of
the time to hide behind stupidity or cluelessness, especially in light of
their education and training.  Sadly, it appears it took an event of this
significance to raise eyebrows in the systems security realm.


 Now, if we fast-forward to today's computing arena.   There are about 
 65,000 viruses and worms (with over 95% of them for Microsoft 
 products).   There are literally hundreds of rootkits, DOS kits, and 
 break-in tools available on the net.   The WWW reaches hundreds of 
 millions of people.  We have a decade+ history of significant, public 
 break-ins.   The majority of systems in the world are running a very 
 buggy, bloated OS descended from a standalone PC monitor program. 
 Typical system administrators (and many security administrators) have 
 no training in computing, let alone security.
 
 If the Morris worm were to occur today -- and, as you noted, variants
 have been occurring in the guise of CodeRed, et al. -- I would place 
 a large amount of blame with the vendors for doing a shoddy job of 
 producing safer software, and a significant amount of blame on the 
 administrators of the affected sites for not taking better 
 precautions in a known dangerous environment.
 
 But in both cases, the primary blame goes to the people who produce 
 and employ malware.   There is no excuse for doing this, and they are 
 quite obviously the primary cause of the damage.


Agreed, as did Marcus Ranum on the firewall wizards list, where this paper
generated quite a bit of discussion, and a number of side threads
acknowledged there is plenty of blame and responsibility to go around.
The point of my paper, sadly, was to highlight the fact that for the most
part, security in the IT industry has never really progressed beyond the
point of raised eyebrows, thus the constant circular nature of attack
and reattack of the same weaknesses and vectors that existed at least as
far back as 1988.

It's interesting to note that today, again, TruSecure Corp.'s weekly
SECURITY WIRE DIGEST (VOL. 4, NO. 3, JANUARY 14, 2002) noted how
Microsoft is attempting to stumble forward in some regard on the poor
coding issues prevalent in today's top desktop applications and their OSes:

Re: The Morris worm to Nimda, how little we've learned or gained

2002-01-15 Thread Gene Spafford

Hi.   Someone recently passed along your essay (I don't subscribe to 
the firewalls list).  There were a couple of comments I wanted to 
make.

1) You quoted Ian Goldberg's 1995 article where he stated that buffer 
overflows were pretty new in 1988.   This is not true.   Buffer 
overflows were used to compromise security on systems in the 1960s 
and 70s.   An early paper explained how Robert Morris's father  broke 
into an early version of Unix by overflowing the password buffer in 
the login program many years before 1988 (I'm sure the younger Robert 
was familiar with that paper, too).   Many earlier papers also 
described buffer overflows.

Unfortunately, we have a lot of people who are working in security 
with various levels of claimed expertise who have little or no 
knowledge of the history  or underlying principles of what they are 
doing.  (And no, that is not intended to make any suggestion about 
Mr. Goldberg -- I do not know him, nor do I know his background. 
I'm reacting to the quote and my knowledge of other experts in the 
field.)

2) The comments I wrote in 1988 applied to the Internet arena of 
1988.  There were no significant viruses, worms, root kits, or the
like.   There was no WWW.   There was no history of widespread 
computer abuse.  The majority of systems were running a Unix variant. 
Pretty much every system administrator of the time had a college 
degree, usually in computing or a related science.

That was the context of my comments at that time that it was not 
appropriate to blame the administrators for what happened.   I still 
believe that, in that context.  I don't believe it was appropriate to 
blame the OS authors, either, although there was some responsibility 
that they bore for their sloppy coding.

Now, if we fast-forward to today's computing arena.   There are about 
65,000 viruses and worms (with over 95% of them for Microsoft 
products).   There are literally hundreds of rootkits, DOS kits, and 
break-in tools available on the net.   The WWW reaches hundreds of 
millions of people.  We have a decade+ history of significant, public 
break-ins.   The majority of systems in the world are running a very 
buggy, bloated OS descended from a standalone PC monitor program. 
Typical system administrators (and many security administrators) have 
no training in computing, let alone security.

If the Morris worm were to occur today -- and, as you noted, variants
have been occurring in the guise of CodeRed, et al. -- I would place 
a large amount of blame with the vendors for doing a shoddy job of 
producing safer software, and a significant amount of blame on the 
administrators of the affected sites for not taking better 
precautions in a known dangerous environment.

But in both cases, the primary blame goes to the people who produce 
and employ malware.   There is no excuse for doing this, and they are 
quite obviously the primary cause of the damage.

However, I agree with you that we need to re-evaluate the culpability 
of the software authors, the vendors, and the administrators.I 
have been making exactly this point in presentations and classes for 
at least the last half-dozen years.   It hasn't been well-received in 
too many venues until very recently.

3) Your example of the arson victim isn't quite right.   In most 
cases, an arson victim is not criminally liable unless she did 
something stupid and criminal to deserve it (e.g., she chained some 
fire escapes shut).   Instead, the victim may not get full payment 
from an insurance policy, and that is the penalty for not keeping 
current with the necessary protections.   This is similar to what 
happens when your car is stolen -- you are not charged in criminal 
court if you left the key in the ignition, but you may not get the 
full payment for the car from your insurance company, or your future 
premiums could be doubled.

Imagine Joe Clueless is running a Windows box with no patches and no 
firewall, has no training in security, and still hooks his system up 
to the network.   If his system is hacked (and it will be, perhaps in 
a matter of hours), he is still a victim.   Whoever breaks into his 
system, or whoever authored the virus that corrupts his disk, that is 
the person who committed the crime and should be prosecuted.

But is Joe blameless?   Under  law in most western nations, he is 
probably not criminally liable.   He may be stupid, but that isn't a 
crime.   He may be naive, but that isn't a crime either.   If he has 
insurance, he may not get a full (or any) payout.  Or if he has no
insurance, he pays another kind of penalty -- he loses his data.  So
he does pay a price.  And if Joe has a good lawyer who is
persistent and can convince a jury that the vendor was negligent, 
then maybe the vendor will pay, too.

A better scenario would be for hack insurance to begin to become a 
standard business practice.   Once the actuarial data comes in, the 
companies set a standard premium.   They 

Re: The Morris worm to Nimda, how little we've learned or gained (fwd)

2002-01-15 Thread Paul D. Robertson

On Tue, 15 Jan 2002, Ron DuFresne wrote:

[snip]

 Yet the fact that overflows were fairly common knowledge, as were the
 attack vectors the worm used to enter systems and spread from one
 system/network to another, does not genuinely allow any of the admins of
 the time to hide behind stupidity or cluelessness, especially in light of
 their education and training.  Sadly, it appears it took an event of this
 significance to raise eyebrows in the systems security realm.

While overflows are a genuinely solvable problem through tools or
education, I think it's only fair to point out that physical security
vectors have gone unchanged for far longer than computers have been
around, and the same old attacks still work there too.  Risk tolerance is
both a good and a bad thing.

  If the Morris worm were to occur today -- and, as you noted, variants
  have been occurring in the guise of CodeRed, et al. -- I would place 
  a large amount of blame with the vendors for doing a shoddy job of 
  producing safer software, and a significant amount of blame on the 
  administrators of the affected sites for not taking better 
  precautions in a known dangerous environment.

I'm pretty sure that none of the major worm outbreaks of the last few 
years have attacked a vector for which the vendor hadn't already produced 
a patch.  Certainly not NIMDA, Code Red, Adore, 1i0n, Ramen, Poisonbox or 
the sadmind worm.  I'm also pretty sure that the BIND-based ones had the
shortest time between patch release and widespread infection.

While there is certainly some culpability for having produced 
crappy software in the first place, more administrators need to be 
hammered for not keeping up to date, and more managers need to be hammered 
for choosing products that need to be updated so often.

I've always been curious about the rationale behind keeping MS Office as a 
product during the time that something like 85% of the malcode written 
targeted it.  Heuristic scanners have fixed that risk pretty well, but 
there was a time when it was a significant issue.

  But in both cases, the primary blame goes to the people who produce 
  and employ malware.   There is no excuse for doing this, and they are 
  quite obviously the primary cause of the damage.

I think the thing that most annoyed me in this space was when the Mayor of 
Sneek in the Netherlands offered a job to the author of Anna Kournikova,
a kit-generated virus which cost businesses worldwide quite a bit.

 Agreed, as did Marcus Ranum on the firewall wizards list, where this paper
 generated quite a bit of discussion, and a number of side threads
 acknowledged there is plenty of blame and responsibility to go around.
 The point of my paper, sadly, was to highlight the fact that for the most
 part, security in the IT industry has never really progressed beyond the
 point of raised eyebrows, thus the constant circular nature of attack
 and reattack of the same weaknesses and vectors that existed at least as
 far back as 1988.

I've been looking quite a bit recently at parallels to physical security.  
The Pentagon, in the spot hit by the airliner, was recently upgraded at a 
high monetary cost, with $600 windows and ballistic cloth embedded in the
walls.  It's taken this long to get to stronger walls that still don't
mitigate today's threat completely, but do an astonishingly large amount
to limit collateral damage.  Walls have pretty much always been a 
defensive technology, but I don't see many building administrators running 
to upgrade.  One of the major IT costs these days is upgrading, and 
often the cure is worse than the risk of the disease.  How we go about 
solving that problem, I'm not at all sure.

I guess the point I'd like to make is that we should all hardly be 
surprised by the lack of progress.  Historically, it's never been a strong 
point.  It mostly requires the very thing we all don't want: government
intervention to get things to a better point (building codes, auto safety,
etc. were all driven by regulation and the concern of a few, not the many).

 It's interesting to note that today, again, TruSecure Corp.'s weekly
 SECURITY WIRE DIGEST (VOL. 4, NO. 3, JANUARY 14, 2002) noted how
 Microsoft is attempting to stumble forward in some regard on the poor
 coding issues prevalent in today's top desktop applications and their OSes:

The way I understand it, they're pushing developer education as well as 
using tools to try to detect security flaws.  While I'm no MS fan, I'm not 
sure that this could be equated to stumbling, and I'm not sure that there's
much else they can do other than fixing the overflow problem in the 
compilers and going out of spec for the languages.

The fact that it's taken them this long to get to that point is a damn 
shame though.

  However, I agree with you that we need to re-evaluate the culpability 
  of the software authors, the vendors, and the administrators.