From: "Paul D. Robertson" <[EMAIL PROTECTED]>
To: Ron DuFresne <[EMAIL PROTECTED]>
Cc: Gene Spafford <[EMAIL PROTECTED]>,
        <[EMAIL PROTECTED]>
Subject: Re: The Morris worm to Nimda, how little we've learned or gained (fwd)

(resending due to firewalls list breakage)

On Tue, 15 Jan 2002, Ron DuFresne wrote:

[snip]

> Yet the fact that overflows were fairly common knowledge, as were the
> attack vectors the worm used to enter systems and spread from one
> system/network to another, does not genuinely allow any of the admins of
> the time to hide behind a guise of stupidity or cluelessness, especially
> in light of their education and learning.  Sadly, and this was a shame,
> it appears it took an event of this significance to raise eyebrows in the
> systems security realm.

While overflows are a genuinely solvable problem through tools or
education, I think it's only fair to point out that physical security
vectors haven't changed in far longer than computers have been around,
and the same old attacks still work there too.  Risk tolerance is
both a good and a bad thing.

> > If the Morris worm were to occur today -- and, as you noted variants
> > have been occurring in the guise of CodeRed, et al. -- I would place
> > a large amount of blame with the vendors for doing a shoddy job of
> > producing safer software, and a significant amount of blame on the
> > administrators of the affected sites for not taking better
> > precautions in a known dangerous environment.

I'm pretty sure that none of the major worm outbreaks of the last few
years have attacked a vector for which the vendor hadn't already produced
a patch.  Certainly not NIMDA, Code Red, Adore, 1i0n, Ramen, Poisonbox or
the sadmind worm.  I'm also pretty sure that the BIND-based ones had the
shortest time between a patch and widespread infection.

While there is certainly some culpability for having produced
crappy software in the first place, more administrators need to be
hammered for not keeping up to date, and more managers need to be hammered
for choosing products that need to be kept up to date more often.

I've always been curious about the rationale behind keeping MS Office as a
product during the time that something like 85% of the malcode written
targeted it.  Heuristic scanners have "fixed" that risk pretty well, but
there was a time there where it was a significant issue.

> > But in both cases, the primary blame goes to the people who produce
> > and employ malware.   There is no excuse for doing this, and they are
> > quite obviously the primary cause of the damage.

I think the thing that most annoyed me in this space was when the Mayor of
Sneek in the Netherlands offered a job to the "author" of Anna Kournikova,
a kit-generated virus which cost businesses worldwide quite a bit.

> Agreed.  Marcus Ranum, on the firewall-wizards list, where this paper
> generated quite a bit of discussion and a number of side threads,
> acknowledged there is plenty of blame and responsibility to go around.
> The point of my paper, sadly, was to highlight the fact that for the
> large part, security in the IT industry has never really progressed
> beyond the point of "raised eyebrows", thus the constant circular nature
> of attack and reattack of the same weaknesses and vectors that existed
> at least as far back as 1988.

I've been looking quite a bit recently at parallels to physical security.
The Pentagon, in the spot hit by the airliner, was recently upgraded at a
high monetary cost- with $600 windows and ballistic cloth embedded in the
walls.  It's taken this long to get to stronger walls that still don't
mitigate today's threat completely, but do an astonishingly large amount
to limit collateral damage.  Walls have pretty much always been a
defensive technology, but I don't see many building administrators running
to upgrade.  One of the major IT costs these days is upgrading, and
often the cure is worse than the risk of the disease.  How we go about
solving that problem, I'm not at all sure.

I guess the point I'd like to make is that we should all hardly be
surprised by the lack of progress.  Historically, it's never been a strong
point.  It mostly requires the very thing we all don't want- government
intervention- to get things to a better point (building codes, auto safety,
etc. were all driven by regulation and the concern of a few, not the many).

> It's interesting to note that today, again, TruSecure Corp's weekly
> SECURITY WIRE DIGEST (VOL. 4, NO. 3, JANUARY 14, 2002) noted how
> Microsoft is attempting to stumble forward in some regard on the poor
> coding issues prevalent in today's top desktop applications and OSes:

The way I understand it, they're pushing developer education as well as
using tools to try to detect security flaws.  While I'm no MS fan, I'm not
sure that this could be equated to stumbling, and I'm not sure there's
much else they can do other than fixing the overflow problem in the
compilers and going out of spec for the languages.

The fact that it's taken them this long to get to that point is a damn
shame though.

> > However, I agree with you that we need to re-evaluate the culpability
> > of the software authors, the vendors, and the administrators.    I
> > have been making exactly this point in presentations and classes for
> > at least the last half-dozen years.   It hasn't been well-received in
> > too many venues until very recently.

I'm not sure at this juncture if assigning culpability for the problem
isn't a lost cause, since there are too many hands in that pie, and the
ratio of culpability for the problem changes depending on circumstance.

I think it's significantly easier to assign culpability for not fixing the
problem.

Culpability for the actual attacks is easy to assign, but doesn't mitigate
the damage the way fixing the problems would.

It takes about 15 minutes to remove the external attack vectors most
commonly exploited from a Linux box.  I'm told that it takes about an hour
and a half to do the same for Win2k.  Updating, patching and removing
things isn't that difficult and is easily left at the feet of the
administrator- so that seems to be the most likely place for culpability.
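
The quick cleanup described above can be sketched as a small audit: flag
listening ports tied to the services worms have historically targeted.  The
port-to-service map below is illustrative only, and a real audit would
start from the box's actual `netstat -lnt` output rather than a hand-fed
port list:

```python
# Hedged sketch: flag listening TCP ports tied to services that worms
# have historically targeted.  The port->service map is illustrative,
# not exhaustive; a real audit would start from `netstat -lnt` output.
RISKY_PORTS = {
    21: "ftp", 23: "telnet", 79: "finger", 111: "portmapper (RPC)",
    512: "rexec", 513: "rlogin", 514: "rsh", 515: "lpd",
}

def audit(listening_ports):
    """Return the risky services found among the given listening ports."""
    return {p: RISKY_PORTS[p] for p in listening_ports if p in RISKY_PORTS}

# Ports as gathered from the box being hardened:
print(audit([22, 80, 111, 515]))  # {111: 'portmapper (RPC)', 515: 'lpd'}
```

Anything the audit flags is a candidate for disabling or removal, which is
most of what that 15 minutes on a Linux box consists of.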

Now, the major counter-argument is that you have to know you need to do
the work.  That's where I think professionalism comes in, and maybe even
some evaluation.  I'm not all that sure why my Sharp Zaurus handheld has
portmapper running on it by default, but turning it off didn't seem to
hurt anything.  It's going to be a consumer device though- and with the
availability of 802.11b flash cards, it's not going to be all that pretty
to see the fallout of an RPC problem.

> > 3) Your example of the arson victim isn't quite right.   In most
> > cases, an arson victim is not criminally liable unless she did
> > something stupid and criminal to deserve it (e.g., she chained some
> > fire escapes shut).   Instead, the victim may not get full payment

There was recently a case locally where a landlord was charged
criminally for not having enough smoke detectors and for not having
batteries in the one that was installed (though it wasn't an arson case).
Not changing batteries in smoke detectors isn't generally seen as a
criminal act- and while I'm not arguing for criminalizing admins, I think
it's interesting to look at from a blame perspective, since the landlord
didn't cause the fire (I think it was faulty equipment, like a heater).

> > A better scenario would be for "hack" insurance to begin to become a
> > standard business practice.   Once the actuarial data comes in, the
> > companies set a standard premium.   They may give a discount of 30%
> > if there is a firewall, a 15% discount if the system is based on
> > FreeBSD+Apache, and a 75% discount if the security administrator has
> > a CS degree from Purdue. :-)    Meanwhile, the same company may set a
> > 25% penalty (extra premium) if the system is Windows-based, a 200%
> > penalty if it is running IIS, and there is a clause that there is no
> > payout unless there is evidence that all patches were present and
> > timely.     Under this kind of scenario, market pressures would tend
> > to lead to better practices by the vendors *and* the users.   That
> > would be a better solution than the government regulation you
> > suggest, although I am not hopeful it will happen any time soon.
> >

I'm not so convinced that it would lead to better practices- I think less
effort would be placed on paying for security that didn't bring down the
premium, and folks might even pick vendors who didn't produce a lot of
patches so that they'd meet the letter of the contract.  Also, the "we
have insurance" mindset might bring up the same issue which was raised on
the -wizards thread, where the risk compensation from having seatbelts,
airbags, etc. raised driving speeds and evened out the statistics anyway.

Getting past the main actuarial risks isn't all that difficult, and may
actually lower standards in some areas, since premiums account for having
very crappy vehicles on the road.  Lots of people would rather pay higher
monthly premiums for worse vehicles instead of more up front for a better
one.

Having insurance tends to remove some level of responsibility for damages,
since "the insurance will take care of it."
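
For what it's worth, the premium arithmetic in the quoted scenario is easy
to play with.  Here's a toy sketch; applying the quoted discounts and
penalties multiplicatively is my assumption, not the original's, and the
rates themselves were tongue-in-cheek to begin with:

```python
def premium(base, firewall=False, freebsd_apache=False, purdue_admin=False,
            windows=False, iis=False):
    """Hypothetical 'hack insurance' premium, applying the quoted
    discounts and penalties multiplicatively (an assumption)."""
    p = base
    if firewall:
        p *= 1 - 0.30   # 30% discount for a firewall
    if freebsd_apache:
        p *= 1 - 0.15   # 15% discount for FreeBSD+Apache
    if purdue_admin:
        p *= 1 - 0.75   # 75% discount for a Purdue CS degree
    if windows:
        p *= 1 + 0.25   # 25% penalty for a Windows base
    if iis:
        p *= 1 + 2.00   # 200% penalty for running IIS
    return round(p, 2)

# A firewalled FreeBSD+Apache shop vs. a Windows/IIS shop:
print(premium(1000, firewall=True, freebsd_apache=True))  # 595.0
print(premium(1000, windows=True, iis=True))              # 3750.0
```

Even in toy form, the spread between the two configurations shows why the
incentives could cut either way: the premium gap rewards some choices, but
nothing in it rewards actually patching.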

> > <http://www.cstb.org/web/pub_cybersecurity>.  And here are some
> > comments I have made before Congress about the shortage of security
> > professionals:

Do we need more security professionals as badly as we need more IT
professionals who are aware of security issues?  Is there some
increased danger in that also- sort of like the garrisoned-troops problem,
or the militia-that-never-fights-a-war problem?

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
[EMAIL PROTECTED]      which may have no basis whatsoever in fact."


_______________________________________________
Firewalls mailing list
[EMAIL PROTECTED]
http://lists.gnac.net/mailman/listinfo/firewalls
