Ron's review of the Y2001 experiences in historical perspective comes at a
time usually devoted to reflection and - hopefully - to learning from
history. In the daily struggle to improve contemporary protection software
such as firewalls, Ron's contribution is very helpful. On the other hand,
any attack has several ingredients: weak systems, ill-advised usage,
inadequate administration, and attackers with often illicit motives.

Ron analyses the experiences of maliciously successful attacks, but what
Ron does NOT analyse is the fact that almost NO PROGRESS has been made
since 1988 in the area of network software concerning the development of
safe (= reliable, persistent, integrity-preserving etc.) software. Such
methods are well known and applied in areas such as avionics and process
control, where they are guided by related safety requirements addressing
the architecture, implementation, distribution and maintenance of safe
systems. In comparison, methods in areas requiring "security" (essentially
confidentiality and secrecy, anonymity, non-linkability, non-repudiation,
non-deniability etc.) are methodically less developed, and known methods
and models of different security aspects such as Bell-LaPadula,
Clark-Wilson, Role-Based Access Control et al. are hardly implemented in
contemporary security products, including firewalls and all sorts of
filtering software (esp. AntiVirus and AntiMalware products), Intrusion
Detection and Response Systems et al. Consequently, such software only
protects against KNOWN DEFICIENCIES rather than GENERALLY PROTECTING the
integrity, confidentiality, availability, persistency and other essential
(safety AND security) requirements of contemporary application systems.

One other issue which Ron tackles indirectly, in quoting Gene Spafford's
historical position on user responsibility, is that users can hardly
compensate for the insecurity and unsafety of contemporary systems and
software. I compare the distribution of 1988's CHRISTMA.EXEC (a REXX
script painting an Xmas tree on a user's screen while sending itself to
other IBM mainframe users, asking "please start me. I will nicely surprise
you") with 2001's W32/Maldal.C worm, which emails a "christma.exe"
attachment that spreads with user support (by clicking on it).

In principle, software and system layers are so "thick" and hardly
understandable even for experts that users can only apply the WYSIWYG
paradigm: "what you see is what you get" - or, in its corollary form:
"you don't get what you don't see" (YDGWYDS :-). But it is impossible to
observe from the surface what side effects the execution of some code
deeply buried in a driver or system procedure at the lowest level has or
has generated! I concur with Gene that users (who are indeed the "lusers"
of ever-aggregating system complexity) are the real victims (in 1988,
too, users could NOT understand what the REXX code did, and even experts
needed some time to analyse it), and there are too many cases showing
that even the best available AV/AM software could NOT help the first
victims (although W32/Maldal.C was readily detectable by generic
methods!).

As long as those methodical deficiencies govern "software development",
my working hypothesis for the next years is:

        "The situation will further deteriorate, until a level of
         incidents is reached at which large customers rebel against the
         producers of such software, including operating and application
         software as well as security-supporting software!"

As before, "interesting times" are ahead, and the profession of "security
experts"
(whether self-assigned or properly educated and certified) will have many
cases
to analyse (and to benefit from). In this sense: a happy New Year 2002.

Klaus Brunnstein (January 5, 2002)

------------------------- referenced: FW 1.439 / message 4 --------------------------
Message: 4
Date: Thu, 3 Jan 2002 03:03:06 -0600 (CST)
From: Ron DuFresne <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: The Morris worm to Nimda, how little we've learned or gained (fwd)

                        The Morris worm to Nimda
                   how little we've learned or gained

                                      by:  Ron DuFresne
                                             (c) 2001

2001 was a tumultuous year.  Prior to the September 11 airline attacks on
three sites in the United States costing thousands of lives and causing
recessive economies across the globe to hit near depression levels, we
had an influx of security attacks that continues to affect thousands of
systems throughout the world.  First there were the various Code Red worm
variants, and then the Nimda worm, all working their nasty mechanisms in
quick succession, and at some corporate sites bringing their systems to
grinding halts, even clogging their bandwidth with relentless torrents of
abusive code seeking new life on uninfected systems elsewhere.

Not long after these events, we were reading the October issue of
Information Security magazine, put out by Trusecure Publications, which
offered a nice interview with Ian Goldberg, titled "Chief Cypherpunk".
The article was very informative and captivating, and there was one area
of the interview that really raised our eyebrows here.  It concerned
Ian's 1995 article on Internet security and commerce; the interviewer
asked Ian how much had changed in the six years since that article's
release, to which Ian asked back, "Have you been hit by the SirCam worm?"
The interviewer acknowledged how easy it was for this nasty bit of code
to enter enterprise systems and continue its dastardly assault across
systems.  Ian responded:

        Right.  Back in 1988 we had the Morris worm caused by a
        buffer overflow, which was a pretty new thing, a really
        cool way to penetrate systems.

The Trusecure interviewer interjected: "And buffer overflows are still a
problem."  This drew a further response from Ian:

        Right!  Thirteen years later, they still do buffer overflows.
        It's crazy. We've learned nothing.

There is not a security practitioner in the IT industry who could read
this without shrugging, shaking their head in acknowledgment, and
wondering when the code writers producing applications might well wake up
and take note.  It prompted us to review once again Eugene H. Spafford's
analysis of the Morris worm and its aftermath - a document recommended to
all security-related folks in the IT industry, and to administrators in
general, if for nothing more than an eye-opener.

It was surprising to note how little has really changed since 1988 and
the release of, and cleanup after, the Morris worm outbreak, as well as
the few changes that have come to pass.  For one, the Morris outbreak was
very short-lived.  Having been released on November 2nd of that year, it
spread very quickly and brought many systems to their knees, as is common
in worm outbreaks to this day; yet the cleanup process was rather short
compared to what we now face, and Mr. Spafford reported that the last
effects of the worm were wiped out by early December of 1988.  Not so
nowadays: the Internet is much bigger now than it was back then, in what
we here like to refer to as the Internet's teen years.  Now, in the early
adult years of the Internet, the effects of a worm with a payload as
devastating as the Morris worm's - like those mentioned in our title, and
the many others released in recent times - can persist for years.


The lack of skilled administrators and security personnel was cited in
Spafford's review, when it came to the topic of accountability.  This is a
common topic of the security related mailing lists and Usenet groups to
this day.  Mr. Spafford said specifically:

        The claim that the victims of the Worm were somehow responsible
        for the invasion of their machines is also curious. The
        individuals making this claim seem to be stating that
        there is some moral or legal obligation for computer users to
        track and install every conceivable security fix and mechanism
        available. This totally ignores the many sites that run turn-key
        systems without source code or administrators knowledgeable
        enough to modify their systems. Those sites may also be running
        specialized software or have restricted budgets that prevent them
        from installing new software versions. Many commercial and
        government sites operate their systems this way. To attempt to
        blame these individuals for the success of the Worm is
        equivalent to blaming an arson victim for the fire because
        she didn't build her house of fireproof metal.

Outlooks in this area might well be changing in the e-commerce arena that
the Internet is being exploited for by many companies and organizations
these days.  We have governments now working to protect individual
privacy, both here and in Europe.  And there have been great rumblings in
the security arena about Microsoft's poor certification programs and
their lack of security training.  Yet all too often - because of the lack
of skills in the administrative area, as well as the fact that most IT
departments are so scarcely staffed - responsibility and accountability
are avoided by those responsible for maintaining the systems that are
placed in exposed settings.  It is very reminiscent of the recent
rumblings about the security contractors at many of our country's
airports, which mirror society's issues with responsibility and
accountability over the last 4-5 decades.  This has worked, along with
the wish of ISPs and those connecting people across the globe to the
Internet to make a quick buck, to make it difficult, if at times nearly
impossible, to get a system that is attacking another removed from the
Net until it is at least fixed and no longer a threat to others.  This is
one of the reasons some of the most recent worms have succeeded:
administrators had not applied patches that would have prevented things
like the Code Red variants and Nimda from spreading, had 'due diligence'
been employed on those systems exploited by these nasty bits of code.  I
guess here I'd like to counter Spafford's argument against placing blame
on those responsible for maintaining systems attached to the Internet by
likening it more to the car industry's original reluctance to include
seatbelts in vehicles.  It took legislation to finally get compliance in
that realm, and it appears it is going to take much tougher legislation
to get the same results in systems management and network security; once
again, look at the airline and travel industry in light of the September
11, 2001 attacks and you should get a good take on the picture we are
painting here.  At the very least, jobs should be on the line when
companies are compromised by code that could long have been prevented by
the patching of applications and OSes, especially when those patches have
been widely available and publicly announced.  Even an arson victim faces
penalties if they have violated building codes that contributed to the
disaster forced upon them by a criminal.  And we have not even broached
the topic here of vendor responsibility...


references:

The Internet Worm Incident, Technical Report CSD-TR-933, Eugene H. Spafford

Information Security, October 2001, Trusecure Publications
----------------------- end of referenced message ----------------------------------


_______________________________________________
Firewalls mailing list
[EMAIL PROTECTED]
http://lists.gnac.net/mailman/listinfo/firewalls
