On Sat, 5 Jan 2002, Klaus Brunnstein wrote:

> Ron's review of Y2001 experiences in historical perspective comes at a time
> usually devoted to reflection and - hopefully - learning from history. In
> the daily struggle to improve contemporary protection software such as
> firewalls, Ron's contribution is very helpful. On the other hand, there
> are several ingredients in any attack: weak systems, ill-advised usage,
> inadequate administration, and attackers with often illicit motives.

So it is also with the physical world.  Theft, fraud, murder, and the like 
all come down to bad people taking advantage of weakness.  Just as in the 
old West, though, one of the first things that needs to happen is that the 
community needs to WANT order and stop idolizing the bad guys.

> Ron analyses the experiences of maliciously successful attacks, but what
> Ron does NOT analyse is the fact that almost NO PROGRESS was made since
> 1988 in the area of network software concerning development of safe
> (=reliable, persistent, integrity-preserving etc) software. Such methods
> are well-known and applied in areas such as avionics and process control,
> where they are guided

I think that was the point of both Ron's message and also the "circle of 
blame" stuff on -wizards.

> by related safety requirements addressing architecture, implementation,
> distribution and maintenance of safe systems. In comparison, methods in
> areas requiring

I'd point you once again to the -wizards thread; the seatbelts and risk 
tolerance posting brings up a logical and well-laid-out rationale as to why 
we have, and will continue to have, these problems. 

> "security" (essentially confidentiality and secrecy, anonymity,
> non-linkability, non-repudiation, non-deniability etc) are methodically
> less developed, and known methods and models of different security
> aspects such as Bell-LaPadula, Clark-Wilson, Role-Based Access Control
> et al are hardly implemented in contemporary security products, including
> firewalls and all sorts of filtering software (esp. including AntiVirus
> and AntiMalware products), Intrusion Detection and Response Systems et
> al. Consequently, such software only protects against KNOWN DEFICIENCIES
> rather than GENERALLY PROTECTING the integrity, confidentiality,
> availability, persistency and other essential (safety AND security)
> requirements of contemporary application systems.
> 

That's simply because such mechanisms are too difficult to use in 
everyday life.  The delta between best practice and average practice is 
widening every day.  

The risk tolerance for people using Win98, for instance, is much more 
likely to stay at the "Click cancel to continue" password dialog level as 
long as the choice of security comes at a higher cost than the chance of 
intrusion via that mechanism.  Passwords and viruses are the top two 
IT-related costs- one of them is a protection mechanism and the other a 
malicious attack.  Given that "walk up and log on" attacks are probably 
further down the list (sorry, I don't have current costs for that handy) 
than the cost of providing protection, it's likely that more 
(administratively) costly protections which increase complexity aren't 
likely to be integrated into general-purpose computing platforms.  That's 
a shame, because MAC and RBAC do tend to work very well when combined.  
(Role-Based Access Control fails when you have an accessible role such as 
Administrator or root that trumps all controls and is a *necessary* 
privilege for common system services that aren't part of a TCB.)
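
The RBAC failure mode above can be sketched in a few lines. This is a
minimal illustration, not any real product's access-control code; all role
and action names are hypothetical:

```python
# Minimal sketch of the failure mode: a role-based permission check
# undermined by an all-powerful administrative role.

ROLE_PERMISSIONS = {
    "auditor":  {"read_logs"},
    "operator": {"read_logs", "restart_service"},
}

def is_allowed(role: str, action: str) -> bool:
    # The escape hatch below is exactly the problem: once a role like
    # Administrator exists and is *needed* for routine services outside
    # the TCB, every role check it touches becomes meaningless.
    if role == "Administrator":
        return True  # trumps all controls
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "restart_service"))        # False: RBAC works
print(is_allowed("Administrator", "delete_all_logs"))  # True: bypassed
```

For ordinary roles the check does its job; the moment the Administrator
role enters the picture, no permission table matters anymore.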


> One other issue which Ron tackles indirectly, in quoting Gene Spafford's
> historical position on user responsibility, is that users can hardly
> compensate for the insecurity and unsafety of contemporary systems and
> software. I compare the distribution of 1988's CHRISTMA.EXEC (a REXX
> script painting an Xmas tree on a user's screen while sending itself to
> other IBM mainframe users, asking "please start me. I will nicely
> surprise you") with 2001's W32/Maldal.C worm, which emails a
> "christma.exe" attachment with user support (by clicking on it).

Indeed, this is one reason why I think user training is mostly futile- the 
large number of people who'll execute *anything* that comes to them is 
astounding.  With one of the recent mass-mailers, someone was regaling us 
with the tale of a former co-worker whose current employer was overwhelmed 
when, after getting a virus warning, most of their users executed the 
attachment to see what it would do.

Cynically, I also wonder how much motivation there is for someone to not 
execute something or remember their password when the alternative is to 
not have to do any work for that day.  Whilst the majority of users aren't 
in that vein, it's worth considering what percentage might be.

> In principle, software and systems layers are so "thick" and hardly
> understandable even for experts that users can only apply the WYSIWYG
> paradigm: "what you see is what you get". Or in its correlated form:
> "You don't get what you don't see" (YDGWYDS :-)
> But it is impossible to observe from the surface what side-effects the
> execution of some code deeply buried in a driver or system procedure at
> the lowest level has or generates!
> I concur with Gene that users (who are indeed the lusers of ever-greater
> aggregating system complexity) are the real victims (also in 1988, users
> could NOT understand what the REXX code did, and even experts needed
> some time to analyse it), and there are too many cases showing that even
> the best available AV/AM software could NOT help the first victims
> (although W32/Maldal.C was readily detectable by generic methods!).

The first victims always lose.  That's why rate of spread is important.  
There are times when simply delaying every mail by 60 minutes is pretty 
effective protection.
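
The delay idea can be sketched as a hold queue: every message sits for a
fixed interval before delivery, so a signature published during the hold
window catches a fast mass-mailer before its first copies go out. This is
a rough illustration with made-up numbers and names, not a description of
any real mail gateway:

```python
# Sketch: hold each message for HOLD_MINUTES before delivery; a virus
# signature that arrives during the hold window stops held copies.

import heapq

HOLD_MINUTES = 60

def deliver(messages, signature_update_minute, is_malicious):
    """messages: list of (arrival_minute, msg_id). Returns delivered ids."""
    queue = [(arrival + HOLD_MINUTES, msg_id) for arrival, msg_id in messages]
    heapq.heapify(queue)
    delivered = []
    while queue:
        release_time, msg_id = heapq.heappop(queue)
        # At release time, is a signature available that flags this message?
        if is_malicious(msg_id) and release_time >= signature_update_minute:
            continue  # scanned on release and dropped
        delivered.append(msg_id)
    return delivered

# Worm copies arrive at minutes 0 and 10; the signature lands at minute 45,
# well before any held copy would be released at minutes 60 and 70.
msgs = [(0, "worm-1"), (10, "worm-2"), (20, "legit-1")]
out = deliver(msgs, signature_update_minute=45,
              is_malicious=lambda m: m.startswith("worm"))
print(out)  # only "legit-1" survives the hold window
```

The trade-off, of course, is that every legitimate mail is also an hour
late- which is exactly the kind of cost users tend to reject.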

> As long as those methodical deficiencies govern "software development",
> my working hypothesis for the next years is:
> 
>         "The situation will further aggravate, until a level of incidents
>          is reached that large customers rebel against the producers of
>          such software, including operating and application software as
>          well as security-supporting software!"

The companies won't rebel until there are alternatives.  Instead, they'll 
keep grasping for "good enough" patches.  It takes serious fiscal and 
engineering discipline to build something like the Hoover Dam- and, 
more importantly, massive amounts of time.  Most companies are public and 
therefore, at least on the surface, quarterly concerns.  

I'm not all that convinced that trying to stamp incidents out is more 
important than driving their cost down to a level where they don't 
matter- at least if we want to be successful in the current and near 
future.

> As before, "interesting times" are ahead, and the profession of
> "security experts" (whether self-assigned or properly educated and
> certified) will have many cases to analyse (and to benefit from). In
> this sense: a happy New Year 2002.

Crime has been with us far longer, and there are more police now than ever 
before.  I doubt that, even though we have the technologies to limit 
exposure, we'll find many people looking to cure it.  

Look at the market for IDS; look at the market for burglar alarms.  For 
the most part these aren't protective technologies, they're alerting 
technologies that don't work until after the breach has occurred.  We 
could certainly solve the windows-and-doors problems with physical 
buildings, but could we afford to do so everywhere?  Is there even an 
impetus to?

Heck, VPNs are sold as "security products" when their use pretty much 
forces a trust boundary extension that's almost always riskier than not 
having them.  The problem VPNs solve is confidentiality, yet they're sold 
(and happily purchased) as access controls.  People want to do things, not 
to protect things- and if they think they're protected (and the odds are 
that they won't fall prey), then they'll do them.    

Most companies would rather spend enough to (attempt to) make sure it 
doesn't happen to them than to spend enough to (attempt to) make sure it 
doesn't happen at all.  

A final thought:

Firewalls exist because people have given up on host security and on 
having host administrators and users do the Right Thing[tm].  Now that 
we've devolved to the point where applications and protocols pretty much 
all avoid the firewall, we're back where we started.  The firewall built a 
very good mechanism for "outside to inside" access control- because that 
was the threat that was feared.  Now we've got "outside to in through a 
necessary vector" and "inside to outside" problems, and we either need 
those to change, or we need to change the necessary vectors to ones we 
can control.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
[EMAIL PROTECTED]      which may have no basis whatsoever in fact."

_______________________________________________
Firewalls mailing list
[EMAIL PROTECTED]
http://lists.gnac.net/mailman/listinfo/firewalls
