> > it only takes one other person, somewhere on the internet, who's just as
> > clever as you are to isolate your hole and develop a remedy.
>
> Sorry, but that's just not true. This isn't like a chess game,
> a mano-a-mano contest of raw brainpower. Security holes are hidden by
> the sheer size and complexity of a system.
this statement ties into a point i wanted to make earlier, but
couldn't segue cleanly.
the covert systems you've talked about are all technically possible,
certainly, but they add complexity. that's one of the fundamental
tradeoffs of software development: additional features mean
additional complexity.
linux, like any large piece of software, involves a fair amount of
complexity in its own right. the standard strategy for managing
complexity is to use a modular design: divide the system into
smaller, well-defined components, and only allow communication between
components through well-defined interfaces. both the components and
interfaces should be small enough that their operation can be clearly
understood by direct inspection of the code.
adding a hidden weakness makes components more complex. adding
features like self-replication and self-healing increases that
complexity. distributing the complexity among several modules
involves either hijacking the logic of the modular interface, or
violating the encapsulation that interface is intended to provide.
unnecessary complexity, violations of encapsulation, and
over-permissive interfaces are all well known targets for improving
software, so people tend to look for them.
i assume you're not talking about a brute-force inclusion, on the
order of #include "secret_hole.h", or something equally visible to
casual inspection. instead, as i understand your argument, you mean
increasing the complexity of one or several components, and weakening
the specificity of the interface definition, to open holes which are
not immediately visible to casual inspection.
in that context, you're going head to head with any other programmer
who wants to work with code from the same component, understands that
specific problem domain, and knows the modular constraints of the
local design. you're trying to promote a weakness of design that
he either won't notice, or won't question.
i'm not saying that such a thing is impossible.. logical proofs of
nonexistence are inherently flawed. my position is that it's harder
to hide things in an environment which encourages code inspection, and
harder still in one which encourages redundant, parallel inspection.
open source is at least as robust in its resistance to security
through obscurity as any other development method i'm aware of.
> > debugging can proceed in parallel, so for every expert at
> > reverse-engineering,
>
> You don't need to do reverse engineering when you have the source
> code.
poor use of terminology on my part. my reference was to the art of
extracting the logical model behind a piece of software, be it
compiled binary, undocumented source, obscurely worded spec, or
whatever. not everyone can read code and tell you what the
underlying design is.
> > there can be ten thousand amateurs doing brute-force combinatorial
> > recompilation of the source
>
> There are, almost certainly, more possible combinations of options
> in an OS than there are particles in the universe. No, that's an
> understatement.
there are lots of software issues which involve numbers bigger than
the particle count of the universe. the canonical method for solving
them is to divide and conquer. hierarchically partition the system
into manageable sets of components. at each level, identify the
combinations which are and are not vulnerable to the attack. break
the ones which are vulnerable down further, and ignore the ones which
aren't. the vast majority of potential combinations will be
eliminated quickly, reducing the workload exponentially. the average
solution time will be O(n log n) in the number of pieces.
> > if people want to find your hole badly enough.
>
> But they don't. They don't know that it exists, they don't know that
> I exist. Looking for potential security holes is dull, unrewarding,
> pedestrian work, no fun at all compared to hacking out a new disk
> optimization program.
i beg to differ.. it takes all types to make an internet. skim
through a few months' worth of archives from alt.2600 for an overview
of the level of effort wArEz d00dz will invest in finding holes, and
that security hackers will devote to closing them. security is
exciting, and lots of people work hard at it.
> > it doesn't matter how good you are, nobody wins a
> > butt-kicking contest against a mongolian horde.
>
> You see any mongols around here? All they knew how to do was kick
> butt, and they're gone, gone, gone.
incompleteness on my part.. the collective population of China /can/
beat a mongolian horde. technically, the mongols won, though.. at
least, their invasion was successful. there just weren't enough of
them to establish a significant lasting change in the local culture,
and they ended up being assimilated within a few generations.
someone else can work out the software analogies in that.
> > judging by the history of the internet, the effective lifespan of such
> > a hole would be somewhere between a day and a week once you've really
> > ticked someone off.
>
> That matters not the slightest bit. A single high-visibility, high-
> cost security breach doesn't just hurt the organization whose system
> is compromised; it hurts the OS and its developers forever. Even if it
> is fixed immediately and never happens again! Suppose someone broke
> into a Linux server at CitiBank and stole the ATM Pin file; Linux use
> would drop by 10% globally.
i'm afraid i have to challenge this one.
if CitiBank was materially injured because of a hole in a Linux
server, i personally wouldn't care one way or the other.. i'm not
storing secure data on my machines (and i bank with someone else ;-).
i'll accept a backlash of opinion among enterprise users in the same
space; i won't accept a general backlash among users who don't need
the security, though.
among the enterprise users who would be tempted to switch, changing
OS is a nontrivial issue. the services currently supported would
have to be duplicated, the data migrated, the users and administrative
staff retrained, SOP rewritten, etc. the OS currently in place,
whatever it is, is most likely the best solution to a complex set of
requirements identified by that user. even a major failure in one
area doesn't invalidate the rest of the selection criteria. i think
most enterprises would rather patch a newly discovered hole than make
the investment to rebuild their network from scratch.
on a complete, admittedly trivial, tangent: i'd have some very harsh
things to say about any network admin stupid enough to put sensitive
information on a server directly accessible through the internet,
regardless of OS. that's what firewalls, encryption, and network
security administrators are for.
> The company hires a crisis-management PR firm and declares that their
> next release will be 100% secure.
oh dear.. tact mode overdrive systems: engage..
on the whole, i don't think that strategy would have much effect in
enterprise space. as i mentioned before, choice of OS involves
multiple selection criteria, and in most cases, candidate selection is
based on the demonstrable ability to meet those criteria. projections
for future releases tend not to carry much weight unless they can be
clearly substantiated.
mike stone <[EMAIL PROTECTED]> 'net geek..
been there, done that, have network, will travel.