[EMAIL PROTECTED] wrote:

> > Security is everything you've ever said, plus a process.
>
> If it is secure, it doesn't need a process.  So why would security be
> a process again?  Because of the vendors making "mistakes" and fixing
> them later?
>
> Jimmy Scott

It is a "process" in the same way that "making toast" is a process.
The purchase of a "bread-crisping solution" that is UL-certified to not
set your house on fire is the contribution of the "engineering" and
"product development" stages.  In common usage, using this "solution"
to toast your morning snack will produce crispy bread and will not
produce a howling conflagration.  However, note that it is still very
much possible to ignite your domicile by soaking a rag in lighter fluid,
stuffing it into the bread-toasting slot, and jamming the switch closed
with a butter knife.  For a less extreme example, it _may_ be possible
to cause a fire by leaving a towel too near the toaster while it is
operating, something which is easy to do and all too common.

Having a morning snack and an un-burnt house at the same time, then, is
contingent upon two things - possessing a toaster of adequate quality,
and using it properly.  You don't get to have the whole package without
a) looking for a good toaster in the first place, and b) learning how
to use it.  Security operates similarly:  one boner mistake on anybody's
part - coder, engineer or administrator - and your "security" vaporizes
_instantly_.  Go read some of Bruce Schneier's screeds on the subject;
they're informative.
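
To make "one boner mistake" concrete, here is a minimal C sketch.  The
function and names are hypothetical, invented for illustration; the
point is that everything about the program can be fine except a single
line, and that single line gives the whole machine away:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical request handler. */
    static void handle_request(const char *input)
    {
        char name[64];

        /* The one mistake: strcpy() copies until it finds a NUL,
         * trusting the sender to stop within 63 bytes.  Anything
         * longer overruns name[] and tramples the stack, saved
         * return address included. */
        strcpy(name, input);

        printf("hello, %s\n", name);
    }

    int main(void)
    {
        handle_request("world");    /* crispy bread */
        /* handle_request(attacker_data);  the howling conflagration */
        return 0;
    }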

So yes, security most certainly _is_ partly a "process", various
opinions to the contrary notwithstanding.  It is identical to the
process of locking your doors and checking your windows before you
go to bed at night, or of making sure that you're not stuffing a paper
towel or a cardboard box top in your toaster in the morning before
you've had coffee.  You could call it "habitual prudence", I suppose.

Of course, computers being based on hard-core determinism and Boolean
logic, a higher standard is possible.  I note in passing that the
security of every operating system in common use (including OpenBSD) is
_unproven_ [1], with the possible exception of Coyotos.  Asserting
something that is unproven and which may actually be impossible to prove
("X is more secure than Y") is not a good idea.  In other words, don't
toss shit at the vendors unless you can _prove_, from a chain of
irrefutable deduction, that your proposed solution is "more secure" than
theirs.  (Something which is likely impossible here, given OpenBSD's
design and the language in which it is written; C does not lend itself
to formal proof.)  Hint:  the manpower, brainpower, and computing
power needed to accomplish this task _even if_ it is possible are
probably going to exceed anything you're willing to marshal to that
end.
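
For flavor, here is what a single link in such a "chain of irrefutable
deduction" looks like when a proof assistant checks it.  The snippet
is Lean, chosen purely for illustration, and the theorem is a toy, not
an OS property:

    -- Machine-checked fact: an index known to be below 8 can never
    -- overrun a 16-slot table.  A rigorous proof of a real kernel
    -- would mean establishing millions of links like this one, every
    -- one of which has to hold.
    example (i : Nat) (h : i < 8) : i < 16 :=
      Nat.lt_trans h (by decide)

Multiply that effort by every invariant in a kernel and the estimate
above starts to look optimistic.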

Theo is right about one thing, however:  Bugs and security flaws arise
from mistakes, every one of which is avoidable.  There are no "new"
classes of bugs or design flaws; essentially every one has been
generally known and understood for decades.  It is only sloppy
practices - dare I say it, "bad processes" - that permit these bugs
to creep into various codebases and multiply.  The cure for this
problem is "better processes".  The "easy" cure is for these processes
to entail continuous auditing (the OpenBSD solution).  The harder cure
is to work on establishing and maintaining a process that incorporates
rigorous proof as a necessary component.  We may not ever see that, but
hey - it's nice to dream, isn't it?
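
As a concrete instance of the "easy" cure:  OpenBSD's tree-wide audits
replaced unbounded string copies with the bounded strlcpy(3) interface
the project introduced for exactly that purpose.  A minimal
before-and-after sketch (the struct and function names here are
hypothetical):

    #include <string.h> /* strlcpy(3) is native on the BSDs; other
                         * systems get it from a compat library. */

    struct user {
        char name[32];
    };

    /* Before the audit: silently overruns u->name on long input. */
    void set_name_sloppy(struct user *u, const char *src)
    {
        strcpy(u->name, src);
    }

    /* After the audit: the copy is bounded, and truncation is
     * detectable, because strlcpy() returns the length of the
     * string it tried to create. */
    int set_name_audited(struct user *u, const char *src)
    {
        if (strlcpy(u->name, src, sizeof(u->name)) >= sizeof(u->name))
            return -1;  /* too long; refuse rather than truncate */
        return 0;
    }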

[1]  Rigorous proof, that is.  Anecdotal evidence does not establish
proof of anything whatsoever.

--
(c) 2005 Unscathed Haze via Central Plexus <[EMAIL PROTECTED]>
I am Chaos.  I am alive, and I tell you that you are Free.  -Eris
Big Brother is watching you.  Learn to become Invisible.
|-------- Your message must be this wide to ride the Internet. --------|
