Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-05-04 Thread Tad Anhalt
Crispin Cowan wrote:
> Ok, someone has mentioned Ken Thompson's Turing Award speech in a "my
>  security is better than yours" flamewar^W discussion. This almost 
> warrants a security-geek version of Godwin's law :)

  That's fine.  I didn't bring it up; the original article did.  I still do
think anybody who touches code should at least read it and think about
what it means.

  If somebody wants to turn this into a flame war, carry on.  I'll move
along.  No need to invoke anything at this point.

> For a really interesting long-term extrapolation of this point of 
> view, I strongly recommend reading "A Deepness in the Sky" by Vernor
>  Vinge http://www.tor.com/sampleDeepness.html

  Good book; yes, I would recommend it as well.  "A Fire Upon the Deep"
is also a good read, and it further explores how dangerous it is to play
with "hardware" that you don't understand.

> It also leads to the classic security analysis technique of amassing 
> *all* the threats against your system, estimating the probability and
> severity of each threat, and putting most of your resources against
> the largest threats. IMHO if you do that, then you discover that
> "Trojans in the Linux code base" is a relatively minor threat

  Yes, that's where I would hope most professionals would end up.  I've
often wondered how many people end up with "Oh, well, I guess it
doesn't matter anyway..."

> compared to "crappy user passwords", "0-day buffer overflows", and 
> "lousy Perl/PHP CGIs on the web server". This Ken Thompson gedanken 
> experiment is fun for security theorists, but is of little practical
>  consequence to most users.

  The article wasn't about installing software for "most users,"  but
rather about what sort of software is appropriate for networked devices
on a battlefield.

  Yes, it read like an advertisement.  Yes, it specifically singled out
"linux" and "open source" where there was no need to.  Yes, it used a
ton of overblown and bad analogies...

  I was hoping for a discussion to emerge about building software for
similar environments.  If network devices deployed in a battle zone
aren't the right cup of tea, how about health monitors that will be
hooked to a hospital network?  Software that will run on devices
intended to be embedded inside the body, such as pacemakers or cochlear
implants.  Voting machines.  ABS systems, airbag controllers.  ATMs...

  The Risks Forum (http://catless.ncl.ac.uk/Risks) does a good job of
detailing the problems that can arise when developing these systems, but
it isn't as geared towards detailed discussions of reasonable solutions to
those problems...  I was hoping this list might be a better place for
discussions of that nature.

Tad Anhalt




Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-05-03 Thread Glenn and Mary Everhart
Tad Anhalt wrote:

Jeremy Epstein wrote:

I agree with much of what he says about the potential for 
infiltration of bad stuff into Linux, but he's comparing apples and 
oranges.  He's comparing a large, complex open source product to a 
small, simple closed source  product.  I claim that if you ignore the
open/closed part, the difference in trustworthiness comes from the 
difference between small and large.


  It's a lot deeper than that.  Here's the link to the original Ken
Thompson speech, for convenience's sake:
http://www.acm.org/classics/sep95
  This should be required reading (with a test following) for everyone
who ever touches code IMHO.  Simple, elegant, understandable and
devastating.
  It's the difference between proving that there aren't problems and
hoping that there aren't problems.  Linux is really a peripheral issue.
 The same arguments could be used against any operating system and/or
software system that hasn't been designed and implemented from day 1
with this sort of issue in mind.
  A more interesting quote is this one:

"A few people who understood Ken Thompson’s paper wrote to me saying
that every operating system has this problem, so my indictment of Linux
security on this point is meaningless. They ask: “couldn’t someone at
Green Hills Software install a binary virus in the baseline Green Hills
Software compiler distribution and corrupt Green Hills Software’s
INTEGRITY operating system?” No, the FAA DO-178B Level A certification
process systematically checks every byte of object code of our
INTEGRITY-178B operating system to ensure that if malicious code is
introduced at any point throughout the tool chain (compiler, assembler,
linker, run-time libraries, etc.) it will be detected and removed. Since
INTEGRITY has only a few thousand lines of privileged-mode code, not the
millions of lines that burden Linux, this means of preventing viruses is
feasible for INTEGRITY, but not for Linux."
  How did they bootstrap their system?  In other words, how did they
ensure that they could trust their entire tool chain in the first place?
 They hint that the whole system was written by a few trusted persons.
Did they write the whole tool chain as well?  The scheme above protects
against future attack, but not against something that was there before
they started.  I'm sure that they have an answer for that question;
it's a pretty obvious one to ask...  Maybe I missed it on my read-through?
  That's the whole point of the Thompson lecture.  The hole is really
deep.  How far can you afford to dig?  How do you decide what to trust?
  Green Hills Software obviously has a vested interest in convincing the
reader that it's worth paying them whatever it is that they're charging
for the extra depth...  In some situations, it may be...  That's a risk
management decision.
Tad Anhalt




I remember back when DoD was trying to get a processor and OS combination
certified as Class A1 per the Orange Book. I also happened to be working (on
a separate project) with Richard Platek at the time, when he discovered
that the MLS prover had some failures that would allow it to miss insecure
data flows, thus knocking the whole proof into a cocked hat. This was,
I think, for SCOMP. Anyway, he was asked not to disclose this but felt he
had to, and did.
Now, the exception conditions were manually checked for and there was a 
certification that all was well, but underneath it all, the premise that a 
Federal certification of security was a guarantee was destroyed by that 
incident. And now we are expected to believe a commercial vendor's assurance 
that something is checking OBJECT CODE against a Thompson-type Trojan?

Well, if the trojan is large enough and the rest of the code small, I could
believe it would be hard to hide. A real-life introduction of a subtlety that
might just involve a BGTR instead of a BGE branch somewhere, or a "=" instead
of "==" in C, could be much harder to find.
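
For example, here is a contrived C fragment (invented names, not from any
real code base) showing how that single dropped character can still read
like an ordinary error check while quietly escalating privilege:

/* Contrived sketch of the "=" vs "==" subtlety.  The intended test was
 * (current_uid == 0); the trojaned version assigns instead, so the branch
 * is never taken, yet the caller quietly ends up with uid 0.
 */
#include <stdio.h>

static int current_uid = 500;                /* pretend unprivileged user */

static int check_request(int flags)
{
    if (flags == 0x7f && (current_uid = 0))  /* should be "==" */
        return -1;                /* never reached: (x = 0) is false */
    return 0;
}

int main(void)
{
    check_request(0x7f);
    printf("uid after call: %d\n", current_uid);   /* prints 0 */
    return 0;
}
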
At any rate, a Federal cert taken alone lacks credibility.

Show us high-quality decompilations of large systems first. Then maybe such
a thing would be easier to believe.




Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-05-03 Thread Crispin Cowan
Tad Anhalt wrote:

Jeremy Epstein wrote:
 

I agree with much of what he says about the potential for 
infiltration of bad stuff into Linux, but he's comparing apples and 
oranges.  He's comparing a large, complex open source product to a 
small, simple closed source  product.  I claim that if you ignore the
open/closed part, the difference in trustworthiness comes from the 
difference between small and large.
   

 It's a lot deeper than that.  Here's the link to the original Ken
Thompson speech for convenience sake:
	http://www.acm.org/classics/sep95
 

Ok, someone has mentioned Ken Thompson's Turing Award speech in a "my 
security is better than yours" flamewar^W discussion. This almost 
warrants a security-geek version of Godwin's law :)

But taking the remark seriously, it says that you must not trust 
anything that you don't have source code for. The point of Thompson's 
paper is that this includes the compiler; having the source code for the 
applications and the OS is not enough, and even having the source for 
the compiler is not enough unless you bootstrap it yourself.
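
In schematic form, the trick looks something like the following toy C
sketch (only the shape of the idea, with invented strings; not Thompson's
actual code):

/* Toy sketch of the "trusting trust" compiler Trojan.  The real attack
 * lives only in the compiler binary; this shows the two special cases
 * Thompson described: back-door the login program, and re-insert the
 * recognizer whenever the compiler compiles itself, so the Trojan
 * survives a rebuild from perfectly clean compiler source.
 */
#include <stdio.h>
#include <string.h>

static void emit(const char *code)
{
    printf("emit: %s\n", code);
}

static void compile(const char *source)
{
    if (strstr(source, "int login(")) {
        emit(source);
        emit("/* back door: also accept a hard-wired password */");
    } else if (strstr(source, "void compile(")) {
        emit(source);
        emit("/* re-insert this recognizer into the new compiler */");
    } else {
        emit(source);            /* everything else compiles honestly */
    }
}

int main(void)
{
    compile("int login(const char *user, const char *pw) { /* ... */ }");
    compile("void compile(const char *source) { /* ... */ }");
    compile("int main(void) { return 0; }");
    return 0;
}

The second branch is the whole point: once such a binary is in circulation,
recompiling perfectly clean compiler source with it regenerates the Trojan,
which is why inspecting source code alone proves nothing.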

Extrapolating from Thompson's point, the same can be said for silicon: 
how do we know that CPUs, chipsets, drive controllers, etc. don't have 
Trojans in them? Just how hard would it be to insert a funny hook in an 
IDE drive that did something "interesting" when the right block sequence 
comes by?

For a really interesting long-term extrapolation of this point of view, 
I strongly recommend reading "A Deepness in the Sky" by Vernor Vinge 
http://www.tor.com/sampleDeepness.html

While it is a science fiction novel, Vinge is also a professor of 
computer science (at San Diego State University) and a noted visionary on 
the future of computing, having won multiple Hugo awards. He also wrote 
"True Names" (1981), one of the earliest cyberpunk stories.

The horrible lesson from all this is that you cannot trust anything you 
do not control. And since you cannot build everything yourself, you 
cannot really trust anything. And thus you end up taking calculated 
guesses as to what you trust without verification. Reputation becomes a 
critical factor.

It also leads to the classic security analysis technique of amassing 
*all* the threats against your system, estimating the probability and 
severity of each threat, and putting most of your resources against the 
largest threats. IMHO if you do that, then you discover that "Trojans in 
the Linux code base" is a relatively minor threat compared to "crappy 
user passwords", "0-day buffer overflows", and "lousy Perl/PHP CGIs on 
the web server". This Ken Thompson gedanken experiment is fun for 
security theorists, but is of little practical consequence to most users.
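
In back-of-the-envelope form (with probabilities and costs invented purely 
for illustration), the ranking exercise is nothing more than sorting by 
expected loss:

/* Threat ranking sketch: expected loss = probability * severity, then
 * spend effort from the top of the sorted list down.  All numbers are
 * made up for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

struct threat {
    const char *name;
    double probability;   /* chance of occurring this year, 0..1 */
    double severity;      /* cost if it happens, arbitrary units */
};

static int by_expected_loss(const void *a, const void *b)
{
    const struct threat *ta = a, *tb = b;
    double la = ta->probability * ta->severity;
    double lb = tb->probability * tb->severity;
    return (la < lb) - (la > lb);          /* sort descending */
}

int main(void)
{
    struct threat threats[] = {
        { "crappy user passwords",         0.90, 30.0 },
        { "0-day buffer overflows",        0.50, 60.0 },
        { "lousy Perl/PHP CGIs",           0.60, 40.0 },
        { "Trojan in the Linux code base", 0.01, 90.0 },
    };
    size_t i, n = sizeof threats / sizeof threats[0];

    qsort(threats, n, sizeof threats[0], by_expected_loss);
    for (i = 0; i < n; i++)
        printf("%-32s expected loss %5.1f\n", threats[i].name,
               threats[i].probability * threats[i].severity);
    return 0;
}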

Crispin

--
Crispin Cowan, Ph.D.  http://immunix.com/~crispin/
CTO, Immunix  http://immunix.com
Immunix 7.3   http://www.immunix.com/shop/



Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-04-30 Thread ljknews
At 7:31 PM -0500 4/29/04, Tad Anhalt wrote:



>  How did they bootstrap their system?  In other words, how did they
>ensure that they could trust their entire tool chain in the first place?
> They hint that the whole system was written by a few trusted persons.

Begging the question "trusted by whom?".  Some organizations require
"trusted by the agency issuing security clearances" for certain
(primarily non-tool) software.

>Did they write the whole tool chain as well?  The scheme above protects
>against future attack, but not against something that was there before
>they started.  I'm sure that they have an answer for that question,
>it's a pretty obvious one to ask...  Maybe I missed it on my read-through?
>
>  That's the whole point of the Thompson lecture.  The hole is really
>deep.  How far can you afford to dig?  How do you decide what to trust?

Ideally, if you find you cannot afford to dig far enough to satisfy your
need, a revision of your business plan is required.

>  Green Hills Software obviously has a vested interest in convincing the
>reader that it's worth paying them whatever it is that they're charging
>for the extra depth...  In some situations, it may be...  That's a risk
>management decision.

And one solution acceptable in many conditions is determining whether
the vendor has deep enough pockets that a lawsuit after the fact would
mean something.  I don't know much about finance, but I know that suing
Green Hills Software has more potential than suing the person from whom
you got a copy of Linux.

Not all checks and balances are embedded in the software itself.




Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-04-30 Thread James Walden
Jeremy Epstein wrote:
I agree with much of what he says about the potential for infiltration of
bad stuff into Linux, but he's comparing apples and oranges.  He's comparing
a large, complex open source product to a small, simple closed source
product.  I claim that if you ignore the open/closed part, the difference in
trustworthiness comes from the difference between small and large.  That is,
if security is my concern, I'd choose a small open source product over a
large closed source, or a small closed source over a large open source... in
either case, there's some hope that there aren't bad things in there.

He makes three claims for greater security of his embedded OS:
	(1) A carefully controlled process for modifying source code.
	(2) Small size in terms of lines of code.
	(3) Auditing of the object code.
Certainly, a small, well-audited system is more likely to be secure than 
a large, poorly audited one.  Also, as there has been one discovered 
failed attempt to insert a backdoor into the Linux kernel, I agree that 
the potential for further such attacks exists.

However, his claim that Linux can never be made secure because it's too 
large to audit every time it changes is overstated.  He's ignoring the 
fact that few people (and even fewer in defence) will or should upgrade 
every time the kernel changes.  Widely used Linux distributions rarely 
include the latest kernel, even if your organization is using the latest 
distribution.  He's also blurring the difference between desktop and 
embedded Linux systems.  Yes, his embedded OS is small, but an embedded 
Linux system is going to be much smaller than the desktop distributions.

While kernel 2.6.5 may contain 5.46 million lines of code (counting 
blank lines and comments), much of that code is unlikely to be present 
in an embedded system.  After all, 2.72 million lines (49.8%) are 
drivers, 414,243 lines (7.6%) are sound-related, and another 514,262 
lines (9.4%) are filesystem-related; strip out those categories alone 
and roughly 1.8 million lines remain.  You're going to build your 
embedded system with only the hardware drivers and filesystems that you 
need, not every possible device and obscure filesystem available.  The 
same is true for userspace setuid programs, which I won't count since 
I'm not sure which ones would be necessary for the types of systems 
under discussion.

In summary, the code needs to be audited both less often and over fewer 
lines of source code (and bytes of object code) than he claims.  While 
auditing Linux is a more difficult task than auditing a smaller embedded 
OS, his claims are overblown because he ignores the fact that you only 
need to audit the parts of the kernel (and OS) that you actually install 
and use, and only when you install a new version.

--
James Walden, Ph.D.
Visiting Assistant Professor of EECS
The University of Toledo @ LCCC
http://www.eecs.utoledo.edu/~jwalden/



Re: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-04-30 Thread Tad Anhalt
Jeremy Epstein wrote:
> I agree with much of what he says about the potential for 
> infiltration of bad stuff into Linux, but he's comparing apples and 
> oranges.  He's comparing a large, complex open source product to a 
> small, simple closed source  product.  I claim that if you ignore the
>  open/closed part, the difference in trustworthiness comes from the 
> difference between small and large.

  It's a lot deeper than that.  Here's the link to the original Ken
Thompson speech, for convenience's sake:
http://www.acm.org/classics/sep95

  This should be required reading (with a test following) for everyone
who ever touches code IMHO.  Simple, elegant, understandable and
devastating.

  It's the difference between proving that there aren't problems and
hoping that there aren't problems.  Linux is really a peripheral issue.
 The same arguments could be used against any operating system and/or
software system that hasn't been designed and implemented from day 1
with this sort of issue in mind.

  A more interesting quote is this one:

"A few people who understood Ken Thompson’s paper wrote to me saying
that every operating system has this problem, so my indictment of Linux
security on this point is meaningless. They ask: “couldn’t someone at
Green Hills Software install a binary virus in the baseline Green Hills
Software compiler distribution and corrupt Green Hills Software’s
INTEGRITY operating system?” No, the FAA DO-178B Level A certification
process systematically checks every byte of object code of our
INTEGRITY-178B operating system to ensure that if malicious code is
introduced at any point throughout the tool chain (compiler, assembler,
linker, run-time libraries, etc.) it will be detected and removed. Since
INTEGRITY has only a few thousand lines of privileged-mode code, not the
millions of lines that burden Linux, this means of preventing viruses is
feasible for INTEGRITY, but not for Linux."

  How did they bootstrap their system?  In other words, how did they
ensure that they could trust their entire tool chain in the first place?
 They hint that the whole system was written by a few trusted persons.
Did they write the whole tool chain as well?  The scheme above protects
against future attack, but not against something that was there before
they started.  I'm sure that they have an answer for that question;
it's a pretty obvious one to ask...  Maybe I missed it on my read-through?

  That's the whole point of the Thompson lecture.  The hole is really
deep.  How far can you afford to dig?  How do you decide what to trust?

  Green Hills Software obviously has a vested interest in convincing the
reader that it's worth paying them whatever it is that they're charging
for the extra depth...  In some situations, it may be...  That's a risk
management decision.

Tad Anhalt





RE: [SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-04-29 Thread Jeremy Epstein
I agree with much of what he says about the potential for infiltration of
bad stuff into Linux, but he's comparing apples and oranges.  He's comparing
a large, complex open source product to a small, simple closed source
product.  I claim that if you ignore the open/closed part, the difference in
trustworthiness comes from the difference between small and large.  That is,
if security is my concern, I'd choose a small open source product over a
large closed source, or a small closed source over a large open source... in
either case, there's some hope that there aren't bad things in there.

Comparing Linux to his proprietary system is just setting up a strawman.
Of course, the fact that he's selling something that conveniently replaces
the strawman he knocks down is simply a coincidence...

--Jeremy

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED]
> Behalf Of [EMAIL PROTECTED]
> Sent: Thursday, April 29, 2004 2:32 PM
> To: Kenneth R. van Wyk
> Cc: [EMAIL PROTECTED]
> Subject: [SC-L] Re: White paper: "Many Eyes" - No Assurance 
> Against Many
> Spies
> 
> 
> Kenneth R. van Wyk wrote:
> 
> >FYI, there's a white paper out by Dan O'Dowd of Green Hills Software (see
> >http://www.ghs.com/linux/manyeyes.html) that says "It is trivial to
> >infiltrate the loose association of Linux organizations which have
> >developers all over the world, especially when these organizations don't
> >even try to prevent infiltration, they accept code from anyone."
> 
> And he's selling us the solution, how convenient. :\  Hmm.
> 
> Leaving aside the couple of obvious problems with this essay's
> arguments, I'll note that some of the author's points are valid.  It
> puzzles me that many otherwise security-conscious people have no qualms
> downloading and installing whatever they fancy with little thought to
> the source or the author's motives.  It is indeed a pretty loose network
> which supports much of what we know as GNU/Linux.  That is less true of
> FreeBSD and even less of OpenBSD.
> 
> -d
> 
> --
> David Talkington
> [EMAIL PROTECTED]




[SC-L] Re: White paper: "Many Eyes" - No Assurance Against Many Spies

2004-04-29 Thread dtalk-ml

Kenneth R. van Wyk wrote:

>FYI, there's a white paper out by Dan O'Dowd of Green Hills Software (see 
>http://www.ghs.com/linux/manyeyes.html) that says "It is trivial to infiltrate the 
>loose association of Linux organizations which have developers all over the 
>world, especially when these organizations don't even try to prevent 
>infiltration, they accept code from anyone."

And he's selling us the solution, how convenient. :\  Hmm.

Leaving aside the couple of obvious problems with this essay's
arguments, I'll note that some of the author's points are valid.  It
puzzles me that many otherwise security-conscious people have no qualms
downloading and installing whatever they fancy with little thought to
the source or the author's motives.  It is indeed a pretty loose network
which supports much of what we know as GNU/Linux.  That is less true of
FreeBSD and even less of OpenBSD.

-d

--
David Talkington
[EMAIL PROTECTED]