Re: [SC-L] Re: White paper: Many Eyes - No Assurance Against Many Spies

2004-05-04 Thread Tad Anhalt
Crispin Cowan wrote:
> Ok, someone has mentioned Ken Thompson's Turing Award speech in a "my
> security is better than yours" flamewar^W discussion. This almost
> warrants a security-geek version of Godwin's law :)

  That's fine.  I didn't bring it up, the original article did.  I still do
think anybody who touches code should at least read it and think about
what it means.

  If somebody wants to turn this into a flame war, carry on.  I'll move
along.  No need to invoke anything at this point.

> For a really interesting long-term extrapolation of this point of
> view, I strongly recommend reading "A Deepness in the Sky" by Vernor
> Vinge: http://www.tor.com/sampleDeepness.html

  Good book; yes, I would recommend it as well.  "A Fire Upon the Deep"
is also a good read, and it further explores how dangerous it is to
play with hardware that you don't understand.

> It also leads to the classic security analysis technique of amassing
> *all* the threats against your system, estimating the probability and
> severity of each threat, and putting most of your resources against
> the largest threats. IMHO if you do that, then you discover that
> Trojans in the Linux code base are a relatively minor threat

  Yes, that's where I would hope most professionals would end up.  I've
often wondered how many people instead end up with "Oh, well, I guess it
doesn't matter anyway..."

> compared to crappy user passwords, 0-day buffer overflows, and
> lousy Perl/PHP CGIs on the web server. This Ken Thompson gedanken
> experiment is fun for security theorists, but is of little practical
> consequence to most users.

  The article wasn't about installing software for most users,  but
rather about what sort of software is appropriate for networked devices
on a battlefield.

  Yes, it read like an advertisement.  Yes, it specifically singled out
Linux and open source where there was no need to.  Yes, it used a
ton of overblown and bad analogies...

  I was hoping for a discussion to emerge about building software for
similar environments.  If network devices deployed in a battle zone
aren't the right cup of tea, how about health monitors that will be
hooked to a hospital network?  Software that will run on devices
intended to be embedded inside the body, a la pacemakers or cochlear
implants.  Voting machines.  ABS systems, airbag controllers.  ATMs...

  The risks forum (http://catless.ncl.ac.uk/Risks) does a good job
detailing the problems that can arise when developing these systems, but
isn't as geared towards detailed discussions of reasonable solutions to
those problems...  I was hoping this list might be a better place for
discussions of that nature.

Tad Anhalt




Re: [SC-L] Re: White paper: Many Eyes - No Assurance Against Many Spies

2004-04-30 Thread James Walden
Jeremy Epstein wrote:
> I agree with much of what he says about the potential for infiltration of
> bad stuff into Linux, but he's comparing apples and oranges.  He's comparing
> a large, complex open source product to a small, simple closed source
> product.  I claim that if you ignore the open/closed part, the difference in
> trustworthiness comes from the difference between small and large.  That is,
> if security is my concern, I'd choose a small open source product over a
> large closed source, or a small closed source over a large open source... in
> either case, there's some hope that there aren't bad things in there.
He makes three claims for greater security of his embedded OS:
	(1) A carefully controlled process for modifying source code.
	(2) Small size in terms of lines of code.
	(3) Auditing of the object code.
Certainly, a small, well-audited system is more likely to be secure than 
a large, poorly audited one.  Also, as there has been one discovered 
failed attempt to insert a backdoor into the Linux kernel, I agree that 
the potential for further such attacks exists.

However, his claim that Linux can never be made secure because it's too 
large to audit every time it changes is overstated.  He's ignoring the 
fact that few people (and even fewer in defence) will or should upgrade 
every time the kernel changes.  Widely used Linux distributions rarely 
include the latest kernel, even if your organization is using the latest 
distribution.  He also conflates desktop and embedded Linux systems:
yes, his embedded OS is small, but an embedded Linux system is going to
be much smaller than a desktop distribution.

While kernel 2.6.5 may contain 5.46 million lines of code (counting 
blank lines and comments), much of that code is unlikely to be present 
in an embedded system.  After all, 2.72 million lines of code (49.8%) 
are drivers, 414,243 lines (7.6%) are sound-related, and
another 514,262 lines (9.4%) are filesystem-related.  You're going to
build your embedded system with the hardware drivers and filesystems 
that you need, not every possible device and obscure filesystem 
available.  The same is true for userspace setuid programs, which I'll 
not count as I'm not sure which ones would be necessary for the types of 
systems under discussion.
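A quick back-of-the-envelope check of those figures (an illustrative sketch only: the totals are the 2.6.5 counts quoted above, and a real embedded configuration would of course keep at least a few drivers and one filesystem):

```python
# Rough upper bound on the audit surface of a trimmed embedded kernel,
# using the 2.6.5 line counts quoted above (blanks and comments included).
total = 5_460_000          # ~5.46 million lines in the full tree
drivers = 2_720_000        # ~49.8% -- most dropped in an embedded config
sound = 414_243            # ~7.6%  -- usually absent entirely
filesystems = 514_262      # ~9.4%  -- all but one or two dropped

# Extreme case: strip all three subsystems completely.
remaining = total - drivers - sound - filesystems
print(f"remaining: {remaining:,} lines "
      f"({100 * remaining / total:.1f}% of the tree)")
# -> remaining: 1,811,495 lines (33.2% of the tree)
```

Even this crude bound cuts the code to be audited by roughly two-thirds, before any trimming of userspace.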

In summary, there are both fewer audit occasions and fewer lines of
source code (and bytes of object code) to audit than he claims.  While
auditing Linux is more difficult than auditing a smaller embedded OS,
his claims are overblown: you only need to audit the parts and versions
of the kernel (and OS) that you actually install and use, and only when
you install a new version.

--
James Walden, Ph.D.
Visiting Assistant Professor of EECS
The University of Toledo @ LCCC
http://www.eecs.utoledo.edu/~jwalden/