On Mon, Aug 12, 2002 at 12:38:42AM -0700, Brian A. LaMacchia wrote:
| > I don't know for sure whether Microsoft is going to do this, or is
| > even capable of doing this.  It would be a cool idea.  It also isn't
| > sufficient to address all questions about deliberate malfeasance.
| > Back in the Clipper days, one question about Clipper's security was
| > "how do we know the Clipper spec is secure?" (and the answer actually
| > turned out to be "it's not").  But a different question was "how do
| > we know that this tamper-resistant chip produced by Mykotronix even
| > implements the Clipper spec correctly?".
| >
| > The corresponding questions in Palladium are "how do we know that the
| > Palladium specs (and Microsoft's nub implementation) are secure?" and
| > "how do we know that this tamper-resistant chip produced by a
| > Microsoft contractor even implements the Palladium specs correctly?".
| There are two parts to answering the first question:
| 1) People (many people, the more the merrier) need to understand the code
| and what it does, and thus be in a position to be able to make an informed
| decision about whether or not they trust it.
| 2) People reviewing the code, finding security flaws, and then reporting
| them so that we can fix them
| These are two very different things.  I don't think that anyone should count
| on the goodwill of the general populace to make their code provably secure.
| I think that paying people who are experts at securing code to find exploits
| in it must be part of the development process.

How are these different?  If I'm understanding the code to decide whether I
trust it (item 1), it seems to me that I must do at least 2A and 2B;
2C is optional :)

Or are you saying that (2) is done by internal folks, and thus is a
smaller set than (1)?


"It is seldom that liberty of any kind is lost all at once."

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
