Seth David Schoen <[EMAIL PROTECTED]> wrote:
> R. Hirschfeld writes:
>
>>> From: "Peter N. Biddle" <[EMAIL PROTECTED]>
>>> Date: Mon, 5 Aug 2002 16:35:46 -0700
>>
>>> You can know this to be true because the
>>> TOR will be made available for review and thus you can read the
>>> source and decide for yourself if it behaves this way.
>>
>> This may be a silly question, but how do you know that the source
>> code provided really describes the binary?
>>
>> It seems too much to hope for that if you compile the source code
>> then the hash of the resulting binary will be the same, as the
>> binary would seem to depend somewhat on the compiler and the
>> hardware you compile on.
>
> I heard a suggestion that Microsoft could develop (for this purpose)
> a provably-correct minimal compiler which always produced identical
> output for any given input.  If you believe the proof of correctness,
> then you can trust the compiler; the compiler, in turn, should produce
> precisely the same nub when you run it on Microsoft's source code as
> it did when Microsoft ran it on Microsoft's source code (and you can
> check the nub's hash, just as the SCP can).

We are talking internally about doing exactly what is suggested here. We
have also thought about working with someone to offer a free compilation
service in conjunction with the compiler, so you could compare your results
on your compiler with someone else's results on theirs - they could provide
you with a benchmark, so to speak. This directly bears on how people assert
the trustworthiness of their applications, so whether or not we do it, for
certain applications some kind of system is bound to spring up.
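
Roughly, the end-user check would look something like the sketch below.
This is purely illustrative - the hash algorithm (SHA-1 here) and the
command-line shape are assumptions on my part, not a description of any
actual tool or service - but it shows how small the user-side step is:
hash the binary you built (or were given) and compare it against the
published reference value.

# Illustrative sketch only: compare a locally built nub binary against
# a published reference hash.  Algorithm and interface are assumed.
import hashlib
import sys

def digest_of(path):
    # Hex SHA-1 digest of a file, read in chunks.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    local_binary, reference_hash = sys.argv[1], sys.argv[2]
    local = digest_of(local_binary)
    if local == reference_hash:
        print("Match: your build reproduces the published binary.")
    else:
        print("Mismatch: compiler, options, or source differ.")
        print("  local:     " + local)
        print("  reference: " + reference_hash)

Of course the hard part is everything upstream of this comparison - getting
a compiler deterministic enough that the hashes have any hope of matching in
the first place.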

> I don't know for sure whether Microsoft is going to do this, or is
> even capable of doing this.  It would be a cool idea.  It also isn't
> sufficient to address all questions about deliberate malfeasance.
> Back in the Clipper days, one question about Clipper's security was
> "how do we know the Clipper spec is secure?" (and the answer actually
> turned out to be "it's not").  But a different question was "how do
> we know that this tamper-resistant chip produced by Mykotronix even
> implements the Clipper spec correctly?".
>
> The corresponding questions in Palladium are "how do we know that the
> Palladium specs (and Microsoft's nub implementation) are secure?" and
> "how do we know that this tamper-resistant chip produced by a
> Microsoft contractor even implements the Palladium specs correctly?".

There are two parts to answering the first question:

1) People (many people, the more the merrier) need to understand the code
and what it does, and thus be in a position to make an informed decision
about whether or not they trust it.
2) People need to review the code, find security flaws, and report them so
that we can fix them.

These are two very different things.  I don't think that anyone should count
on the goodwill of the general populace to make their code provably secure.
I think that paying people who are experts at securing code to find exploits
in it must be part of the development process.

We're also making some changes to our internal engineering processes that we
hope will help us identify security bugs earlier in the development cycle.
For example, we are generating our header files from specs (yes, actual
specs!). Any change to an interface must first be made to the spec, the spec
must be reviewed, and only then will the change be propagated into the code,
at which point the code itself is reviewed. This doesn't fix coding bugs; it
does, however, aim squarely at fixing design and architectural flaws. We will
make the specs & software widely available. (That doesn't make the code more
secure, of course, but it does provide a basis for analysis & comparison.)
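
To give a flavor of what "header files from specs" means in practice, here
is a toy sketch. None of this is our actual spec format or tooling - the
one-prototype-per-line spec and the generator below are made up for
illustration - but the shape is the same: the spec is the only artifact you
edit, and the header is machine-generated from it.

# Toy illustration (not the real spec format or tool): emit a C header
# from a simple interface spec with one function prototype per line.
import sys

TEMPLATE = """/* Generated from {spec} - do not edit by hand.
 * Interface changes go into the spec first, get reviewed there,
 * and only then are regenerated into this header. */
#ifndef {guard}
#define {guard}

{prototypes}

#endif /* {guard} */
"""

def generate_header(spec_path, header_path):
    with open(spec_path) as f:
        # Keep non-empty, non-comment spec lines as prototypes.
        decls = [s for s in (ln.strip() for ln in f)
                 if s and not s.startswith("#")]
    prototypes = "\n".join(d + ";" for d in decls)
    guard = header_path.upper().replace(".", "_").replace("/", "_")
    with open(header_path, "w") as out:
        out.write(TEMPLATE.format(spec=spec_path, guard=guard,
                                  prototypes=prototypes))

if __name__ == "__main__":
    generate_header(sys.argv[1], sys.argv[2])

The tooling is beside the point; what matters is that an interface can only
change by changing the spec, which forces every such change through a review
before it ever reaches the code.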

                    --bal

> In that sense, TCPA or Palladium can _reduce_ the size of the hardware
> trust problem (you only have to trust a small number of components,
> such as the SCP), and nearly eliminate the software trust problem, but
> you still don't have an independent means of verifying that the logic
> in the tamper-resistant chip performs according to its specifications.
> (In fact, publishing the plans for the chip would hardly help there.)
>
> This is a sobering thought, and it's consistent with ordinary security
> practice, where security engineers try to _reduce_ the number of
> trusted system components.  They do not assume that they can eliminate
> trusted components entirely.  In fact, any demonstration of the
> effectiveness of a security system must make some assumptions,
> explicit or implicit.  As in other reasoning, when the assumptions are
> undermined, the demonstration may go astray.
>
> The chip fabricator can still -- for example -- find a covert channel
> within a protocol supported by the chip, and use that covert channel
> to leak your keys, or to leak your serial number, or to accept secret,
> undocumented commands.
>
> This problem is actually not any _worse_ in Palladium than it is in
> existing hardware.  I am typing this in an ssh window on a Mac laptop.
> I can read the MacSSH source code (my client) and the OpenSSH source
> code (the server listening at the other end), and I can read specs for
> most of the software and most of the parts which make up this laptop,
> but I can't independently verify that they actually implement the
> specs, the whole specs, and nothing but the specs.
>
> As Ken Thompson pointed out in "Reflections on Trusting Trust", the
> opportunities for introducing backdoors in hardware or software run
> deep, and can conceivably survive multiple generations, as though they
> were viruses capable of causing Lamarckian mutations which cause the
> cells of future generations to produce fresh virus copies.  Even if I
> have a Motorola databook for the CPU in this iBook, I won't know
> whether the microcode inside that CPU is compliant with the spec, or
> whether it might contain back doors which can be used against me
> somehow.  It's technically conceivable that the CPU microcode on this
> machine understands MacOS, ssh, vt100, and vi, and is programmed to
> detect BWA HA HA! arguments about trusted computing and invisibly
> insert errors into them.  I would never know.
>
> This problem exists with or without Palladium.  Palladium would
> provide a new place where a particular vendor could put
> security-critical (trusted) logic without direct end-user
> accountability.  But there are already several such places in the
> PC.  I don't think that trust-bootstrapping problem can ever be
> overcome, although maybe it's possible to chip away at it.  There is
> a much larger conversation about trusted computing in general, which
> we ought to be having:
>
> What would make you want to enter sensitive information into a
> complicated device, built by people you don't know, which you can't
> take apart under a microscope?
>
> That device doesn't have to be a computer.


---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
