| >>> We've met the enemy, and he is us.  *Any* secure computing kernel
| >>> that can do
| >>> the kinds of things we want out of secure computing kernels, can also
| >>> do the
| >>> kinds of things we *don't* want out of secure computing kernels.
| >>
| >> I don't understand why you say that.  You can build perfectly good
| >> secure computing kernels that don't contain any support for remote
| >> attestation.  It's all about who has control, isn't it?
| >>
| >There is no control of your system with remote attestation. Remote
| >attestation simply allows the distant end of a communication to
| >determine if your configuration is acceptable for them to communicate
| >with you.
| But you missed my main point.  Leichter claims that any secure kernel is
| inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
| My main point is that this is simply not so.
| There are two very different pieces here: that of a secure kernel, and
| that of remote attestation.  They are separable.  TCPA and Palladium
| contain both pieces, but that's just an accident; one can easily imagine
| a Palladium-- that doesn't contain any support for remote attestation
| whatsoever.  Whatever you think of remote attestation, it is separable
| from the goal of a secure kernel.
| This means that we can have a secure kernel without all the harms.
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.  Leichter's claim
| is wrong....
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

The issues have been discussed by others in this stream of messages, but
let's pull them together.  Suppose I wished to put together a secure system.
I choose my open-source software, perhaps relying on the word of others,
perhaps also checking it myself.  I choose a suitable hardware base.  I put
my system together, install my software - voila, a secure system.  At least,
it's secure at that moment in time.  How do I know, the next time I come to
use it, that it is *still* secure - that no one has slipped in and modified
the hardware, or found a bug and modified the software?

I can go for physical security.  I can keep the device with me all the time,
or lock it in a secure safe.  I can build it using tamper-resistant and
tamper-evident mechanisms.  If I go with the latter - *much* easier - I have
to actually check the thing before using it, or the tamper evidence does me
no good ... which acts as a lead-in to the more general issue.

Hardware protections are fine, and essential - but they can only go so far.
I really want a software self-check.  This is an idea that goes way back:
Just as the hardware needs to be both tamper-resistant and tamper-evident,
so for the software.  Secure design and implementation gives me
tamper-resistance.  The self-check gives me tamper evidence.  The system must be able
to prove to me that it is operating as it's supposed to.
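To make the self-check idea concrete, here is a minimal sketch in Python - not TCPA's actual mechanism, just the underlying principle.  The component names and baseline scheme are illustrative: the system records digests of its software while it is known-good, and later re-measures and compares.  A mismatch is the tamper evidence.

```python
import hashlib

def measure(data: bytes) -> str:
    """Return a SHA-256 digest of a software component's bytes."""
    return hashlib.sha256(data).hexdigest()

def self_check(components: dict, baseline: dict) -> bool:
    """Tamper evidence: every component must match its known-good digest."""
    return all(measure(blob) == baseline.get(name)
               for name, blob in components.items())

# Baseline recorded at a moment when the system was known to be secure.
kernel = {"kernel.bin": b"trusted kernel image", "init.cfg": b"boot config"}
baseline = {name: measure(blob) for name, blob in kernel.items()}

assert self_check(kernel, baseline)        # untouched system: check passes

kernel["kernel.bin"] = b"trojaned kernel image"
assert not self_check(kernel, baseline)    # modified software: evidence of tampering
```

Of course, the check is only as trustworthy as the code that runs it and the path that reports its result - which is exactly the problem the next paragraphs take up.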

OK, so how do I check the tamper-evidence?  For hardware, either I have to be
physically present - I can hold the box in my hand and see that no one has
broken the seals - or I need some kind of remote sensor.  The remote sensor
is a hazard:  Someone can attack *it*, at which point I lose my
tamper-evidence.

There's no way to directly check the software self-check features - I can't
directly see the contents of memory! - but I can arrange for a special highly-
secure path to the self-check code.  For a device I carry with me, this could
be as simple as a "self-check passed" LED controlled by dedicated hardware
accessible only to the self-check code.  But how about a device I may need
to access remotely?  It needs a kind of remote attestation - though a
strictly limited one, since it need only be able to attest proper operation
*to me*.  Still, you can see the slope we are on.
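The "attest proper operation *to me*" step can be sketched as a simple challenge-response: the device holds a secret shared only with its owner, and answers a fresh nonce with a MAC over its current measurement.  This is only an illustration under assumed names (OWNER_KEY, the digest strings); a real device would keep the key in hardware the self-check code alone can reach.

```python
import hmac, hashlib, os

OWNER_KEY = os.urandom(32)  # secret shared only between owner and device

def attest(device_key: bytes, nonce: bytes, measurement: bytes) -> bytes:
    """Device answers a fresh challenge with a MAC over its measurement."""
    return hmac.new(device_key, nonce + measurement, hashlib.sha256).digest()

def verify(owner_key: bytes, nonce: bytes, expected: bytes, response: bytes) -> bool:
    """Owner checks the response against the measurement they expect."""
    return hmac.compare_digest(attest(owner_key, nonce, expected), response)

nonce = os.urandom(16)                     # fresh nonce defeats replay
good = b"digest-of-known-good-software"
resp = attest(OWNER_KEY, nonce, good)

assert verify(OWNER_KEY, nonce, good, resp)
assert not verify(OWNER_KEY, nonce, b"digest-of-tampered-software", resp)
```

Because the key is shared with no one else, this attests only to the owner - nothing here lets a third party demand proof of anything.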

The slope gets steeper.  *Some* machines are going to be shared.  Somewhere
out there is the CVS repository containing the secure kernel's code.  That
machine is updated by multiple developers - and I certainly want *it* to be
running my security kernel!  The developers should check that the machine is
configured properly before trusting it, so it should be able to give a
trustworthy indication of its own trustworthiness to multiple developers.
This *could* be based on a single secret shared among the machine and all
the developers - but would you really want it to be?  Wouldn't it be better
if each developer shared a unique secret with the machine?
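The per-developer variant is a one-line change to the sketch above - a table of keys instead of one secret.  The developer names and keying scheme here are purely illustrative; the point is the containment property a compromise then has.

```python
import hmac, hashlib, os

# One key per developer, rather than a single secret shared by all.
dev_keys = {dev: os.urandom(32) for dev in ("alice", "bob", "carol")}

def attest_to(dev: str, nonce: bytes, measurement: bytes) -> bytes:
    """The shared machine attests its state to one particular developer."""
    return hmac.new(dev_keys[dev], nonce + measurement, hashlib.sha256).digest()

nonce = os.urandom(16)
state = b"digest-of-repository-machine-state"

# Stealing Bob's key lets an attacker forge attestations only *to Bob*;
# responses keyed to Alice or Carol remain unforgeable.
assert attest_to("alice", nonce, state) != attest_to("bob", nonce, state)
```

With a single shared secret, any one developer's compromised laptop would let an attacker fool every developer at once.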

You can, indeed, stop anywhere along this slope.  You can decide you really
don't need remote attestation, even for yourself - you'll carry the machine
with you, or only use it when you are physically in front of it.  Or you
can decide you have no interest in ever sharing the machine.  Or you can
decide that you'll share the machine by using a single shared secret.

The problem is that others will want to make different choices.  The world is
unlikely to support the development of multiple distinct secure kernels.
Instead, there will be one such kernel - and configuration options.  You want
a local "OK" light only?  Fine, configure your machine that way.  You want
a shared-via-shared-secret common machine?  OK.  In that sense, you're
absolutely right:  You will be able to have a secure kernel that doesn't, as
you configured it, support the TCPA "baddies".

But once such a kernel is widely available, anyone will also be able to
configure it to provide everything up to remote attestation as well.  In
fact, there will be reasons why you'll want to do that anyway - regardless of
whether you want to do business with those who insist that your system attest
itself to them.  This will make it practical - in fact, easy - for businesses
to insist that you enable these features if you want to talk to them.

Consider browser cookies.  Every browser today supports them.  Everyone knows
that.  Some sites don't work correctly if cookies are disabled.  Some (eBay
is, I believe, an example) are quite explicit that they will not deal with you
if you don't enable cookies.  If enabling cookies required you to load and run
some special software on your machine, sites would have a much harder time
enforcing such a requirement.  As it is, they have no problem at all.

In the same way, I claim, if secure kernel software becomes widely used, it
*will* have a "remote attestation-shaped hole" - and the block to fill that
hole will be readily available.  You will have the choice not to put the
block in the hole - but then you have the option of disabling remote
attestation in TCPA, too.  Whether that choice will be one you can effectively
exercise is an entirely different question.  My opinion is that we are all too
likely to find that choice effectively removed by political and market forces.
It's those forces that need to be resisted.  The ubiquity of readily available
remote attestation is probably a foregone conclusion; the loss of any choice
in how to respond to that might not be.

                                                        -- Jerry

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
