On Mon, 15 Dec 2003, Jerrold Leichter wrote:

> | This is quite an advantage of smart cards.
> However, this advantage is there only because there are so few smart cards,
> and so few smart card enabled applications, around.

Strangely enough, Carl Ellison assumed that you would have at most one
smart card anyway. I'd rather say that you are right here.

> Really secure mail *should* use its own smart card.  When I do banking, do
> I have to remove my mail smart card?  Encryption of files on my PC should
> be based on a smart card.  Do I have to pull that one out?  Does that mean
> I can't look at my own records while I'm talking to my bank?  If I can only
> have one smart card in my PC at a time, does that mean I can *never* cut and
> paste between my own records and my on-line bank statement?  To access my
> files and my employer's email system, do I have to trust a single
> smart card to hold both sets of secrets?

I agree with you: Finding a good compromise between security and
convenience is an issue when you are switching between different smart
cards. E.g., I could imagine using the smart card *once* when logging
into my bank account, and then needing it again only, perhaps, to
authorise a money transfer.

This is a difficult user interface issue, but something we should be able
to solve.
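To make the idea concrete, here is a minimal sketch of such a flow. All
names are hypothetical, and the card's signing operation is stood in for
by an HMAC; a real card would keep its key in tamper-resistant hardware.
The card is touched once at login and once per transfer, while read-only
access rides on the session:

```python
import hmac, hashlib, os, secrets

CARD_SECRET = os.urandom(32)          # stays on the smart card

def card_sign(data: bytes) -> bytes:
    """Stand-in for the smart card's signing operation."""
    return hmac.new(CARD_SECRET, data, hashlib.sha256).digest()

class BankSession:
    def __init__(self):
        self.challenge = secrets.token_bytes(16)
        self.session_key = None

    def login(self, signature: bytes) -> bool:
        # Convenience: the card is needed exactly once, at login.
        if hmac.compare_digest(signature, card_sign(self.challenge)):
            self.session_key = secrets.token_bytes(32)
        return self.session_key is not None

    def view_statement(self) -> str:
        # Browsing my records needs no card at all.
        return "statement" if self.session_key else "denied"

    def transfer(self, order: bytes, signature: bytes) -> str:
        # Security: each money transfer needs a fresh card signature.
        if self.session_key and hmac.compare_digest(signature, card_sign(order)):
            return "authorised"
        return "denied"

s = BankSession()
assert s.login(card_sign(s.challenge))
assert s.view_statement() == "statement"
assert s.transfer(b"pay 100 EUR", card_sign(b"pay 100 EUR")) == "authorised"
assert s.transfer(b"pay 100 EUR", b"x" * 32) == "denied"
```

The point of the design is that the card can be removed after login
without killing the session; only operations that move money force you
to plug it back in.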

One problem of TCPA is the opposite user interface issue -- the user has
lost control over what is going on. (And I believe this is the origin of
much of the resistance against TCPA.)

> Ultimately, to be useful a trusted kernel has to be multi-purpose, for
> exactly the same reason we want a general-purpose PC, not a whole bunch
> of fixed-function appliances.  Whether this multi-purpose kernel will
> be inside the PC, or a separate unit I can unplug and take with me, is a
> separate issue.  Given the current model for PCs, a separate "key" is
> probably a better approach.

> However, there are already experiments with "PC in my pocket" designs:
> A small box with the CPU, memory, and disk, which can be connected to a
> small screen to replace a palmtop, or into a unit with a big screen, a
> keyboard, etc., to become my desktop.  Since that small box would have
> all my data, it might make sense for it to have the trusted kernel.
> (Of course, I probably want *some* part to be separate to render the box
> useless if stolen.)

Agreed again!

> | There is nothing wrong with the idea of a trusted kernel, but "trusted"
> | means that some entity is supposed to "trust" the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | "trust" such a kernel, something very very fishy is going on.
> Why?  If I'm going to use a time-shared machine, I have to trust that the
> OS will keep me protected from other users of the machine.  All the other
> users have the same demands.  The owner of the machine has similar demands.

Actually, all users have to trust the owner (or rather the sysadmin).

The key words are "have to trust"! As you wrote somewhere below:

> Part of the issue with TCPA is that the providers of the kernel that we
> are all supposed to trust blindly are also going to be among those who
> will use it heavily.  Given who those producers are, that level of trust
> is unjustifiable.

I entirely agree with you!

> | More than ten years ago, Chaum and Pedersen proposed the following
> | wallet-with-observer architecture:
> |
> |    +---------------+         +---------+         +---------------+
> |    | Outside World | <-----> | Your PC | <-----> | TCPA-Observer |
> |    +---------------+         +---------+         +---------------+
> |
> | TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and is
> | thus open to abuse.

> I remember looking at this paper when it first appeared, but the details
> have long faded.  It's an alternative mechanism for creating trust:
> Instead of trusting an open, independently-produced, verified
> implementation, it uses cryptography to construct walls around a
> proprietary, non-open implementation that you have no reason to trust.

Please re-read the paper!

First, it is not a mechanism for *creating* trust.

It is rather a trust-avoidance mechanism! You are not trusting the
observer at all, and you don't need to. The outsider is not trusting you
or your PC at all, and she doesn't need to.

Second, how on earth did you get the impression that Chaum/Pedersen is
about proprietary, non-open implementations?

Nothing stops people from producing independent and verified
implementations. As a matter of fact, since people can concentrate on
writing independent and verified implementations for the software on
"Your PC", providing an independently produced and verified
implementation would be much, much simpler than ever providing such an
implementation for the TCPA hardware.

Independent implementations of the observer's soft- and hardware would
be simpler than in the case of TCPA as well, but this is a minor issue:
you don't need to trust the observer, so you don't care about
independent and verified implementations of it.

With a Chaum/Pedersen style scheme, the chances of ever getting
independent (and possibly verified) implementations are much better than
in the case of TCPA.

As I wrote in my reply to Carl Ellison's response, one of the main
advantages of the Chaum/Pedersen style approach is a clear separation of
duties. TCPA lacks this separation, and this is a sign of bad security
design.
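To illustrate that separation, here is a hypothetical sketch (all class
and method names are mine, and the "signature" is a trivial placeholder,
not real cryptography). The observer never talks to the outside world
directly; every message passes through, and is vetted by, your PC:

```python
class Observer:
    """Tamper-resistant module; the bank trusts it, you need not."""
    def respond(self, challenge: str) -> str:
        return f"signed({challenge})"

class YourPC:
    """You trust only this part; it mediates all observer traffic."""
    def __init__(self, observer: Observer):
        self.observer = observer
        self.log = []

    def forward(self, challenge: str) -> str:
        # The PC sees every message in both directions, so the observer
        # has no covert channel through which to leak your identity or
        # your spending habits to the outside world.
        self.log.append(("in", challenge))
        reply = self.observer.respond(challenge)
        self.log.append(("out", reply))
        return reply

class OutsideWorld:
    """E.g. the bank: it trusts the observer's signature, not your PC."""
    def verify(self, challenge: str, reply: str) -> bool:
        return reply == f"signed({challenge})"

pc = YourPC(Observer())
bank = OutsideWorld()
reply = pc.forward("challenge-42")
assert bank.verify("challenge-42", reply)
assert len(pc.log) == 2   # the PC audited both directions
```

Each party trusts only its own component, which is exactly the
separation of duties that TCPA gives up by fusing "Your PC" and the
observer into one kernel.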

Stefan Lucks      Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
            e-mail: [EMAIL PROTECTED]
            home: http://th.informatik.uni-mannheim.de/people/lucks/
------  I  love  the  smell  of  Cryptanalysis  in  the  morning!  ------

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
