Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-09 Thread Peter Gutmann
Seth David Schoen <[EMAIL PROTECTED]> writes:

>You could conceivably have a PC where the developers don't trust Linus, but
>instead trust the PC manufacturer.  The PC manufacturer could have made it
>extremely expensive for Linus to tamper with the PC in order to "violate [the
>developers'] security model".  (It isn't logically impossible, it's just
>extremely expensive.  Perhaps it costs millions of dollars, or something.)

Do you trust the PC manufacturer though?  Let's assume:

  - The security target is DRM.
  - The users of the security are content providers.
  - The threat model/attackers are consumers.

In other words the content providers have to trust the PC manufacturer to
provide security to restrict what the consumer can do.  The PC manufacturer's
motivation though is to enhance their own revenue stream by selling as much
hardware as they can, not to enhance the content provider's revenue stream by
making it as crippled as they can.  If you look at the DVD market, every DVD
player made outside the US has some form of vulcan nerve pinch that you can
apply that'll remove the enforcement of region coding.  Depending on your
country's view of differential pricing enforcement mechanisms, this may be
enabled by default (I don't know if you can buy a DVD player in NZ that
enforces region coding; if I were more of an entrepreneur, I could start a
good business exporting universal-region, universal-format players to the
US :-), or at least it will be a widely-known secret that everyone applies
ten minutes
after they first plug the device in.  Given that the goals of the hardware
vendor and the user (= content provider) are mutually exclusive, I wouldn't
trust the hardware vendor.  Conversely, as a consumer, I would trust the
hardware vendor to provide backdoors to bypass the DRM in order to sell more
units after the first year or two, when it's become a commodity and they need
some other way of competing for sales than the novelty value.

Peter.



RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-07 Thread lynn
aka ... in some sense the reply
http://www.garlic.com/~lynn/aadsm17.htm#0
is also attempting to keep the business processes of identification and
authentication separate. Will it continue to be permissible to have
authentication events (I can prove that I'm authorized to do something)
without also mandating that proof of identity be required whenever an
authentication operation occurs?

today, I supposedly can open an ISP account, provide proof of payment and 
supply a password for that account (w/o also having to provide a gene 
pattern for identification). Would it ever be possible to simply substitute 
a digital signature and public key in lieu of a password when opening an 
account (where that digital signature may have been performed by a hardware 
token)?
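A minimal sketch of the substitution being asked about, assuming the
pyca/cryptography package; the account name, functions, and protocol details
are hypothetical illustrations, not any existing ISP interface:

    # Sketch: opening an account by registering a public key instead of a
    # password.  Hypothetical names; assumes the pyca/cryptography package.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    accounts: dict[str, Ed25519PublicKey] = {}  # ISP side: name -> public key

    def open_account(name: str, public_key: Ed25519PublicKey) -> None:
        # Enrollment stores only the public key -- no password, and nothing
        # identifying the person behind the key.
        accounts[name] = public_key

    def login(name: str) -> bool:
        # Challenge-response: the ISP learns only that the client holds the
        # registered private key (authentication without identification).
        challenge = os.urandom(32)
        signature = token_sign(challenge)  # performed by the hardware token
        try:
            accounts[name].verify(signature, challenge)
            return True
        except InvalidSignature:
            return False

    # Client side: the private key could live in a hardware token and never
    # leave it; only signatures cross the interface.
    _token_key = Ed25519PrivateKey.generate()

    def token_sign(challenge: bytes) -> bytes:
        return _token_key.sign(challenge)

    open_account("lynn", _token_key.public_key())
    assert login("lynn")

Note the ISP ends up holding nothing that could impersonate the customer,
which is exactly the separation between authentication and identification
the post is asking about.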

will it continue to be possible in the future to have separation between
authentication and identification business processes ... or will things
change at some point so that every authentication event also requires
identification?



--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread Jerrold Leichter
| David Wagner writes:
|
| > To see why, let's go back to the beginning, and look at the threat
| > model.  If multiple people are doing shared development on a central
| > machine, that machine must have an owner -- let's call him Linus.  Now
| > ask yourself: Do those developers trust Linus?
| >
| > If the developers don't trust Linus, they're screwed.  It doesn't matter how
| > much attestation you throw at the problem, Linus can always violate their
| > security model.  As always, you've got to trust "root" (the system
| > administrator); nothing new here.
| >
| > Consequently, it seems to me we only need to consider a threat model
| > where the developers trust Linus.  (Linus need not be infallible, but the
| > developers should believe Linus won't intentionally try to violate their
| > security goals.)  In this case, owner-directed attestation suffices.
| > Do you see why?  Linus's machine will produce an attestation, signed
| > by Linus's key, of what software is running.  Since the developers trust
| > Linus, they can then verify this attestation.  Note that the developers
| > don't need to trust each other, but they do need to trust the owner/admin
| > of the shared box.  So, it seems to me we can get by without third-party
| > attestation.
|
| You could conceivably have a PC where the developers don't trust
| Linus, but instead trust the PC manufacturer.  The PC manufacturer
| could have made it extremely expensive for Linus to tamper with the PC
| in order to "violate [the developers'] security model".  (It isn't
| logically impossible, it's just extremely expensive.  Perhaps it costs
| millions of dollars, or something.)
Precisely - though see below.

| There are computers like that today.  At least, there are devices that can
| run software, that are highly tamper-resistant, and that can do attestations.
Smart cards are intended to work this way, too.

| (Now there is an important question about what the cost to do a hardware
| attack against those devices would be.)  It seems to me to be a good thing
| that the ordinary PC is not such a device.  (Ryan Lackey, in a talk
| about security for colocated machines, described using devices like
| these for colocation where it's not appropriate or desirable to rely on
| the physical security of the colocated machine.  Of course, strictly
| speaking, all security always relies on physical security.)
This kind of thing goes *way* back.  In the '70's, there was a company - I
think the name was BASIC 4 - that sold a machine with two privileged levels.
The OS ran at level 1 (user code at unprivileged level 2, of course).  There
were some things - like, probably, accounting - that ran at level 0.  Even
with physical access to the machine, it was supposed to be difficult to do
anything to level 0 - unless you had a (physical) key to use in the lock
on the front panel.  The machine was intended as a replacement for the then-
prevalent time-sharing model:  An application developer would buy machines
from the manufacturer, load them with application environments, and sell
application services.  Users of the machines could use the applications with
fast local access, even do development - but could not modify the basic
configuration.  I know the company vanished well before networks got fast
enough, and PCs cheap enough, that the business model stopped making any
sense; but I know nothing of the details.

| I don't know how the key management works in these devices.  If the
| keys used to sign attestations are loaded by (or known to) the device
| owner, it wouldn't help with the case where the device owner is
| untrusted.  If the keys are loaded by the manufacturer, it might
| support a model where the owner is untrusted and the manufacturer is
| trusted.
There's no more reason that the manufacturer has to be trusted than that the
manufacturer of a safe has to be trusted (at least in the sense that neither
needs to know the keys/combination on any particular machine/safe).  If
machines like this are to be built, they should require some special physical
override to allow the keys to be configured.  A key lock is still good
technology for this purpose:  It's a very well-understood technology, and its
simplicity is a big advantage.  A combination lock might be easier to
integrate securely, for the same basic reason that combination locks became
the standard for bank vaults:  No need for an open passageway from the outside
to the inside.  (In the bank vault case, this passageway was a great way to
get nitroglycerin inside the locking mechanism.)  In either case, you could
(like a bank) use a form of secret sharing, so that only a trusted group of
people - with multiple keys, or multiple parts of the combination - could
access the key setup mode.  Given this, there is no reason why a machine fresh
from the manufacturer need have any embedded keys.
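A minimal sketch of the secret-sharing idea just mentioned, assuming a simple
XOR-based n-of-n split (every share required to reconstruct); a real design
might prefer a k-of-n threshold scheme such as Shamir's so that a lost key
doesn't lock out setup mode. All names are illustrative:

    # Sketch: XOR-based n-of-n splitting of the key-setup combination, so
    # that only the full trusted group can enter key-configuration mode.
    import os

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(secret: bytes, n: int) -> list[bytes]:
        # n-1 random shares; the last share forces the XOR of all n shares
        # to equal the secret.  Any n-1 shares look uniformly random.
        shares = [os.urandom(len(secret)) for _ in range(n - 1)]
        last = secret
        for s in shares:
            last = xor_bytes(last, s)
        return shares + [last]

    def combine(shares: list[bytes]) -> bytes:
        out = bytes(len(shares[0]))
        for s in shares:
            out = xor_bytes(out, s)
        return out

    combination = b"31-17-42"        # the (hypothetical) setup combination
    shares = split(combination, 3)   # one share per trusted officer
    assert combine(shares) == combination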

Will machines like this be built?  Probably not, except for special purposes.
The TCPA machines will likely require you (and

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread Seth David Schoen
David Wagner writes:

> To see why, let's go back to the beginning, and look at the threat
> model.  If multiple people are doing shared development on a central
> machine, that machine must have an owner -- let's call him Linus.  Now
> ask yourself: Do those developers trust Linus?
> 
> If the developers don't trust Linus, they're screwed.  It doesn't matter how
> much attestation you throw at the problem, Linus can always violate their
> security model.  As always, you've got to trust "root" (the system
> administrator); nothing new here.
> 
> Consequently, it seems to me we only need to consider a threat model
> where the developers trust Linus.  (Linus need not be infallible, but the
> developers should believe Linus won't intentionally try to violate their
> security goals.)  In this case, owner-directed attestation suffices.
> Do you see why?  Linus's machine will produce an attestation, signed
> by Linus's key, of what software is running.  Since the developers trust
> Linus, they can then verify this attestation.  Note that the developers
> don't need to trust each other, but they do need to trust the owner/admin
> of the shared box.  So, it seems to me we can get by without third-party
> attestation.

You could conceivably have a PC where the developers don't trust
Linus, but instead trust the PC manufacturer.  The PC manufacturer
could have made it extremely expensive for Linus to tamper with the PC
in order to "violate [the developers'] security model".  (It isn't
logically impossible, it's just extremely expensive.  Perhaps it costs
millions of dollars, or something.)

There are computers like that today.  At least, there are devices that can
run software, that are highly tamper-resistant, and that can do attestations.
(Now there is an important question about what the cost to do a hardware
attack against those devices would be.)  It seems to me to be a good thing
that the ordinary PC is not such a device.  (Ryan Lackey, in a talk
about security for colocated machines, described using devices like
these for colocation where it's not appropriate or desirable to rely on
the physical security of the colocated machine.  Of course, strictly
speaking, all security always relies on physical security.)

I don't know how the key management works in these devices.  If the
keys used to sign attestations are loaded by (or known to) the device
owner, it wouldn't help with the case where the device owner is
untrusted.  If the keys are loaded by the manufacturer, it might
support a model where the owner is untrusted and the manufacturer is
trusted.

-- 
Seth David Schoen <[EMAIL PROTECTED]> | Very frankly, I am opposed to people
 http://www.loyalty.org/~schoen/      | being programmed by others.
 http://vitanuova.loyalty.org/        | -- Fred Rogers (1928-2003),
                                      |    464 U.S. 417, 445 (1984)



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-03 Thread David Wagner
Jerrold Leichter  wrote:
>All of this is fine as long as there is a one-to-one association between
>machines and "owners" of those machines.  Consider the example I gave
>earlier:  A shared machine containing the standard distribution of the
>trusted computing software.  All the members of the group that maintain the
>software will want to have the machine attest, to them, that it is properly
>configured and operating as intended.

I think you may be giving up too quickly.  It looks to me like
this situation can be handled by owner-directed attestation (e.g.,
Owner Override, or Owner Gets Key).  Do you agree?

To see why, let's go back to the beginning, and look at the threat
model.  If multiple people are doing shared development on a central
machine, that machine must have an owner -- let's call him Linus.  Now
ask yourself: Do those developers trust Linus?

If the developers don't trust Linus, they're screwed.  It doesn't matter how
much attestation you throw at the problem, Linus can always violate their
security model.  As always, you've got to trust "root" (the system
administrator); nothing new here.

Consequently, it seems to me we only need to consider a threat model
where the developers trust Linus.  (Linus need not be infallible, but the
developers should believe Linus won't intentionally try to violate their
security goals.)  In this case, owner-directed attestation suffices.
Do you see why?  Linus's machine will produce an attestation, signed
by Linus's key, of what software is running.  Since the developers trust
Linus, they can then verify this attestation.  Note that the developers
don't need to trust each other, but they do need to trust the owner/admin
of the shared box.  So, it seems to me we can get by without third-party
attestation.
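A minimal sketch of that owner-directed flow, with Ed25519 signatures from the
pyca/cryptography package standing in for the TPM's signing operation; the
measurement function and all names are hypothetical:

    # Sketch: owner-directed attestation for the shared development machine.
    # Linus's key signs a digest of the running software; developers who
    # trust Linus verify it.  Assumes pyca/cryptography; names hypothetical.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    linus_key = Ed25519PrivateKey.generate()
    linus_pub = linus_key.public_key()  # published to, and trusted by, the developers

    def measure(software_images: list[bytes]) -> bytes:
        # Stand-in for TPM PCR extension: a running hash over loaded software.
        h = hashlib.sha256()
        for image in software_images:
            h.update(hashlib.sha256(image).digest())
        return h.digest()

    def attest(software_images: list[bytes]) -> tuple[bytes, bytes]:
        digest = measure(software_images)
        return digest, linus_key.sign(digest)

    def developer_verify(digest: bytes, sig: bytes, expected: bytes) -> bool:
        # A valid signature by Linus's key over the expected digest suffices,
        # because the developers trust Linus not to sign a false measurement.
        try:
            linus_pub.verify(sig, digest)
        except InvalidSignature:
            return False
        return digest == expected

    kernel = b"trusted-kernel-build-1.0"
    digest, sig = attest([kernel])
    assert developer_verify(digest, sig, measure([kernel]))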

This scenario sounds pretty typical to me.  Most machines have a single
owner.  Most machines have a system administrator (who must be trusted).
I don't think I'm making unrealistic assumptions.

>You're trying to make the argument that feature X (here, remote attestation for
>multiple mutually-suspicious parties) has no significant uses.  Historically,
>arguments like this are losers.

Yes, this is a fair point.  I suppose I would say I'm arguing that
feature X (third-party attestation) seems to have few significant uses.
It has some uses, but it looks like they are in the minority; for the
most part, it seems that feature X is unnecessary.  At the same time,
many people are worried that feature X comes with significant costs.

At least, this is how it looks to me.  Maybe I've got something wrong.
If these two points are both accurate, this is an interesting observation.
If they're inaccurate, I'd be very interested to hear where they fail.



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-31 Thread Seth David Schoen
David Wagner writes:

> So it seems that third-party-directed remote attestation is really where
> the controversy is.  Owner-directed remote attestation doesn't have these
> policy tradeoffs.
>
> Finally, I'll come back to the topic you raised by noting that your
> example application is one that could be supported with owner-directed
> remote attestation.  You don't need third-party-directed remote
> attestation to support your desired use of remote attestation.  So, TCPA
> or Palladium could easily fall back to only owner-directed attestation
> (not third-party-attestation), and you'd still be able to verify the
> software running on your own servers without incurring new risks of DRM,
> software lock-in, or whatever.
> 
> I should mention that Seth Schoen's paper on Trusted Computing anticipates
> many of these points and is well worth reading.  His notion of "owner
> override" basically converts third-party-directed attestation into
> owner-directed attestation, and thereby avoids the policy risks that so
> many have brought up.  If you haven't already read his paper, I highly
> recommend it.  http://www.eff.org/Infra/trusted_computing/20031001_tc.php

Thanks for the kind words.

Nikita Borisov has proposed an alternative to Owner Override which
Ka-Ping Yee has called "Owner Gets Key", and which is probably the
same as what you're discussing.

Most TC vendors have entered into this with some awareness of the
risks.  For example, the TCPA whitepaper that John Gilmore mentioned
here earlier specifically contemplates punishing people for using
disapproved software, without considering exactly why it is that
people would want to put themselves into a position where they could
be punished for doing that (given that they can't now!).  (In
deference to Unlimited Freedom's observations, it is not logically
impossible that people would ever want to put themselves into that
position; the TCPA whitepaper just didn't consider why they would.)

As a result, I have not had any TC vendor express much interest in
Owner Override or Owner Gets Key.  Some of them correctly pointed out
that there are interesting user interface problems associated with
making this usable yet resistant to social engineering attacks.  There
might be "paternalistic" reasons for not wanting to give end-users the
attestation keys, if you simply don't trust that they will use them
safely.  (But there's probably no technical way to have our cake and
eat it too: if you want to do paternalistic security, you can probably
then abuse it; if you want to give the owner total control, you can't
prevent the owner from falling victim to social engineering.)  Still,
the lack of a totally obvious secure UI hasn't stopped research from
taking place in related areas.  For example, Microsoft is reportedly
still trying to figure out how to make clear to people whether the
source of a particular UI element is the program they think it is, and
how to handle the installation of NGSCB trusted computing agents.
Secure UI is full of thorny problems.

I've recently been concerned about one problem with the Owner Override
or Owner Gets Key approaches.  This is the question of whether they
are particularly vulnerable to a man-in-the-middle attack.

Suppose that I own a computer with the TCG TPM "FOO" and you are a
server operator, and you and I trust each other and believe that we
have aligned interests.  (One example is the case where you are a bank
and we both want to be sure that I am using a pristine, unaltered
computing environment in order to access my account.  Neither of us
will benefit if I can be tricked into making bogus transactions.)

An attacker Mallory owns a computer with the TCG TPM "BAR".  We assume
that Mallory has already compromised my computer (because our ability
to detect when Mallory does that is the whole reason we're using
attestation in the first place).  Mallory replaces my web browser (or
financial software) with a web browser that he has modified to send
queries to him instead of to you, and to contain a root CA certificate
that makes it trust a root CA that Mallory controls.  (Alternatively,
he's just made the new web browser ignore the results of SSL
certificate validation entirely, though that might be easier to detect.)

Now when I go to your web site, my connection is redirected to Mallory's
computer, which proxies it and initiates a connection to you.  You ask
for an attestation as a condition of accessing your service.  Since I
have no particular reason to lie to you (I believe that your reason
for requesting the attestation is aligned with my interest), I direct
my computer to give you an attestation reflecting the actual state of
FOO's PCR values.  This attestation is generated and reflects a
signature by FOO on a set of PCR values that show the results of
Mallory's tampering.  But Mallory does _not_ pass this attestation
along to you.  Instead, Mallory uses Owner Override or Owner Gets Key
to generate a new attestation reflecting the PCR values of a pristine,
unmodified environment, and passes that forged attestation along to you.
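A minimal sketch of that substitution, with Ed25519 signatures (pyca/cryptography)
standing in for TPM attestation keys and all names hypothetical; the point is
that once attestation keys are owner-held, a valid-looking attestation from
*some* certified TPM says nothing about the machine the user is actually
typing on:

    # Sketch: Mallory discards FOO's honest attestation (which would reveal
    # his tampering) and relays one signed with BAR's owner-held key over
    # pristine PCR values.  Assumes pyca/cryptography; names hypothetical.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    PRISTINE = hashlib.sha256(b"unmodified browser").digest()
    TAMPERED = hashlib.sha256(b"mallory's proxying browser").digest()

    foo_key = Ed25519PrivateKey.generate()  # my TPM "FOO"
    bar_key = Ed25519PrivateKey.generate()  # Mallory's TPM "BAR" (key owner-held)

    # The bank trusts any certified TPM key, not one particular machine.
    trusted_tpms = [foo_key.public_key(), bar_key.public_key()]

    def bank_accepts(pcr: bytes, sig: bytes) -> bool:
        for pub in trusted_tpms:
            try:
                pub.verify(sig, pcr)
                return pcr == PRISTINE   # signed by *a* trusted TPM
            except InvalidSignature:
                continue
        return False

    honest = (TAMPERED, foo_key.sign(TAMPERED))  # what I actually produce
    forged = (PRISTINE, bar_key.sign(PRISTINE))  # what Mallory relays instead
    assert not bank_accepts(*honest)  # the truth would expose the tampering
    assert bank_accepts(*forged)      # the substituted attestation passes

Nothing in the signed values ties the attestation to my machine or my
session, so the bank cannot tell that the "pristine" report describes a
computer I am not actually using.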

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-30 Thread Jerrold Leichter
| Rick Wash  wrote:
| >There are many legitimate uses of remote attestation that I would like to
| >see.  For example, as a sysadmin, I'd love to be able to verify that my
| >servers are running the appropriate software before I trust them to access
| >my files for me.  Remote attestation is a good technical way of doing that.
|
| This is a good example, because it brings out that there are really
| two different variants of remote attestation.  Up to now, I've been
| lumping them together, but I shouldn't have been.  In particular, I'm
| thinking of owner-directed remote attestation vs. third-party-directed
| remote attestation.  The difference is who wants to receive assurance of
| what software is running on a computer; the former mechanism allows one to
| convince the owner of that computer, while the latter mechanism allows one
| to convince third parties.
|
| Finally, I'll come back to the topic you raised by noting that your
| example application is one that could be supported with owner-directed
| remote attestation.  You don't need third-party-directed remote
| attestation to support your desired use of remote attestation.  So, TCPA
| or Palladium could easily fall back to only owner-directed attestation
| (not third-party-attestation), and you'd still be able to verify the
| software running on your own servers without incurring new risks of DRM,
| software lock-in, or whatever
All of this is fine as long as there is a one-to-one association between
machines and "owners" of those machines.  Consider the example I gave
earlier:  A shared machine containing the standard distribution of the
trusted computing software.  All the members of the group that maintain the
software will want to have the machine attest, to them, that it is properly
configured and operating as intended.  We can call the group the owner of the
machine, and create a single key pair that all of them know.  But this is
brittle - shared secrets always are.  Any member of the group could then
modify the machine and, using his access to the private key, fake the "all
clear" indication.  Each participant should have his own key pair, since
attestation using a particular key pair only indicates security with respect
to those who don't know the private key of the pair - and a member of a
development team for the secure kernel *should* mistrust his fellow team
members!

So, again, there are simple instances where it will prove useful to be able
to maintain multiple sets of independent key pairs.

Now, in the shared distribution machine case, on one level team members should
be mutually suspicious, but on another they *do* consider themselves joint
owners of the machine - so it doesn't bother them that there are key pairs
to which they don't have access.  After all, those key pairs are assigned to
*other* owners of the machine!  But exactly the same mechanism could be used
to assign a key pair to Virgin Records - who we *don't* want to consider an
owner of the machine.

As long as, by owner, you mean a single person, or a group of people who
completely trust each other (with respect to the security problem we are trying
to solve); and as long as each machine only has only one owner; then, yes, one
key pair will do.  But as soon as "owner" can encompass mutually suspicious
parties, you need multiple independent key pairs - and then how you
use them, and to whom you grant them, becomes a matter of choice and policy,
not technical possibility.

BTW, even with a single owner, multiple independent key pairs may be useful.
Suppose I have reason to suspect that my private key has been leaked.  What
can I do?  If there is only one key pair around, I have to rebuild my machine
from scratch.  But if I had the foresight to generate *two* key pairs, one of
which I use regularly - and the other of which I sealed away in a safe - then
I can go to the safe, get out my "backup" key pair, and re-certify my machine.
In fact, it would probably be prudent for me to generate a whole bunch of
such backup key pairs, just in case.

You're trying to make the argument that feature X (here, remote attestation for
multiple mutually-suspicious parties) has no significant uses.  Historically,
arguments like this are losers.  People come up with uses for all kinds of
surprising things.  In this case, it's not even very hard.

An argument that feature X has uses, but also imposes significant and non-
obvious costs, is another thing entirely.  Elucidating the costs is valuable.
But ultimately individuals will make their own analysis of the cost/benefit
ratio, and their calculations will be different from yours.  Carl Ellison, I
think, argued that TCPA will probably never have large penetration because the
dominant purchasing factor for consumers is always initial cost, and the
extra hardware will ensure that TCPA-capable machines will always be more
expensive.  Maybe he's right.

Even if he isn't, as long as people believe that they have control over the
costs assoc

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread David Wagner
Rick Wash  wrote:
>There are many legitimate uses of remote attestation that I would like to
>see.  For example, as a sysadmin, I'd love to be able to verify that my
>servers are running the appropriate software before I trust them to access
>my files for me.  Remote attestation is a good technical way of doing that.

This is a good example, because it brings out that there are really
two different variants of remote attestation.  Up to now, I've been
lumping them together, but I shouldn't have been.  In particular, I'm
thinking of owner-directed remote attestation vs. third-party-directed
remote attestation.  The difference is who wants to receive assurance of
what software is running on a computer; the former mechanism allows one to
convince the owner of that computer, while the latter mechanism allows one
to convince third parties.

If I understand correctly, TCPA and Palladium provide third-party-directed
remote attestation.  Intel, or Dell, or someone like that will generate
a keypair, embed it inside the trusted hardware that comes with your
computer, and you (the owner) are never allowed to learn the corresponding
private key.  This allows your computer to prove to Intel, or Dell, or
whoever, what software is running on your machine.  You can't lie to them.

In owner-directed remote attestation, you (the owner) would generate the
keypair and you (the owner) would learn the private key -- not Intel, or
Dell, or whoever.  This allows your computer to prove to you what software
is running on your machine.  However, you can't use this to convince Intel,
or Dell, or anyone else, what software is running on your machine, unless
they know you and trust you.

I -- and others -- have been arguing that it is remote attestation that
is the key, from a policy point of view; it is remote attestation that
enables applications like DRM, software lock-in, and the like.  But this
is not quite right.  Rather, it is the presence of third-party-directed
remote attestation that enables these alleged harms.

Owner-directed remote attestation does not enable these harms.  If I know
the private key used for attestation on my own machine, then remote attestation
is not very useful to (say) Virgin Records for DRM purposes, because I could
always lie to Virgin about what software is running on my machine.  Likewise,
owner-directed remote attestation doesn't come with the risk of software
lock-in that third-party-directed remote attestation creates.

So it seems that third-party-directed remote attestation is really where
the controversy is.  Owner-directed remote attestation doesn't have these
policy tradeoffs.

Finally, I'll come back to the topic you raised by noting that your
example application is one that could be supported with owner-directed
remote attestation.  You don't need third-party-directed remote
attestation to support your desired use of remote attestation.  So, TCPA
or Palladium could easily fall back to only owner-directed attestation
(not third-party-attestation), and you'd still be able to verify the
software running on your own servers without incurring new risks of DRM,
software lock-in, or whatever.

I should mention that Seth Schoen's paper on Trusted Computing anticipates
many of these points and is well worth reading.  His notion of "owner
override" basically converts third-party-directed attestation into
owner-directed attestation, and thereby avoids the policy risks that so
many have brought up.  If you haven't already read his paper, I highly
recommend it.  http://www.eff.org/Infra/trusted_computing/20031001_tc.php



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread bear


On Tue, 23 Dec 2003, Seth David Schoen wrote:

>When attestation is used, it likely will be passed in a service like
>HTTP, but in a documented way (for example, using a protocol based on
>XML-RPC).  There isn't really any security benefit obtained by hiding
>the content of the attestation _from the party providing it_!

It's not only the parties interested in security that we need to worry
about.  There is an advantage in profiling and market research, so I
expect anyone able to effectively subvert the protocols to attempt
to hide the content of remote attestation.

Bear



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-28 Thread Seth David Schoen
Antonomasia writes:

> From: "Carl Ellison" <[EMAIL PROTECTED]>
> 
> > Some TPM-machines will be owned by people who decide to do what I
> > suggested: install a personal firewall that prevents remote attestation.
> 
> How confident are you this will be possible ?  Why do you think the
> remote attestation traffic won't be passed in a widespread service
> like HTTP - or even be steganographic ?

The main answer is that the TPM will let you disable attestation, so
you don't even have to use a firewall (although if you have a LAN, you
could have a border firewall that prevented anybody on the LAN from
using attestation within protocols that the firewall was sufficiently
familiar with).

When attestation is used, it likely will be passed in a service like
HTTP, but in a documented way (for example, using a protocol based on
XML-RPC).  There isn't really any security benefit obtained by hiding
the content of the attestation _from the party providing it_!

Certainly attestation can be used as part of a key exchange so that
subsequent communications between local software and a third party are
hidden from the computer owner, but because the attestation must
happen before that key exchange is concluded, you can still examine
and destroy the attestation fields.
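A minimal sketch of that examine-and-destroy filtering, assuming a
hypothetical XML request format (not the real TCG or XML-RPC schema); it only
works for protocols the firewall can parse, which is the HTTPS caveat
discussed below:

    # Sketch: a border filter that removes a (hypothetical) attestation
    # element from an outgoing request before it leaves the LAN.
    import xml.etree.ElementTree as ET

    request = """<methodCall>
      <methodName>content.fetch</methodName>
      <attestation>c2lnbmVkLXBjci1kaWdlc3Q=</attestation>
      <params><param>song-42</param></params>
    </methodCall>"""

    def strip_attestation(xml_text: str) -> str:
        root = ET.fromstring(xml_text)
        for elem in root.findall("attestation"):
            root.remove(elem)  # destroy the field; the challenge goes unanswered
        return ET.tostring(root, encoding="unicode")

    filtered = strip_attestation(request)
    assert "attestation" not in filtered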

One problem is that a client could use HTTPS to establish a session
key for a session within which an attestation would be presented.
That might disable your ability to use the border firewall to block
the attestation, but you can still disable it in the TPM on that
machine if you control the machine.

The steganographic thing is implausible because the TPM is a passive
device which can't control other components in order to get them to
signal information.

-- 
Seth David Schoen <[EMAIL PROTECTED]> | Very frankly, I am opposed to people
 http://www.loyalty.org/~schoen/      | being programmed by others.
 http://vitanuova.loyalty.org/        | -- Fred Rogers (1928-2003),
                                      |    464 U.S. 417, 445 (1984)



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-26 Thread Rick Wash
On Sun, Dec 21, 2003 at 08:55:16PM -0800, Carl Ellison wrote:
>
>   IBM has started rolling out machines that have a TPM installed. 
> [snip ...]
> Then again, TPMs cost money and I don't know any private individuals who are
> willing to pay extra for a machine with one.  Given that, it is unlikely
> that TPMs will actually become a popular feature.

Personally, I own a laptop (T30) with the TPM chip, and I paid extra for the
chip, but that is because I am a researcher interested in seeing what I can
get the chip to do.

I think that it is possible that they will sell a lot of TPM chips.  IBM is
currently calling it the "IBM Security Subsystem 2.0" or something like
that, which sounds a lot less threatening and more useful than "trusted
platform module".  It depends a lot on the marketing strategy.  If they can
make it sound useful, that will take them far.
 
>   Some TPM-machines will be owned by people who decide to do what I
> suggested: install a personal firewall that prevents remote attestation.
> With wider dissemination of your reasoning, that number might be higher than
> it would be otherwise.

Agreed.  The first thing I did when writing code was to figure out how to
turn it off.  Then I figured out how to enable most of the functionality
while disabling the built-in attestation key.
 
>   Meanwhile, there will be hackers who accept the challenge of
> defeating the TPM.  There will be TPM private keys loose in the world,
> operated by software that has no intention of telling the truth to remote
> challengers.  

And this will be simpler than most people think.  From what I understand
about the current TPM designs, the TPM chip is NOT designed to be
tamper-resistant.  The IBM researchers told me that it is possible to read
the secrets from the TPM chip with a standard bus reader.  I've been meaning
to wander over to the Computer Engineering department and borrow one of
those to verify this claim.

Based on this, it shouldn't be hard for a set of people to extract their 
keys from their TPM chips and spread them around the internet, emulating a
real TPM.  This I see as a major stumbling block for DRM systems based on
TCPA.  TCPA works very well against purely-software threats, but as far as
protecting against computer owners and determined attackers, I'm not so
sure.

>   At this point, a design decision by the TCPA (TCG) folks comes into
> play.  There are ways to design remote attestation that preserve privacy and
> there are ways that allow linkage of transactions by the same TPM.  
>
>   Either of these outcomes will kill the TCG, IMHO.

I agree.  This is why to make the TPM a success, specifically for something
like DRM, the companies advocating it will have to convince the users that
it is a good thing.  This is the same problem they have now.  They have to
make the users *want* to use the trusted DRM features and *not* want to
subvert them.   They can do this by making the DRM features mostly unseen
and providing cheap and effective ways for people to get the media that they
want in the formats that they want.  If they try to fight their own users,
there will be enough ways of getting around TCPA for the users to fight
back.
 
>   You postulated that someday, when the TPM is ubiquitous, some
> content providers will demand remote attestation.  I claim it will never
> become ubiquitous, because of people making my choice - and because it takes
> a long time to replace the installed base - and because the economic model
> for TPM deployment is seriously flawed.  

Well, there are a couple things that could change this.  If other, non-DRM
uses of the TPM chip become popular (say for example that everyone wants to
use it to encrypt their hard drive), then that could speed deployment of the
chip, since that functionality is also bundled with the remote attestation
functionality.  I know that then creates a market for a chip that does what
is needed without the remote attestation functionality, but it then becomes
business, not technology, that determines which one people buy.

> If various service or content providers elect not to allow me service
> unless I do remote attestation, I then have 2 choices: use the friendly
> web service that will lie for me - or decline the content or service.

Correct.  However, this is where copyright and other government-granted
monopolies come into play.  If I want a specific piece of copyrighted
material (say, a song), I have to either deal with the copyright owner
(RIAA) on their terms (remote attestation), not get the song, or break the
law.  None of those three alternatives sound very good.   The best chance is
education of the masses, so everyone chooses one of the latter two and makes
it economically infeasible for the RIAA to maintain their draconian terms.
Then we have a useful piece of hardware in our computers (TCPA), subsidised
largely by people like the RIAA, who themselves can't use it for economic
reasons.  That would be the ideal outcome.

RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-23 Thread Antonomasia
From: "Carl Ellison" <[EMAIL PROTECTED]>

>   Some TPM-machines will be owned by people who decide to do what I
> suggested: install a personal firewall that prevents remote attestation.

How confident are you this will be possible ?  Why do you think the
remote attestation traffic won't be passed in a widespread service
like HTTP - or even be steganographic ?

-- 
##
# Antonomasia   ant notatla.org.uk   #
# See http://www.notatla.org.uk/ #
##



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-23 Thread Anne & Lynn Wheeler
At 03:03 PM 12/21/2003 -0800, Seth David Schoen wrote:
> Some people may have read things like this and mistakenly thought that
> this would not be an opt-in process.  (There is some language about
> how the user's platform takes various actions and then "responds" to
> challenges, and perhaps people reasoned that it was responding
> autonomously, rather than under its user's direction.)
my analogy ... at least in the online scenario ... has been to the wild,
wild west before there were traffic conventions, traffic signs, lane markers,
traffic lights, standards for vehicles ... misc. traffic rules about operating
an unsafe vehicle and driving recklessly, various minimums about traffic
regulations, and things like insurance requirements to cover the cost of
accidents. infected machines that do distributed DOS attacks ... might be
considered analogous to large overloaded trucks w/o operational brakes
(giving rise to truck inspection and weighing stations).  many ISPs are
already monitoring, accounting and controlling various kinds of activity
with respect to amount of traffic, simultaneous log-ins, etc.  If there are
sufficient online incidents ... then it could be very easy to declare
machines that become infected and are used for various kinds of unacceptable
behavior to be unsafe vehicles, and some sort of insurance could be
required to cover the costs associated with unsafe and reckless driving
on the internet. Direct costs to individuals may go up ... but the unsafe
and reckless activities currently going on represent enormous
infrastructure costs.  Somewhat analogous to higher insurance premiums for
less safe vehicles, government minimums for crash tests, bumper
conventions, seat belts, air bags, etc.

part of the issue is that some number of the platforms never had an
original design point of significant interaction on a totally open and free
internet (long ago and far away, vehicles didn't have bumpers, crash tests,
seat belts, air bags, safety glass, etc). Earlier in the original version
of this thread ... I made reference to some number of systems from 30 or
more years ago ... that were designed to handle such environments ... had
basic security designed in from the start ... and were found not to be
subject to the majority of the things that are happening to lots of the
current internet-connected platforms.
http://www.garlic.com/~lynn/aadsm16.htm#8 example: secure computing kernel needed

misc. past analogies to unsafe and reckless driving on the internet:
http://www.garlic.com/~lynn/aadsm14.htm#14 blackhole spam => mail unreliability (Re: A Trial Balloon to Ban Email?)
http://www.garlic.com/~lynn/aadsm14.htm#15 blackhole spam => mail unreliability (Re: A Trial Balloon to Ban Email?)
http://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#28 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#29 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#30 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#31 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2002p.html#27 Secure you PC or get kicked off the net?
http://www.garlic.com/~lynn/2003i.html#17 Spam Bomb
http://www.garlic.com/~lynn/2003m.html#21 Drivers License required for surfing?

--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Carl Ellison
Seth,

that was a very good and interesting reply.  Thank you.

IBM has started rolling out machines that have a TPM installed.  If
other companies do that too (and there might be others that do already -
since I don't follow this closely) then gradually the installed base of
TPM-equipped machines will grow.  It might take 10 years - or even more -
before every machine out there has a TPM.  However, that day may well come.
Then again, TPMs cost money and I don't know any private individuals who are
willing to pay extra for a machine with one.  Given that, it is unlikely
that TPMs will actually become a popular feature.

Some TPM-machines will be owned by people who decide to do what I
suggested: install a personal firewall that prevents remote attestation.
With wider dissemination of your reasoning, that number might be higher than
it would be otherwise.

Meanwhile, there will be hackers who accept the challenge of
defeating the TPM.  There will be TPM private keys loose in the world,
operated by software that has no intention of telling the truth to remote
challengers.  There might even be one or more web services out there with a
pool of such keys, offering to do an attestation for you telling whatever
lie you want to tell.  With such a service in operation, it is doubtful that
a service or content provider would put much faith in remote attestation -
and that, too, might kill the effort.

At this point, a design decision by the TCPA (TCG) folks comes into
play.  There are ways to design remote attestation that preserve privacy and
there are ways that allow linkage of transactions by the same TPM.  If the
former is chosen, then the web service needs very few keys.  If the privacy
protection is perfect, then the web service needs only 1 key.  If the
privacy violation is very strong, then the web service won't work, but the
TCG folks will have set themselves up for a massive political campaign
around its violation of user privacy.

Either of these outcomes will kill the TCG, IMHO.

This is the reason that, when I worked for a hardware company active
in the TCPA (TCG), I argued strongly against supporting remote attestation.
I saw no way that it could succeed.

Meanwhile, I am no longer in that company.  I have myself to look
out for.  If I get a machine with a TPM, I will make sure I have the
firewall installed.  I will use the TPM for my own purposes and let the rest
of the world think that I have an old machine with no TPM.

You postulated that someday, when the TPM is ubiquitous, some
content providers will demand remote attestation.  I claim it will never
become ubiquitous, because of people making my choice - and because it takes
a long time to replace the installed base - and because the economic model
for TPM deployment is seriously flawed.  If various service or content
providers elect not to allow me service unless I do remote attestation, I
then have 2 choices: use the friendly web service that will lie for me - or
decline the content or service.

The scare scenario you paint is one in which I am the lone voice of
concern floating in a sea of people who will happily give away their privacy
and allow some service or content provider to demand this technology on my
end.  In such a society, I would stand out and be subject to discrimination.
This is not a technical problem. This is a political problem. If that is a
real danger, then we need to educate those people.

RIAA and MPAA have been hoping for some technological quick fix to
let them avoid facing the hard problem of dealing with people who don't
think the way they would like people to think.  It seems to me that you and
John Gilmore and others are doing exactly the same thing - hoping for
technological censorship to succeed so that you can avoid facing the hard
problem of dealing with people who don't think the way they should (in this
case, the people who happily give away their privacy and accept remote
attestation in return for dancing pigs).  I don't have the power to stop
this technology if folks decide to field it.  I have only my own reason and
skills.

 - Carl


+------------------------------------------------------------------+
|Carl M. Ellison [EMAIL PROTECTED]         http://theworld.com/~cme|
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71           |
+---Officer, arrest that man. He's whistling a copyrighted song.---+

> -Original Message-
> From: Seth David Schoen [mailto:[EMAIL PROTECTED] On Behalf Of 
> Seth David Schoen
> Sent: Sunday, December 21, 2003 3:03 PM
> To: Carl Ellison
> Cc: 'Stefan Lucks'; [EMAIL PROTECTED]
> Subject: Re: Difference between TCPA-Hardware and a smart 
> card (was: example: secure computing kernel needed)



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Seth David Schoen
Carl Ellison writes:

> I, meanwhile, never did buy the remote attestation argument for high price
> content.  It doesn't work.  So, I looked at this as an engineer.  "OK, I've
> got this hardware. If remote attestation is worthless, then I can and should
> block that (e.g., with a personal firewall).  Now, if I do that, do I have
> anything of value left?"  My answer was that I did - as long as I could
> attest about the state of the software to myself, the machine owner.
> 
> [...]
> 
> What we need is some agent of mine - a chip - that:
> 1) has access to the machine guts, so it can verify S/W state
> 2) has a cryptographic channel to me, so it can report that result to me
> and
> 3) has its own S/W in a place where no attacker could get to it, even if
> that attacker had complete control over the OS.
> 
> The TCPA/TPM can be used that way.  Meanwhile, the TPM has no channel to the
> outside world, so it is not capable of doing remote attestation by itself.
> You need to volunteer to allow such communications to go through. If you
> don't like them, then block them.  Problem solved.  This reminds me of the
> abortion debate bumper sticker.  "If you're against abortion, don't have
> one."

The only difficulty here is the economic effect if an attestation
capability is really ubiquitous, since people you interact with can
tell whether you chose to offer them attestations or not.  (Imagine
if there was a way to tell whether someone had had an abortion in
the past by looking at her.  That would have a major effect on the
decision to have an abortion, without directly affecting the
availability of abortion services at all.  It would all be a matter
of secondary effects.)

My two favorite examples currently are the polygraph machine and
genetic screening.  Of course, both of these are opt-in technologies;
nobody will come up to you on the street and force you to take a
polygraph, and nobody will come up to you and stab you to collect
blood for genetic screening.  (There are supposedly a few cases of
genetic testing being done surreptitiously, and that might become
more common in the future.)  On the other hand, people can conceivably
condition certain interactions or benefits on the results of a
polygraph test or a genetic test for some condition.  The most obvious
example is employment: someone can refuse to hire you unless you
submit to one or the other of these tests.  As a result of various
concerns about this, Congress has now regulated the use of both of
these technologies by employers in the U.S.  Whether or not people
agree with that decision by Congress, they should be able to see that
the very _existence_ of these opt-in technologies, and their potential
availability to employers, would make many prospective employees worse
off than they would have been if the technologies had not been
developed.

(Oh, and then there are the ways insurers would like to use genetic
tests.  I'm sure some insurers wouldn't have minded subjecting their
insureds to lie detector tests, either, when asking them whether they
ever smoke.)

On-line interactions are becoming terribly frequent, and it's common
for on-line service providers to wish to know about you (or know what
software you use or try to get you to use particular other software
instead).  In the current environment you can use the personal
firewalls you mention, and a host of other techniques, to prevent
on-line services from learning much more about you than you would like
them to -- and in principle they can't determine whether or not you're
using the programs they would prefer.

John Gilmore recently quoted in this thread the TCPA white paper
"Building a Foundation of Trust in the PC", which says

[M]aking the platform secure requires a ubiquitous
solution, supported by vendors throughout the industry.
[... A]t some point, all PCs will have the ability to
be trusted to some minimum level -- a level that is
higher than possible today -- and to achieve this level
of trust in the same way.

[...] Every PC will have hardware-based trust, and
every piece of software on the system will be able to
use it.

If that happens, publishers and service providers can use their
leverage over software choices to gain a lot more power over computer
owners than they have right now.

"Building a Foundation of Trust in the PC" suggests this:

[B]efore making content available to a subscriber, it is
likely that a service provider will need to know that the
remote platform is trustworthy.  [... T]he digital signature
prevents tampering and allows the challenger to verify the
signature.  If the signature is verified, the challenger
can then determine whether the identity metrics are
trustworthy.  If so, the challenger, in this case the
service provider, can then deliver the content.

Some people may have read things like this and mistakenly thought that
this would not be an opt-in process.  (There is some language about
how the user's platform takes various actions and then "responds" to
challenges, and perhaps people reasoned that it was responding
autonomously, rather than under its user's direction.)

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Anne & Lynn Wheeler
On Fri, 2003-12-19 at 12:40, Ernst Lippe wrote:
> It is not really true that there are so few smartcards. Almost every
> mobile phone contains one (the SIM module is a smartcard).
> 
> Also the situation in Europe is quite different from the USA.
> Electronic purses on smart cards are pretty common here, especially in
> France and the Netherlands, where most adults have at least one.
> 
> But it is true that there are only very few smart card enabled
> applications.  I have worked on several projects for multifunctional
> use of these smart cards and almost all of them were complete failures.

one can claim that the SIM module isn't a smartcard as per the
original design point ... it is a mobile phone that happens
to leverage the smartcard manufacturing process.

my assertion is that the original smartcard design point was
as a portable computing infrastructure that didn't have
portable input/output technology. a huge investment went into
standards (so that these cards could be carried around and
still interoperate with various input/output stations) and
volume manufacturing facilities.  Just because they are the
same physical components doesn't mean that they are the same
business.

my observation has been that the stored-value smartcards
were an economic trade-off ... supposedly because of either
1) extremely high telco fees and/or 2) availability problems
with telco connectivity ... giving everybody smartcards
and all merchants "offline" (aka smartcard) point-of-sale
terminals ... was less expensive than an online paradigm.

in the US ... with ubiquitous and inexpensive telco
availability ... it was less expensive to go with the
standard online POS terminals and stored-value using
the traditional magstripe interface (aka it was difficult
to justify the increased chip expense based on any
possible savings in telco &/or online transaction costs).

my contention in the AADS chip strawman scenario ...
http://www.garlic.com/~lynn/index.html#aads
is that with an aggressive focus on compelling business use of a
hardware token (regardless of form factor) as an authentication device,
it should be possible to justify the hardware token just
based on fraud mitigation. with reasonable assumptions about online
connectivity becoming universal and inexpensive ... it
is difficult to see any business justification for anything
other than fraud mitigation. If the only business justification
is authentication (for fraud mitigation), it isn't necessary
to have multi-function features supported in the hardware token.

If there is no function/feature needed in a hardware token
(other than authentication for an online environment), then
the provisioning for hardware tokens (regardless of form
factor) is significantly simplified ... aka KISS.

The current provisioning convention for magstripe cards
is there because the magstripe effectively carries shared secrets
for authentication ... which by simple security 101 rules
means that there has to be a unique shared secret per
security domain (and every financial institution is its
own security domain).
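A minimal sketch of that security-101 rule, in Python with hypothetical names
(the signature half assumes the pyca/cryptography package): whoever holds a
shared secret can compute valid responses, so every verifier must get its own
secret, whereas a public key verifies but cannot sign, so one key can be
registered with every domain:

    # Sketch: shared secrets vs. public keys across security domains.
    import hashlib, hmac, os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Shared-secret model: the verifier's copy of the secret is sufficient
    # to authenticate.  If bank A and bank B held the same magstripe data,
    # anyone at A could answer B's challenges -- hence one secret per domain.
    secret = b"magstripe-track-data"
    challenge_from_b = os.urandom(16)
    customer_response = hmac.new(secret, challenge_from_b, hashlib.sha256).digest()
    bank_a_forgery = hmac.new(secret, challenge_from_b, hashlib.sha256).digest()
    assert bank_a_forgery == customer_response  # A can impersonate me at B

    # Public-key model: every domain stores only the public key, and no
    # domain can produce a signature, so one token can serve them all.
    token = Ed25519PrivateKey.generate()
    registered_everywhere = token.public_key()  # safe to give A, B, employer...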

To some extent the provisioning of financial smartcards
just continues to utilize the magstripe model. In addition,
given the offline transaction scenario and possible use of the
card for non-authentication purposes, additional provisioning
of the chip is required to load business rules so that
its use is aligned with the financial institution issuing
the card.

The assertion then is that in the scenario where the 
hardware token is purely an authentication device, most
of the additional provisioning is eliminated (and becomes
superfluous).

There is typically one additional argument used for
institutionally delivered hardware tokens (smartcards), even
if there is no provisioning required ... which is that they
tightly control the process so that the chip eventually
delivered to the end-user can be assumed to meet some specified
trust level.

So a person shows up at the doorstep with their own hardware
token and wants to use it as their authentication device
(whether at a financial institution for electronic financial
transactions, an employer for door badge access, or a gov.
agency) ... the institution will frequently respond with
something like "how can we trust the token?"

So what might convince institutions to accept a consumer-presented
hardware token for authentication ... as opposed
to mandating that the only hardware token that they will
trust are the ones provided by the institution.

-- 
Anne & Lynn Wheeler -  http://www.garlic.com/~lynn/ 



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Ben Laurie
Carl Ellison wrote:
> We see here a difference between your and my sides of the Atlantic.  Here
> in the US, almost no one has a smart card.
> Of those cards you carry, how many are capable of doing public key
> operations?  A simple memory smartcard doesn't count for what we were
> talking about.
I don't know. If you can tell me how to find out, I'd be happy to 
investigate. I have quite a few that are no longer needed, so 
destructive investigation is possible :-)

BTW, I forgot the two smartcards that are used by my Sky satellite TV stuff.

> There are other problems with doing TCPA-like operations with a smartcard,
> but I didn't go into those.  The biggest one to chew on is that I, the
> computer owner, need verification that my software is in good shape.  My
> agent in my computer (presumably the smartcard) needs a way to examine the
> software state of my computer without relying on any of the software in my
> computer (which might have been corrupted, if the computer's S/W has been
> corrupted).  This implies to me that my agent chip needs a H/W path for
> examining all the S/W of my computer.  That's something the TPM gives us
> that a smartcard doesn't (when that smartcard goes through a normal device
> driver to access its machine).
I'm not arguing with this - just the economic argument about number of 
smartcards.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Anne & Lynn Wheeler
At 10:51 AM 12/16/2003 +0100, Stefan Lucks wrote:

> I agree with you: A good compromise between security and convenience is an
> issue, when you are changing between different smart cards. E.g., I could
> imagine using the smart card *once* when logging into my bank account,
> and then only needing it, perhaps, to authorise a money transfer.
> 
> This is a difficult user interface issue, but something we should be able
> to solve.
> 
> One problem of TCPA is the opposite user interface issue -- the user has
> lost control over what is going on. (And I believe that this originates
> much of the resistance against TCPA.)
In sci.crypt, there has been a thread discussing OTP (one-time pad) 
and how integrity and authentication come into play, with a subthread 
about whether authentication of a message involves checking the 
integrity of the contents and/or checking the origin of the message. 
A security taxonomy, PAIN:
* privacy (aka things like encryption)
* authentication (origin)
* integrity (contents)
* non-repudiation

http://www.garlic.com/~lynn/2003p.html#4 Does OTP need authentication?
http://www.garlic.com/~lynn/2003p.html#6 Does OTP need authentication?
http://www.garlic.com/~lynn/2003p.html#17 Does OTP need authentication?
One of the issues is that privacy, authentication, and integrity are 
totally different business processes, yet the same technology, let's say 
involving keys, might be involved in all three; aka digital signatures (& 
public/private keys) can be used to simultaneously provide for 
authentication (of sender) and integrity (of message contents).
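
As a minimal sketch of that dual property (again assuming the
pyca/cryptography package; the message text is made up): a single
verify call checks both the origin and the contents, and fails if
either is wrong:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    key = Ed25519PrivateKey.generate()
    message = b"pay 100 to account 42"
    signature = key.sign(message)

    # one verification covers origin (only the private-key holder could
    # have signed) and contents (any modification breaks the signature)
    key.public_key().verify(signature, message)          # succeeds
    try:
        key.public_key().verify(signature, message + b"0")
    except InvalidSignature:
        print("tampered contents (or wrong origin) detected")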

Both privacy (encryption) and authentication (say digital signatures) can 
involve keys that need protecting; privacy because key access needs to be 
controlled to prevent unauthorized access to data, authentication because 
unauthorized access to keys could lead to impersonation.

In the authentication case, involving public/private keys ... the business 
requirement has sometimes led to guidelines that the private key is 
absolutely protected and things like key escrow are not allowed because 
escrow could contribute to impersonation.

In the privacy case, involving public/private keys ... the business 
requirement can lead to guidelines that mandate escrow of private 
key(s) because of business continuity issues.

This can create ambiguity where the same technology can be used for both 
authentication and privacy, but because the business processes are 
different, there can be a mandated requirement that the same keys are 
never used for both purposes ... i.e. that authentication keys are never 
escrowed and that privacy keys are always escrowed.

A TCPA chip can also be used to protect private keys used in 
authentication ... either authentication of the hardware component as an 
entity in its own right ... say a router in a large network ... or possibly 
implied authentication of a person that "owns" or possesses the hardware 
component.

An authentication taxonomy is 3-factor authentication:
* something you have
* something you know
* something you are
A hardware token (possibly in chipcard form factor) can be designed to 
generate a unique public/private key pair inside the token such that the 
private key never leaves the chip. Any digital signature that can be 
verified by the corresponding public key can be used to imply "something 
you have" authentication (i.e. the digital signature is assumed to have 
originated from a specific hardware token). A hardware token can also be 
designed to only operate in a specific way when the correct PIN/password 
has been entered ... in which case the digital signature can imply 
two-factor authentication, both "something you have" and "something you 
know".
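
A toy model of such a token (Python; the class and PIN are purely
illustrative, and a real token would enforce this in silicon rather
than in software):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class Token:
        # the key pair is generated inside the token and there is
        # deliberately no method that exports the private half
        def __init__(self, pin):
            self._key = Ed25519PrivateKey.generate()
            self._pin = pin

        def public_key(self):
            return self._key.public_key()     # only the public half leaves

        def sign(self, pin, message):
            if pin != self._pin:              # "something you know" ...
                raise PermissionError("wrong PIN")
            return self._key.sign(message)    # ... gates "something you have"

    token = Token(pin="1234")
    sig = token.sign("1234", b"challenge")
    token.public_key().verify(sig, b"challenge")  # implies two-factor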

From a business process standpoint it would be perfectly consistent to 
mandate that there is never key escrow for keys involved in the 
authentication business process, while at the same time mandating key 
escrow for keys involved in privacy.

At issue in business continuity are business requirements for things like 
no single point of failure, offsite storage of backups, etc. The threat 
model is: 1) the data in business files can be one of the business's most 
valuable assets, 2) it can't afford to have unauthorized access to the 
data, 3) it can't afford to lose access to the data, 4) encryption is used 
to help prevent unauthorized access to the data, and 5) if the encryption 
keys are protected by a TCPA chip, are the encryption keys recoverable if 
the TCPA chip fails?
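
One way to reconcile points 3) and 5) is to escrow the file-encryption
key itself rather than the chip: a minimal sketch (Python, using Fernet
purely for illustration; a real design would hold the escrow copy
offsite under separate controls):

    from cryptography.fernet import Fernet

    file_key = Fernet.generate_key()              # encrypts the business data

    chip_wrap = Fernet(Fernet.generate_key())     # stands in for the TCPA chip
    escrow_wrap = Fernet(Fernet.generate_key())   # escrow key, held offsite

    wrapped_for_chip = chip_wrap.encrypt(file_key)
    wrapped_for_escrow = escrow_wrap.encrypt(file_key)

    # if the chip fails, the escrow copy still recovers the file key,
    # so encrypted business data is not lost with the hardware
    assert escrow_wrap.decrypt(wrapped_for_escrow) == file_key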

--
Anne & Lynn Wheelerhttp://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Ernst Lippe
On Mon, 15 Dec 2003 19:02:06 -0500 (EST)
Jerrold Leichter <[EMAIL PROTECTED]> wrote:

> However, this advantage is there only because there are so few smart cards,
> and so few smart card enabled applications, around.

It is not really true that there are so few smartcards. Almost every
mobile phone contains one (the SIM module is a smartcard).

Also the situation in Europe is quite different from the USA.
Electronic purses on smart cards are pretty common here, especially in
France and the Netherlands, where most adults have at least one.

But it is true that there are only very few smart card enabled
applications.  I have worked on several projects for multifunctional
use of these smart cards and almost all of them were complete failures.

Ernst Lippe

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Carl Ellison
Stefan,

I replied to much of this earlier, so I'll skip those parts.

 - Carl

+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Lucks
> Sent: Tuesday, December 16, 2003 1:02 AM
> To: Carl Ellison
> Cc: [EMAIL PROTECTED]
> Subject: RE: Difference between TCPA-Hardware and a smart 
> card (was: example: secure computing kernel needed)
> 
> On Mon, 15 Dec 2003, Carl Ellison wrote:


> The point is that Your system is not supposed to prevent You 
> from doing
> anything I want you not to do! TCPA is supposed to lock You 
> out of some
> parts of Your system.

This has nothing to do with the TCPA / TPM hardware. This is a political
argument about the unclean origins of TCPA (as an attempt to woo Hollywood).

I, meanwhile, never did buy the remote attestation argument for high-priced
content.  It doesn't work.  So, I looked at this as an engineer.  "OK, I've
got this hardware. If remote attestation is worthless, then I can and should
block that (e.g., with a personal firewall).  Now, if I do that, do I have
anything of value left?"  My answer was that I did - as long as I could
attest about the state of the software to myself, the machine owner.

This required putting the origins of the project out of my head while I
thought about the engineering.  That took effort, but paid off (to me).

> 
> 
> [...]
> > If it were my machine, I would never do remote attestation. 
>  With that
> > one choice, I get to reap the personal advantages of the TPM while
> > disabling its behaviors that you find objectionable 
> (serving the outside
> > master).
> 
> I am not sure, whether I fully understand you. If you mean that TCPA
> comes with the option to run a secure kernel where you (as 
> the owner and
> physical holder of the machine running) have full control 
> over what the
> system is doing and isn't doing -- ok, that is a nice thing. 
> On the other
> hand, we would not need a monster such as TCPA for this.

What we need is some agent of mine - a chip - that:
1) has access to the machine guts, so it can verify S/W state
2) has a cryptographic channel to me, so it can report that result to me
and
3) has its own S/W in a place where no attacker could get to it, even if
that attacker had complete control over the OS.
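
A toy sketch of those three requirements in Python (the image bytes and
expected value are made up; a real agent would read the S/W state over
its own H/W path rather than trusting the OS to hand it over):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # (1) measure the machine's software state
    software_image = b"boot loader || kernel || drivers"   # placeholder bytes
    measurement = hashlib.sha256(software_image).digest()

    # (3) a key that lives inside the chip, out of the OS's reach
    chip_key = Ed25519PrivateKey.generate()

    # (2) report to me, the owner, over a channel the OS cannot forge:
    # a signature over the measurement
    report = chip_key.sign(measurement)

    # I check the signature against the chip's known public key and
    # compare the measurement with the value I expect for clean software
    chip_key.public_key().verify(report, measurement)
    expected = hashlib.sha256(b"boot loader || kernel || drivers").digest()
    assert measurement == expected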

The TCPA/TPM can be used that way.  Meanwhile, the TPM has no channel to the
outside world, so it is not capable of doing remote attestation by itself.
You need to volunteer to allow such communications to go through. If you
don't like them, then block them.  Problem solved.  This reminds me of the
abortion debate bumper sticker.  "If you're against abortion, don't have
one."

 - Carl

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Carl Ellison
We see here a difference between your and my sides of the Atlantic.  Here in
the US, almost no one has a smart card.

Of those cards you carry, how many are capable of doing public key
operations?  A simple memory smartcard doesn't count for what we were
talking about.

There are other problems with doing TCPA-like operations with a smartcard,
but I didn't go into those.  The biggest one to chew on is that I, the
computer owner, need verification that my software is in good shape.  My
agent in my computer (presumably the smartcard) needs a way to examine the
software state of my computer without relying on any of the software in my
computer (which might have been corrupted, if the computer's S/W has been
corrupted).  This implies to me that my agent chip needs a H/W path for
examining all the S/W of my computer.  That's something the TPM gives us
that a smartcard doesn't (when that smartcard goes through a normal device
driver to access its machine).

 - Carl


+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

> -Original Message-
> From: Ben Laurie [mailto:[EMAIL PROTECTED] 
> Sent: Friday, December 19, 2003 2:42 AM
> To: Carl Ellison
> Cc: 'Stefan Lucks'; [EMAIL PROTECTED]
> Subject: Re: Difference between TCPA-Hardware and a smart 
> card (was: example: secure computing kernel needed)
> 
> Carl Ellison wrote:
> > It is an advantage for a TCPA-equipped platform, IMHO.  
> Smart cards cost
> > money. Therefore, I am likely to have at most 1.
> 
> If I glance quickly through my wallet, I find 7 smartcards 
> (all credit 
> cards). Plus the one in my phone makes 8. So, run that "at most 1" 
> argument past me again?
> 
> Cheers,
> 
> Ben.
> 
> -- 
> http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
> 
> "There is no limit to what a man can do or how far he can go if he
> doesn't mind who gets the credit." - Robert Woodruff
> 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Ben Laurie
Carl Ellison wrote:
> It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
> money. Therefore, I am likely to have at most 1.
If I glance quickly through my wallet, I find 7 smartcards (all credit 
cards). Plus the one in my phone makes 8. So, run that "at most 1" 
argument past me again?

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Peter Gutmann
Stefan Lucks <[EMAIL PROTECTED]> writes:

>Currently, I have three smart cards in my wallet, which I did not want to own
>and which I never paid for. I have never used any of them. 

Conversation from a few years ago, about multifunction smart cards:

 -> Multifunction smart cards are great, because they'll reduce the number of
[smart] cards we'll have to carry around.

 <- I'm carrying zero smart cards, so it's working already!

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
On Mon, 15 Dec 2003, Jerrold Leichter wrote:

> | This is quite an advantage of smart cards.
> However, this advantage is there only because there are so few smart cards,
> and so few smart card enabled applications, around.

Strangely enough, Carl Ellison assumed that you would have at most one
smart card, anyway. I rather think you are right here.

> Really secure mail *should* use its own smart card.  When I do banking, do
> I have to remove my mail smart card?  Encryption of files on my PC should
> be based on a smart card.  Do I have to pull that one out?  Does that mean
> I can't look at my own records while I'm talking to my bank?  If I can only
> have one smart card in my PC at a time, does that mean I can *never* cut and
> paste between my own records and my on-line bank statement?  To access my
> files and my employer's email system, do I have to have to trust a single
> smart card to hold both sets of secrets?

I agree with you: A good compromise between security and convenience is an
issue, when you are changing between different smart cards. E.g., I could
imagine using the smart card *once* when logging into my bank account,
and then only needing it, perhaps, to authorise a money transfer.

This is a difficult user interface issue, but something we should be able
to solve.

One problem of TCPA is the opposite user interface issue -- the user has
lost control over what is going on. (And I believe that this originates
much of the resistance against TCPA.)

> Ultimately, to be useful a trusted kernel has to be multi-purpose, for
> exactly the same reason we want a general-purpose PC, not a whole bunch
> of fixed- function appliances.  Whether this multi-purpose kernel will
> be inside the PC, or a separate unit I can unplug and take with me, is a
> separate issue. Give the current model for PC's, a separate "key" is
> probably a better approach.

Agreed!

> However, there are already experiments with "PC in my pocket" designs:
> A small box with the CPU, memory, and disk, which can be connected to a
> small screen to replace a palmtop, or into a unit with a big screen, a
> keyboard, etc., to become my desktop.  Since that small box would have
> all my data, it might make sense for it to have the trusted kernel.
> (Of course, I probably want *some* part to be separate to render the box
> useless if stolen.)

Agreed again!

> | There is nothing wrong with the idea of a trusted kernel, but "trusted"
> | means that some entity is supposed to "trust" the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | "trust" such a kernel, something very very fishy is going on.
> Why?  If I'm going to use a time-shared machine, I have to trust that the
> OS will keep me protected from other users of the machine.  All the other
> users have the same demands.  The owner of the machine has similar demands.

Actually, all users have to trust the owner (or rather the sysadmin).

The key words are "have to trust"! As you wrote somewhere below:

> Part of the issue with TCPA is that the providers of the kernel that we
> are all supposed to trust blindly are also going to be among those who
> will use it heavily.  Given who those producers are, that level of trust
> is unjustifiable.

I entirely agree with you!

> | More than ten years ago, Chaum and Pedersen

[...]

> |   +---------------+     +---------+     +---------------+
> |   | Outside World | <-> | Your PC | <-> | TCPA-Observer |
> |   +---------------+     +---------+     +---------------+
> |
> | TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and is
> | thus open to abuse.

> I remember looking at this paper when it first appeared, but the details
> have long faded.  It's an alternative mechanism for creating trust:
> Instead of trusting an open, independently-produced, verified
> implementation, it uses cryptography to construct walls around a
> proprietary, non-open implementation that you have no reason to trust.

Please re-read the paper!

First, it is not a mechanism for *creating* trust.

It is rather a trust-avoidance mechanism! You are not trusting the
observer at all, and you don't need to. The outsider is not trusting you
or your PC at all, and she doesn't need to.

Second, how on earth did you get the impression that Chaum/Pedersen is
about proprietary, non-open implementations?

Nothing stops people from producing independent and verified
implementations. As a matter of fact, since people can concentrate on
writing independent and verified implementations for the software on "Your
PC", providing an independently produced and verified implementation would
be much much simpler than ever providing such an implementation for the
TCPA hardware.

Independent implementations of the observer's soft- and hardware are
simpler than in the case of TCPA as well, but this is a minor issue. You
don't need to trust the observer, so you don't care about independent and
verified implementations of it.

RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
On Mon, 15 Dec 2003, Carl Ellison wrote:

[I wrote]
> > The first difference is obvious. You can plug in and later
> > remove a smart
> > card at your will, at the point of your choice. Thus, for
> > home banking with
> > bank X, you may use a smart card, for home banking with bank Y you
> > disconnect the smart card for X and use another one, and before online
> > gambling you make sure that none of your banking smart cards
> > is connected
> > to your PC. With TCPA, you have much less control over the
> > kind of stuff
> > you are using.
> >
> > This is quite an advantage of smart cards.
>
> It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
> money. Therefore, I am likely to have at most 1.

Strange! Currently, I have three smart cards in my wallet, which I did not
want to own and never paid for. I have never used any of them. They
are packaged with some ATM cards (using conventional magnetic-stripe
technology) and implement a "Geldkarte".  (For a couple of years, German
banks have tried to push their customers into using the "Geldkarte" for
electronic money, by packaging the smart cards together with ATM cards.
For me, there are still too few dealers accepting the "Geldkarte", so I
never use it.)

OK, the banks are paying for the smart cards they give to their customers
for free. But they would not do so if these cards were expensive.

BTW, even if you have only one, a smart card has the advantage that you
can physically remove it.

> TCPA acts like my hardware crypto module and in that one hardware
> module, I am able to create and maintain as many private keys as I want.
> (The keys are wrapped by the TPM but stored on the disk - even backed up
> - so there's no storage limit.)

A smart card can do the same.

> Granted, you have to make sure that the S/W that switches among (selects)
> private keys for particular uses does so in a way you can trust.  The
> smartcard has the advantage of being a physical object.

Exactly!

> However, if you can't trust your system S/W, then how do you know that
> the system S/W was using a private key on the smart card you so happily
> plugged in rather than one of its own (the same one all the time)?

The point is that Your system is not supposed to prevent You from doing
anything I want you not to do! TCPA is supposed to lock You out of some
parts of Your system.


[...]
> If it were my machine, I would never do remote attestation.  With that
> one choice, I get to reap the personal advantages of the TPM while
> disabling its behaviors that you find objectionable (serving the outside
> master).

I am not sure, whether I fully understand you. If you mean that TCPA
comes with the option to run a secure kernel where you (as the owner and
physical holder of the machine running) have full control over what the
system is doing and isn't doing -- ok, that is a nice thing. On the other
hand, we would not need a monster such as TCPA for this.


> Of course, you're throwing out a significant baby with that bath water.
> What if it's your secrets you want protected on my machine?  It doesn't
> have to be the RIAA's secrets.  Do you object just as much when it's
> your secrets?

Feel free to buy an overpriced second-hand car from me. I promise, I won't
object! ;-)

But seriously, with or without remote attestation, I would not consider my
secrets safe on your machine. If you can read (or play, in the case of
multimedia stuff) my secrets on your machine, I can't prevent you from
copying. TCPA (and remote attestation) can make copying less convenient
for you (and the RIAA is probably pleased with this), but can't stop a
determined adversary.

In other words, I don't think it will be too difficult to tamper with the
TCPA hardware, and to circumvent remote attestation.

Winning against the laws of information theory is not simpler than winning
against the laws of thermodynamics -- both are impossible!

> > Chaum and Pedersen  [...]

> > TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and
> > is thus open to abuse.

Let me stress:

  -- Good security design means to separate duties.

  -- TCPA does exactly the opposite: It deliberately mixes duties.

> I haven't read that paper - will have to.  Thanks for the reference.
> However, when I do read it, what I will look for is the non-network
> channel between the observer and the PC.  Somehow, the observer needs to
> know that the PC has not been tampered with and needs to know, securely
> (through physical security) the state of that PC and its S/W.

It doesn't need to know, and it can't know anyway. All it needs to know is
whether it itself has been tampered with. Well, ... hm ... it more or less
assumes it has not been tampered with (but so does the TCPA hardware).

> Where can I get one of those observers?  I want one now.

You get them at the same place where you get TCPA hardware: In a possible
future. ;-)


-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
 

RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Carl Ellison
Stefan,

I have to disagree on most of these points.

See below.

 - Carl

+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Lucks
> Sent: Monday, December 15, 2003 9:34 AM
> To: Jerrold Leichter
> Cc: Ian Grigg; Paul A.S. Ward; [EMAIL PROTECTED]
> Subject: Difference between TCPA-Hardware and a smart card 
> (was: example: secure computing kernel needed)
> 
> On Sun, 14 Dec 2003, Jerrold Leichter wrote:
> 
> > Which brings up the interesting question:  Just why are the 
> reactions to
> > TCPA so strong?  Is it because MS - who no one wants to trust - is
> > involved?  Is it just the pervasiveness:  Not everyone has 
> a smart card,
> > but if TCPA wins out, everyone will have this lump inside of their
> > machine.
> 
> There are two differences between TCPA-hardware and a smart card.
> 
> The first difference is obvious. You can plug in and later 
> remove a smart
> card at your will, at the point of your choice. Thus, for 
> home banking with
> bank X, you may use a smart card, for home banking with bank Y you
> disconnect the smart card for X and use another one, and before online
> gambling you make sure that none of your banking smart cards 
> is connected
> to your PC. With TCPA, you have much less control over the 
> kind of stuff
> you are using.
> 
> This is quite an advantage of smart cards.

It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
money. Therefore, I am likely to have at most 1.  TCPA acts like my hardware
crypto module, and in that one hardware module I am able to create and
maintain as many private keys as I want.  (The keys are wrapped by the TPM
but stored on the disk - even backed up - so there's no storage limit.)
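
A minimal sketch of that wrapping model (Python; Fernet stands in for
the TPM's storage key, which in reality never leaves the chip): any
number of private keys can be generated, wrapped, and parked on disk:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    module_key = Fernet(Fernet.generate_key())    # stands in for the TPM key

    def wrap(private_key):
        # serialize the private key and encrypt it under the module key;
        # the wrapped blob can live on disk and in ordinary backups
        raw = private_key.private_bytes(
            serialization.Encoding.Raw,
            serialization.PrivateFormat.Raw,
            serialization.NoEncryption(),
        )
        return module_key.encrypt(raw)

    # as many keys as I want; only wrapped forms ever touch the disk
    wrapped_blobs = [wrap(Ed25519PrivateKey.generate()) for _ in range(100)]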

Granted, you have to make sure that the S/W that switches among (selects)
private keys for particular uses does so in a way you can trust.  The
smartcard has the advantage of being a physical object. However, if you
can't trust your system S/W, then how do you know that the system S/W was
using a private key on the smart card you so happily plugged in rather than
one of its own (the same one all the time)?

> 
> The second point is perhaps less obvious, but may be more important.
> Usually, *your* PC hard- and software is supposed to protect *your*
> assets and satisfy *your* security requirements. The 
> "trusted" hardware
> add-on in TCPA is supposed to protect an *outsider's* assets 
> and satisfy
> the *outsider's* security needs -- from you.
> 
> A TCPA-"enhanced" PC is thus the servant of two masters -- 
> your servant
> and the outsider's. Since your hardware connects to the 
> outsider directly,
> you can never be sure whether it works *against* you by giving the
> outsider more information about you than it should (from your point of
> view).

TCPA includes two different things: wrapping or "sealing" of secrets -
something in service to you (and the thing I invoked in the previous
disagreement) - and remote attestation.  You do not need to do remote
attestation to take advantage of the TPM.  If it were my machine, I would
never do remote attestation.  With that one choice, I get to reap the
personal advantages of the TPM while disabling its behaviors that you find
objectionable (serving the outside master).

Of course, you're throwing out a significant baby with that bath water.
What if it's your secrets you want protected on my machine?  It doesn't have
to be the RIAA's secrets.  Do you object just as much when it's your
secrets?

> 
> There is nothing wrong with the idea of a trusted kernel, but 
> "trusted"
> means that some entity is supposed to "trust" the kernel 
> (what else?). If
> two entities, who do not completely trust each other, are 
> supposed to both
> "trust" such a kernel, something very very fishy is going on.
> 
> 
> Can we do better?
> 
> More than ten years ago, Chaum and Pedersen presented a great 
> idea how to
> do such things without potentially compromising your 
> security. Bringing
> their ideas into the context of TCPA, things should look like in the
> following picture
> 
>   +---------------+     +---------+     +---------------+
>   | Outside World | <-> | Your PC | <-> | TCPA-Observer |
>   +---------------+     +---------+     +---------------+
> 
> So you can trust "your PC" (possibly with a trusted kernel ... trusted by
> you). And an outsider can trust the observer.

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Pat Farrell
At 07:02 PM 12/15/2003 -0500, Jerrold Leichter wrote:
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.
A software-only, networked smart card would solve the
chicken-and-egg problem. One such solution is
Tamper resistant method and apparatus, [Ellison], USPTO 6,073,237
(Do a patent number search at http://www.uspto.gov/patft/index.html)
Carl invented this as an alternative to Smartcards back in the SET
development days.
Pat

Pat Farrell [EMAIL PROTECTED]
http://www.pfarrell.com
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-15 Thread Jerrold Leichter
| > Which brings up the interesting question:  Just why are the reactions to
| > TCPA so strong?  Is it because MS - who no one wants to trust - is
| > involved?  Is it just the pervasiveness:  Not everyone has a smart card,
| > but if TCPA wins out, everyone will have this lump inside of their
| > machine.
|
| There are two differences between TCPA-hardware and a smart card.
|
| The first difference is obvious. You can plug in and later remove a smart
| card at your will, at the point of your choice. Thus, for home banking with
| bank X, you may use a smart card, for home banking with bank Y you
| disconnect the smart card for X and use another one, and before online
| gambling you make sure that none of your banking smart cards is connected
| to your PC. With TCPA, you have much less control over the kind of stuff
| you are using.
|
| This is quite an advantage of smart cards.
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.

Really secure mail *should* use its own smart card.  When I do banking, do
I have to remove my mail smart card?  Encryption of files on my PC should
be based on a smart card.  Do I have to pull that one out?  Does that mean
I can't look at my own records while I'm talking to my bank?  If I can only
have one smart card in my PC at a time, does that mean I can *never* cut and
paste between my own records and my on-line bank statement?  To access my
files and my employer's email system, do I have to have to trust a single
smart card to hold both sets of secrets?

I just don't see this whole direction of evolution as being viable.  Oh,
we'll pass through that stage - and we'll see products that let you connect
multiple smart cards at once, each "guaranteed secure" from the others.  But
that kind of add-on is unlikely to really *be* secure.

Ultimately, to be useful a trusted kernel has to be multi-purpose, for exactly
the same reason we want a general-purpose PC, not a whole bunch of fixed-
function appliances.  Whether this multi-purpose kernel will be inside the PC,
or a separate unit I can unplug and take with me, is a separate issue. Given
the current model for PCs, a separate "key" is probably a better approach.
However, there are already experiments with "PC in my pocket" designs:  A
small box with the CPU, memory, and disk, which can be connected to a small
screen to replace a palmtop, or into a unit with a big screen, a keyboard,
etc., to become my desktop.  Since that small box would have all my data, it
might make sense for it to have the trusted kernel.  (Of course, I probably
want *some* part to be separate to render the box useless if stolen.)

| The second point is perhaps less obvious, but may be more important.
| Usually, *your* PC hard- and software is supposed to protect *your*
| assets and satisfy *your* security requirements. The "trusted" hardware
| add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
| the *outsider's* security needs -- from you.
|
| A TCPA-"enhanced" PC is thus the servant of two masters -- your servant
| and the outsider's. Since your hardware connects to the outsider directly,
| you can never be sure whether it works *against* you by giving the
| outsider more information about you than it should (from your point of
| view).
|
| There is nothing wrong with the idea of a trusted kernel, but "trusted"
| means that some entity is supposed to "trust" the kernel (what else?). If
| two entities, who do not completely trust each other, are supposed to both
| "trust" such a kernel, something very very fishy is going on.
Why?  If I'm going to use a time-shared machine, I have to trust that the
OS will keep me protected from other users of the machine.  All the other
users have the same demands.  The owner of the machine has similar demands.

The same goes for any shared resource.  A trusted kernel should provide some
isolation guarantees among "contexts".  These guarantees should be independent
of the detailed nature of the contexts.  I think we understand pretty well
what the *form* of these guarantees should be.  We do have problems actually
implementing such guarantees in a trustworthy fashion, however.

Part of the issue with TCPA is that the providers of the kernel that we are
all supposed to trust blindly are also going to be among those who will use it
heavily.  Given who those producers are, that level of trust is unjustifiable.

However, suppose that TCPA (or something like it) were implemented entirely by
independent third parties, using open techniques, and that they managed to
produce both a set of definitions of isolation, and an implementation, that
were widely seen to correctly specify, embody, and enforce strict protection.
How many of the criticisms of TCPA would that mute?  Some:  Given open
standards, a Linux TCPA-based computing platform could be produced.
Microsoft's access to the trusted kernel would be exactly the same as
everyone else's.

Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-15 Thread Stefan Lucks
On Sun, 14 Dec 2003, Jerrold Leichter wrote:

> Which brings up the interesting question:  Just why are the reactions to
> TCPA so strong?  Is it because MS - who no one wants to trust - is
> involved?  Is it just the pervasiveness:  Not everyone has a smart card,
> but if TCPA wins out, everyone will have this lump inside of their
> machine.

There are two differences between TCPA-hardware and a smart card.

The first difference is obvious. You can plug in and later remove a smart
card at your will, at the point of your choice. Thus, for home banking with
bank X, you may use a smart card, for home banking with bank Y you
disconnect the smart card for X and use another one, and before online
gambling you make sure that none of your banking smart cards is connected
to your PC. With TCPA, you have much less control over the kind of stuff
you are using.

This is quite an advantage of smart cards.

The second point is perhaps less obvious, but may be more important.
Usually, *your* PC hard- and software is supposed to protect *your*
assets and satisfy *your* security requirements. The "trusted" hardware
add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
the *outsider's* security needs -- from you.

A TCPA-"enhanced" PC is thus the servant of two masters -- your servant
and the outsider's. Since your hardware connects to the outsider directly,
you can never be sure whether it works *against* you by giving the
outsider more information about you than it should (from your point of
view).

There is nothing wrong with the idea of a trusted kernel, but "trusted"
means that some entity is supposed to "trust" the kernel (what else?). If
two entities, who do not completely trust each other, are supposed to both
"trust" such a kernel, something very very fishy is going on.


Can we do better?

More than ten years ago, Chaum and Pedersen presented a great idea for how
to do such things without potentially compromising your security. Bringing
their ideas into the context of TCPA, things should look like the
following picture:

   +---------------+     +---------+     +---------------+
   | Outside World | <-> | Your PC | <-> | TCPA-Observer |
   +---------------+     +---------+     +---------------+

So you can trust "your PC" (possibly with a trusted kernel ... trusted by
you). And an outsider can trust the observer.

The point is, the outside world does not directly talk to the observer!

Chaum and Pedersen (and some more recent authors) defined protocols to
satisfy the outsider's security needs without giving the outsider any
chance to learn more about you and the data stored in your PC than you
want her to learn.

TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and is
thus open to abuse.
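
A toy sketch of that topology only (Python; this illustrates the
mediation, not the actual Chaum-Pedersen protocols): the outside world
can reach the observer only through code you control, so you can
inspect or refuse any exchange:

    class Observer:
        # trusted by the outsider; sees only what the PC relays
        def answer(self, challenge):
            return b"observer-reply:" + challenge   # stand-in for the protocol

    class YourPC:
        # trusted by you; the only party that talks to both sides
        def __init__(self, observer):
            self._observer = observer

        def relay(self, challenge):
            if len(challenge) > 32:                 # your policy, your veto
                raise ValueError("refusing over-broad query")
            return self._observer.answer(challenge)

    pc = YourPC(Observer())
    reply = pc.relay(b"nonce-123")   # the outside world never touches
                                     # the observer directly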

Reference:

  D. Chaum and T. Pedersen. Wallet databases with observers.
  In Crypto '92, LNCS 740, pp. 89-105.



-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  smell  of  Cryptanalysis  in  the  morning!  --

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]