Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-04 Thread Jerrold Leichter
| David Wagner writes:
|
|  To see why, let's go back to the beginning, and look at the threat
|  model.  If multiple people are doing shared development on a central
|  machine, that machine must have an owner -- let's call him Linus.  Now
|  ask yourself: Do those developers trust Linus?
| 
|  If the developers don't trust Linus, they're screwed.  It doesn't matter how
|  much attestation you throw at the problem; Linus can always violate their
|  security model.  As always, you've got to trust root (the system
|  administrator); nothing new here.
| 
|  Consequently, it seems to me we only need to consider a threat model
|  where the developers trust Linus.  (Linus need not be infallible, but the
|  developers should believe Linus won't intentionally try to violate their
|  security goals.)  In this case, owner-directed attestation suffices.
|  Do you see why?  Linus's machine will produce an attestation, signed
|  by Linus's key, of what software is running.  Since the developers trust
|  Linus, they can then verify this attestation.  Note that the developers
|  don't need to trust each other, but they do need to trust the owner/admin
|  of the shared box.  So, it seems to me we can get by without third-party
|  attestation.
|
| You could conceivably have a PC where the developers don't trust
| Linus, but instead trust the PC manufacturer.  The PC manufacturer
| could have made it extremely expensive for Linus to tamper with the PC
| in order to violate [the developers'] security model.  (It isn't
| logically impossible, it's just extremely expensive.  Perhaps it costs
| millions of dollars, or something.)
Precisely - though see below.

| There are computers like that today.  At least, there are devices that can
| run software, that are highly tamper-resistant, and that can do attestations.
Smart cards are intended to work this way, too.

| (Now there is an important question about what the cost to do a hardware
| attack against those devices would be.)  It seems to me to be a good thing
| that the ordinary PC is not such a device.  (Ryan Lackey, in a talk
| about security for colocated machines, described using devices like
| these for colocation where it's not appropriate or desirable to rely on
| the physical security of the colocated machine.  Of course, strictly
| speaking, all security always relies on physical security.)
This kind of thing goes *way* back.  In the '70's, there was a company - I
think the name was BASIC 4 - that sold a machine with two privileged levels.
The OS ran at level 1 (user code at unprivileged level 2, of course).  There
were some things - like, probably, accounting - that ran at level 0.  Even
with physical access to the machine, it was supposed to be difficult to do
anything to level 0 - unless you had a (physical) key to use in the lock
on the front panel.  The machine was intended as a replacement for the then-
prevalent time-sharing model:  An application developer would buy machines
from the manufacturer, load them with application environments, and sell
application services.  Users of the machines could use the applications with
fast local access, even do development - but could not modify the basic
configuration.  I know the company vanished well before networks got fast
enough, and PC's cheap enough, that the business model stopped making any
sense; but I know nothing of the details.

| I don't know how the key management works in these devices.  If the
| keys used to sign attestations are loaded by (or known to) the device
| owner, it wouldn't help with the case where the device owner is
| untrusted.  If the keys are loaded by the manufacturer, it might
| support a model where the owner is untrusted and the manufacturer is
| trusted.
There's no more reason that the manufacturer has to be trusted than that the
manufacturer of a safe has to be trusted (at least in the sense that neither
needs to know the keys/combination on any particular machine/safe).  If
machines like this are to be built, they should require some special physical
override to allow the keys to be configured.  A key lock is still good
technology for this purpose:  It's a very well-understood technology, and its
simplicity is a big advantage.  A combination lock might be easier to
integrate securely, for the same basic reason that combination locks became
the standard for bank vaults:  No need for an open passageway from the outside
to the inside.  (In the bank vault case, this passageway was a great way to
get nitroglycerin inside the locking mechanism.)  In either case, you could
(like a bank) use a form of secret sharing, so that only a trusted group of
people - with multiple keys, or multiple parts of the combination - could
access the key setup mode.  Given this, there is no reason why a machine fresh
from the manufacturer need have any embedded keys.
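
As a rough sketch of the secret-sharing idea (not any vendor's actual
mechanism), a trivial n-of-n split of the setup combination could look like
the following Python; a real design would more likely use a threshold scheme
such as Shamir's, so that any k of n trustees suffice.

    import secrets
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split_secret(secret: bytes, holders: int) -> list:
        # n-of-n split: every holder's share is needed to recover the secret.
        shares = [secrets.token_bytes(len(secret)) for _ in range(holders - 1)]
        shares.append(reduce(xor_bytes, shares, secret))
        return shares

    def recover_secret(shares: list) -> bytes:
        # XOR all shares back together to recover the original secret.
        return reduce(xor_bytes, shares)

    combo = secrets.token_bytes(16)           # the key-setup combination
    shares = split_secret(combo, holders=3)   # one share per trustee
    assert recover_secret(shares) == combo    # all three must cooperate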

Will machines like this be built?  Probably not, except for special purposes.
The TCPA machines will likely require you (and the people who want to 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2004-01-03 Thread David Wagner
Jerrold Leichter  wrote:
All of this is fine as long as there is a one-to-one association between
machines and owners of those machines.  Consider the example I gave
earlier:  A shared machine containing the standard distribution of the
trusted computing software.  All the members of the group that maintain the
software will want to have the machine attest, to them, that it is properly
configured and operating as intended.

I think you may be giving up too quickly.  It looks to me like
this situation can be handled by owner-directed attestation (e.g.,
Owner Override, or Owner Gets Key).  Do you agree?

To see why, let's go back to the beginning, and look at the threat
model.  If multiple people are doing shared development on a central
machine, that machine must have an owner -- let's call him Linus.  Now
ask yourself: Do those developers trust Linus?

If the developers don't trust Linus, they're screwed.  It doesn't matter how
much attestation you throw at the problem; Linus can always violate their
security model.  As always, you've got to trust root (the system
administrator); nothing new here.

Consequently, it seems to me we only need to consider a threat model
where the developers trust Linus.  (Linus need not be infallible, but the
developers should believe Linus won't intentionally try to violate their
security goals.)  In this case, owner-directed attestation suffices.
Do you see why?  Linus's machine will produce an attestation, signed
by Linus's key, of what software is running.  Since the developers trust
Linus, they can then verify this attestation.  Note that the developers
don't need to trust each other, but they do need to trust the owner/admin
of the shared box.  So, it seems to me we can get by without third-party
attestation.
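
As a minimal sketch of what owner-directed attestation amounts to - an
ordinary signature key standing in for the TPM's attestation key, with the
software names and digest encoding purely illustrative:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Linus (the owner) holds the attestation key for the shared machine.
    owner_key = Ed25519PrivateKey.generate()
    owner_pub = owner_key.public_key()

    # The machine reports a digest over the software it measured at boot.
    measured = [b"kernel-2.6.0", b"cvs-server-1.11", b"sshd-3.7"]
    report = hashlib.sha256(b"|".join(measured)).digest()

    # Owner-directed attestation: the report is signed with the owner's key.
    attestation = owner_key.sign(report)

    # A developer who trusts Linus verifies it against Linus's public key;
    # no third party's attestation key is involved.
    owner_pub.verify(attestation, report)  # raises InvalidSignature if forged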

This scenario sounds pretty typical to me.  Most machines have a single
owner.  Most machines have a system administrator (who must be trusted).
I don't think I'm making unrealistic assumptions.

You're trying to make the argument that feature X (here, remote attestation for
multiple mutually-suspicious parties) has no significant uses.  Historically,
arguments like this are losers.

Yes, this is a fair point.  I suppose I would say I'm arguing that
feature X (third-party attestation) seems to have few significant uses.
It has some uses, but it looks like they are in the minority; for the
most part, it seems that feature X is unnecessary.  At the same time,
many people are worried that feature X comes with significant costs.

At least, this is how it looks to me.  Maybe I've got something wrong.
If these two points are both accurate, this is an interesting observation.
If they're inaccurate, I'd be very interested to hear where they fail.



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-31 Thread Seth David Schoen
David Wagner writes:

 So it seems that third-party-directed remote attestation is really where
 the controversy is.  Owner-directed remote attestation doesn't have these
 policy tradeoffs.

 Finally, I'll come back to the topic you raised by noting that your
 example application is one that could be supported with owner-directed
 remote attestation.  You don't need third-party-directed remote
 attestation to support your desired use of remote attestation.  So, TCPA
 or Palladium could easily fall back to only owner-directed attestation
 (not third-party-attestation), and you'd still be able to verify the
 software running on your own servers without incurring new risks of DRM,
 software lock-in, or whatever.
 
 I should mention that Seth Schoen's paper on Trusted Computing anticipates
 many of these points and is well worth reading.  His notion of owner
 override basically converts third-party-directed attestation into
 owner-directed attestation, and thereby avoids the policy risks that so
 many have brought up.  If you haven't already read his paper, I highly
 recommend it.  http://www.eff.org/Infra/trusted_computing/20031001_tc.php

Thanks for the kind words.

Nikita Borisov has proposed an alternative to Owner Override which
Ka-Ping Yee has called Owner Gets Key, and which is probably the
same as what you're discussing.

Most TC vendors have entered into this with some awareness of the
risks.  For example, the TCPA whitepaper that John Gilmore mentioned
here earlier specifically contemplates punishing people for using
disapproved software, without considering exactly why it is that
people would want to put themselves into a position where they could
be punished for doing that (given that they can't now!).  (In
deference to Unlimited Freedom's observations, it is not logically
impossible that people would ever want to put themselves into that
position; the TCPA whitepaper just didn't consider why they would.)

As a result, I have not had any TC vendor express much interest in
Owner Override or Owner Gets Key.  Some of them correctly pointed out
that there are interesting user interface problems associated with
making this usable yet resistant to social engineering attacks.  There
might be paternalistic reasons for not wanting to give end-users the
attestation keys, if you simply don't trust that they will use them
safely.  (But there's probably no technical way to have our cake and
eat it too: if you want to do paternalistic security, you can probably
then abuse it; if you want to give the owner total control, you can't
prevent the owner from falling victim to social engineering.)  Still,
the lack of a totally obvious secure UI hasn't stopped research from
taking place in related areas.  For example, Microsoft is reportedly
still trying to figure out how to make clear to people whether the
source of a particular UI element is the program they think it is, and
how to handle the installation of NGSCB trusted computing agents.
Secure UI is full of thorny problems.

I've recently been concerned about one problem with the Owner Override
or Owner Gets Key approaches.  This is the question of whether they
are particularly vulnerable to a man-in-the-middle attack.

Suppose that I own a computer with the TCG TPM FOO and you are a
server operator, and you and I trust each other and believe that we
have aligned interests.  (One example is the case where you are a bank
and we both want to be sure that I am using a pristine, unaltered
computing environment in order to access my account.  Neither of us
will benefit if I can be tricked into making bogus transactions.)

An attacker Mallory owns a computer with the TCG TPM BAR.  We assume
that Mallory has already compromised my computer (because our ability
to detect when Mallory does that is the whole reason we're using
attestation in the first place).  Mallory replaces my web browser (or
financial software) with a web browser that he has modified to send
queries to him instead of to you, and to contain a root CA certificate
that makes it trust a root CA that Mallory controls.  (Alternatively,
he's just made the new web browser ignore the results of SSL
certificate validation entirely, though that might be easier to detect.)

Now when I go to your web site, my connection is redirected to Mallory's
computer, which proxies it and initiates a connection to you.  You ask
for an attestation as a condition of accessing your service.  Since I
have no particular reason to lie to you (I believe that your reason
for requesting the attestation is aligned with my interest), I direct
my computer to give you an attestation reflecting the actual state of
FOO's PCR values.  This attestation is generated and reflects a
signature by FOO on a set of PCR values that show the results of
Mallory's tampering.  But Mallory does _not_ pass this attestation
along to you.  Instead, Mallory uses Owner Override or Owner Gets Key
to generate a new attestation reflecting the original set of PCR

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-30 Thread Jerrold Leichter
| Rick Wash  wrote:
| There are many legitimate uses of remote attestation that I would like to
| see.  For example, as a sysadmin, I'd love to be able to verify that my
| servers are running the appropriate software before I trust them to access
| my files for me.  Remote attestation is a good technical way of doing that.
|
| This is a good example, because it brings out that there are really
| two different variants of remote attestation.  Up to now, I've been
| lumping them together, but I shouldn't have been.  In particular, I'm
| thinking of owner-directed remote attestation vs. third-party-directed
| remote attestation.  The difference is who wants to receive assurance of
| what software is running on a computer; the former mechanism allows one to
| convince the owner of that computer, while the latter mechanism allows
| one to convince third parties.
|
| Finally, I'll come back to the topic you raised by noting that your
| example application is one that could be supported with owner-directed
| remote attestation.  You don't need third-party-directed remote
| attestation to support your desired use of remote attestation.  So, TCPA
| or Palladium could easily fall back to only owner-directed attestation
| (not third-party-attestation), and you'd still be able to verify the
| software running on your own servers without incurring new risks of DRM,
| software lock-in, or whatever.
All of this is fine as long as there is a one-to-one association between
machines and owners of those machines.  Consider the example I gave
earlier:  A shared machine containing the standard distribution of the
trusted computing software.  All the members of the group that maintain the
software will want to have the machine attest, to them, that it is properly
configured and operating as intended.  We can call the group the owner of the
machine, and create a single key pair that all of them know.  But this is
brittle - shared secrets always are.  Any member of the group could then
modify the machine and, using his access to the private key, fake the "all
clear" indication.  Each participant should have his own key pair, since
attestation using a particular key pair only indicates security with respect
to those who don't know the private key of the pair - and a member of a
development team for the secure kernel *should* mistrust his fellow team
members!

So, again, there are simple instances where it will prove useful to be able
to maintain multiple sets of independent key pairs.

Now, in the shared distribution machine case, on one level team members should
be mutually suspicious, but on another they *do* consider themselves joint
owners of the machine - so it doesn't bother them that there are key pairs
to which they don't have access.  After all, those key pairs are assigned to
*other* owners of the machine!  But exactly the same mechanism could be used
to assign a key pair to Virgin Records - who we *don't* want to consider an
owner of the machine.

As long as, by owner, you mean a single person, or a group of people who
completely trust each other (with respect to the security problem we are trying
to solve); and as long as each machine has only one owner; then, yes, one
key pair will do.  But as soon as owner can encompass mutually suspicious
parties, you need to have mutual independent key pairs - and then how you
use them, and to whom you grant them, becomes a matter of choice and policy,
not technical possibility.

BTW, even with a single owner, multiple independent key pairs may be useful.
Suppose I have reason to suspect that my private key has been leaked.  What
can I do?  If there is only one key pair around, I have to rebuild my machine
from scratch.  But if I had the foresight to generate *two* key pairs, one of
which I use regularly - and the other of which I sealed away in a safe - then
I can go to the safe, get out my backup key pair, and re-certify my machine.
In fact, it would probably be prudent for me to generate a whole bunch of
such backup key pairs, just in case.
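
A sketch of the backup-key idea, purely illustrative (a real TPM would
generate and guard these keys internally; here the spares are simply
serialized so they could be printed and locked in a safe):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Generate a working attestation key pair plus several spares up front.
    working_key = Ed25519PrivateKey.generate()
    backup_keys = [Ed25519PrivateKey.generate() for _ in range(3)]

    # Serialize the spares for offline storage (encrypt them in practice).
    backups_pem = [
        k.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.PKCS8,
            encryption_algorithm=serialization.NoEncryption(),
        )
        for k in backup_keys
    ]

    # If the working private key is suspected leaked, retire it and
    # re-certify the machine under one of the sealed-away spares
    # instead of rebuilding from scratch.
    replacement_key = backup_keys[0]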

You're trying to make the argument that feature X (here, remote attestation for
multiple mutually-suspicious parties) has no significant uses.  Historically,
arguments like this are losers.  People come up with uses for all kinds of
surprising things.  In this case, it's not even very hard.

An argument that feature X has uses, but also imposes significant and non-
obvious costs, is another thing entirely.  Elucidating the costs is valuable.
But ultimately individuals will make their own analysis of the cost/benefit
ratio, and their calculations will be different from yours.  Carl Ellison, I
think, argued that TCPA will probably never have large penetration because the
dominant purchasing factor for consumers is always initial cost, and the
extra hardware will ensure that TCPA-capable machines will always be more
expensive.  Maybe he's right.

Even if he isn't, as long as people believe that they have control over the
costs associated with 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-29 Thread bear


On Tue, 23 Dec 2003, Seth David Schoen wrote:

When attestation is used, it likely will be passed in a service like
HTTP, but in a documented way (for example, using a protocol based on
XML-RPC).  There isn't really any security benefit obtained by hiding
the content of the attestation _from the party providing it_!

It's not the parties who are interested in security alone that we're worried
about.  There is an advantage in profiling and market research, so I
expect anyone able to effectively subvert the protocols to attempt
to hide the content of remote attestation.

Bear



Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Jerrold Leichter wrote:
| *Any* secure computing kernel that can do
| the kinds of things we want out of secure computing kernels, can also
| do the kinds of things we *don't* want out of secure computing kernels.

David Wagner wrote:
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.

Jerrold Leichter wrote:
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

Good.  I'm glad we agree that one can build a secure kernel without
remote attestation; that's progress.  But I dispute your claim that remote
attestation is critical to securing our machines.  As far as I can see,
remote attestation seems (with some narrow exceptions) pretty close to
worthless for the most common security problems that we face today.

Your argument is premised on the assumption that it is critical to defend
against attacks where an adversary physically tampers with your machine.
But that premise is wrong.

Quick quiz: What's the dominant threat to the security of our computers?
It's not attacks on the hardware, that's for sure!  Hardware attacks
aren't even in the top ten.  Rather, our main problems are with insecure
software: buffer overruns, configuration errors, you name it.

When's the last time someone mounted a black bag operation against
your computer?  Now, when's the last time a worm attacked your computer?
You got it-- physical attacks are a pretty minimal threat for most users.

So, if software insecurity is the primary problem facing us, how does
remote attestation help with software insecurity?  Answer: It doesn't, not
that I can see, not one bit.  Sure, maybe you can check what software is
running on your computer, but that doesn't tell you whether the software
is any good.  You can check whether you're getting what you asked for,
but you have no way to tell whether what you asked for is any good.

Let me put it another way.  Take a buggy, insecure application, riddled
with buffer overrun vulnerabilities, and add remote attestation.  What do
you get?  Answer: A buggy, insecure application, riddled with buffer
overrun vulnerabilities.  In other words, remote attestation doesn't
help if your trusted software is untrustworthy -- and that's precisely
the situation we're in today.  Remote attestation just doesn't help with
the dominant threat facing us right now.

For the typical computer user, the problems that remote attestation solves
are in the noise compared to the real problems of computer security
(e.g., remotely exploitable buffer overruns in applications).  Now,
sure, remote attestation is extremely valuable for a few applications,
such as digital rights management.  But for typical users?  For most
computer users, rather than providing an order of magnitude improvement
in security, it seems to me that remote attestation will be an epsilon
improvement, at best.



Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-28 Thread Seth David Schoen
Antonomasia writes:

 From: Carl Ellison [EMAIL PROTECTED]
 
  Some TPM-machines will be owned by people who decide to do what I
  suggested: install a personal firewall that prevents remote attestation.
 
 How confident are you this will be possible ?  Why do you think the
 remote attestation traffic won't be passed in a widespread service
 like HTTP - or even be steganographic ?

The main answer is that the TPM will let you disable attestation, so
you don't even have to use a firewall (although if you have a LAN, you
could have a border firewall that prevented anybody on the LAN from
using attestation within protocols that the firewall was sufficiently
familiar with).

When attestation is used, it likely will be passed in a service like
HTTP, but in a documented way (for example, using a protocol based on
XML-RPC).  There isn't really any security benefit obtained by hiding
the content of the attestation _from the party providing it_!

Certainly attestation can be used as part of a key exchange so that
subsequent communications between local software and a third party are
hidden from the computer owner, but because the attestation must
happen before that key exchange is concluded, you can still examine
and destroy the attestation fields.
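
A toy sketch of the kind of border filter being described, assuming the
attestation travels as a documented field in an XML-RPC-style request (the
element name and request shape here are made up for illustration):

    import xml.etree.ElementTree as ET

    def strip_attestation(request_xml: str) -> str:
        # Remove any <attestation> elements from the request body.
        root = ET.fromstring(request_xml)
        for parent in root.iter():
            for child in list(parent):
                if child.tag == "attestation":
                    parent.remove(child)
        return ET.tostring(root, encoding="unicode")

    request = """<methodCall>
      <methodName>service.login</methodName>
      <params><param><value>alice</value></param></params>
      <attestation>BASE64-PCR-QUOTE-HERE</attestation>
    </methodCall>"""

    print(strip_attestation(request))  # the attestation field is gone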

One problem is that a client could use HTTPS to establish a session
key for a session within which an attestation would be presented.
That might disable your ability to use the border firewall to block
the attestation, but you can still disable it in the TPM on that
machine if you control the machine.

The steganographic thing is implausible because the TPM is a passive
device which can't control other components in order to get them to
signal information.

-- 
Seth David Schoen [EMAIL PROTECTED] | Very frankly, I am opposed to people
 http://www.loyalty.org/~schoen/   | being programmed by others.
 http://vitanuova.loyalty.org/ | -- Fred Rogers (1928-2003),
   |464 U.S. 417, 445 (1984)



Re: example: secure computing kernel needed

2003-12-28 Thread William Arbaugh


I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.
It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.
That is the difference, but my point is that the result with respect to 
the control of your computer is the same. The distant end either 
communicates with you or it doesn't. In authentication, the distant end 
uses your identity to make that decision. In remote attestation, the 
distant end uses your computer's configuration (the computer's identity 
to some degree) to make that same decision.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?

My statement was that the two are similar to the degree to which the 
distant end has control over your computer. The difference is that in 
remote attestation we are authenticating a system and we have some 
assurance that the system won't deviate from its programming/policy (of 
course all of the code used in these applications will be formally 
verified :-)). In user authentication, we're authenticating a human and 
we have significantly less assurance that the authenticated subject in 
this case (the human) will follow policy. That is why remote 
attestation and authentication produce different side effects enabling 
different applications: the underlying nature of the authenticated 
subject. Not because of a difference in the technology.



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation (authentication of configurations) and
strong authentication (authentication of identity).  Remote attestation
provides the ability for negative attestation of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow negative attestation of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.

Well - biometrics raises some interesting Gattaca issues.  But I'm not
going to go there on the list. It is a discussion that is better done 
over a few pints.

So to summarize- I was focusing only on the control issue and noting 
that even though the two technologies enable different applications 
(due to the assurance that we have in how the authenticated subject 
will behave), they are very similar in nature.




Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-26 Thread Rick Wash
On Sun, Dec 21, 2003 at 08:55:16PM -0800, Carl Ellison wrote:

   IBM has started rolling out machines that have a TPM installed. 
 [snip ...]
 Then again, TPMs cost money and I don't know any private individuals who are
 willing to pay extra for a machine with one.  Given that, it is unlikely
 that TPMs will actually become a popular feature.

Personally, I own a laptop (T30) with the TPM chip, and I paid extra for the
chip, but that is because I am a researcher interested in seeing what I can
get the chip to do.

I think that it is possible that they will sell a lot of TPM chips.  IBM is
currently calling it the IBM Security Subsystem 2.0 or something like
that, which sounds a lot less threatening and more useful than trusted
platform module.  It depends a lot on the marketing strategy.  If they can
make it sound useful, that will take them far.
 
   Some TPM-machines will be owned by people who decide to do what I
 suggested: install a personal firewall that prevents remote attestation.
 With wider dissemination of your reasoning, that number might be higher than
 it would be otherwise.

Agreed.  The first thing I did when writing code was to figure out how to
turn it off.  Then I figured out how to enable most of the functionality
while disabling the built-in attestation key.
 
   Meanwhile, there will be hackers who accept the challenge of
 defeating the TPM.  There will be TPM private keys loose in the world,
 operated by software that has no intention of telling the truth to remote
 challengers.  

And this will be simpler than most people think.  From what I understand
about the current TPM designs, the TPM chip is NOT designed to be
tamper-resistant.  The IBM researchers told me that it is possible to read
the secrets from the TPM chip with a standard bus reader.  I've been meaning
to wander over to the Computer Engineering department and borrow one of
those to verify this claim.

Based on this, it shouldn't be hard for a set of people to extract their 
keys from their TPM chips and spread them around the internet, emulating a
real TPM.  This I see as a major stumbling block for DRM systems based on
TCPA.  TCPA works very well against purely-software threats, but as far as
protecting against computer owners and determined attackers, I'm not so
sure.

   At this point, a design decision by the TCPA (TCG) folks comes into
 play.  There are ways to design remote attestation that preserve privacy and
 there are ways that allow linkage of transactions by the same TPM.  

   Either of these outcomes will kill the TCG, IMHO.

I agree.  This is why to make the TPM a success, specifically for something
like DRM, the companies advocating it will have to convince the users that
it is a good thing.  This is the same problem they have now.  They have to
make the users *want* to use the trusted DRM features and *not* want to
subvert them.   They can do this by making the DRM features mostly unseen
and providing cheap and effective ways for people to get the media that they
want in the formats that they want.  If they try to fight their own users,
there will be enough ways of getting around TCPA for the users to fight
back.
 
   You postulated that someday, when the TPM is ubiquitous, some
 content providers will demand remote attestation.  I claim it will never
 become ubiquitous, because of people making my choice - and because it takes
 a long time to replace the installed base - and because the economic model
 for TPM deployment is seriously flawed.  

Well, there are a couple things that could change this.  If other, non-DRM
uses of the TPM chip become popular (say for example that everyone wants to
use it to encrypt their hard drive), then that could speed deployment of the
chip, since that functionality is also bundled with the remote attestation
functionality.  I know that then creates a market for a chip that does what
is needed without the remote attestation functionality, but it then becomes
business, not technology, that determines which people buy.

 If various service or content providers elect not to allow me service
 unless I do remote attestation, I then have 2 choices: use the friendly
 web service that will lie for me - or decline the content or service.

Correct.  However, this is where copyright and other government-granted
monopolies come into play.  If I want a specific piece of copyrighted
material (say, a song), I have to either deal with the copyright owner
(RIAA) on their terms (remote attestation), not get the song, or break the
law.  None of those three alternatives sound very good.   The best chance is
education of the masses, so everyone chooses one of the latter two and makes
it economically infeasible for the RIAA to maintain their draconian terms.
Then we have a useful piece of hardware in our computers (TCPA), subsidised
largely by people like the RIAA, but who can't use it for economic reasons.
That would be the ideal outcome.

There are many 

Re: example: secure computing kernel needed

2003-12-26 Thread Seth David Schoen
William Arbaugh writes:

 If that is the case, then strong authentication provides the same 
 degree of control over your computer. With remote attestation, the 
 distant end determines if they wish to communicate with you based on 
 the fingerprint of your configuration. With strong authentication, the 
 distant end determines if they wish to communicate with you based on 
 your identity.

I'm a little confused about why you consider these similar.  They seem
very different to me, particularly in the context of mass-market
transactions, where a service provider is likely to want to deal with
the general public.

While it's true that service providers could try to demand
some sort of PKI credential as a way of getting the true name of those
they deal with, the particular things they can do with a true name are
much more limited than the things they could do with proof of
someone's software configuration.  Also, in the future, the cost of
demanding a true name could be much higher than the cost of demanding
a proof of software identity.

To give a trivial example, I've signed this paragraph using a PGP
clear signature made by my key 0167ca38.  You'll note that the Version
header claims to be PGP 17.0, but in fact I don't have a copy of PGP
17.0.  I simply modified that header with my text editor.  You can tell
that this paragraph was written by me, but not what software I used to
write it.

As a result, you can't usefully expect to take any action based on my
choice of software -- but you can take some action based on whether
you trust me (or the key 0167ca38).  You can adopt a policy that you
will only read signed mail -- or only mail signed by a key that Phil
Zimmermann has signed, or a key that Bruce Lehman has signed -- but
you can't adopt a policy that you will only read mail written by mutt
users.  In the present environment, it's somewhat difficult to use
technical means to increase or diminish others' incentive to use
particular software (at least if there are programmers actively
working to preserve interoperability).

Sure, attestation for platform identity and integrity has some things
in common with authentication of human identity.  (They both use
public-key cryptography, they can both use a PKI, they both attempt to
prove things to a challenger based on establishing that some entity
has access to a relevant secret key.)  But it also has important
differences.  One of those differences has to do with whether trust is
reposed in people or in devices!  I think your suggestion is tantamount
to saying that an electrocardiogram and a seismograph have the same
medical utility because they are both devices for measuring and
recording waveforms.

 I just don't see remote attestation as providing control over your 
 computer provided the user/owner has control over when and if remote 
 attestation is used. Further, I can think of several instances where 
 remote attestation is a good thing. For example, a privacy P2P file 
 sharing network. You wouldn't want to share your files with an RIAA 
 modified version of the program that's designed to break the anonymity 
 of the network.

This application is described in some detail at

http://www.eecs.harvard.edu/~stuart/papers/eis03.pdf

I haven't seen a more detailed analysis of how attestation would
benefit particular designs for anonymous communication networks
against particular attacks.  But it's definitely true that there are
some applications of attestation to third parties that many computer
owners might want.  (The two that first come to mind are distributed
computing projects like [EMAIL PROTECTED] and network games like Quake,
although I have a certain caution about the latter which I will
describe when the video game software interoperability litigation I'm
working on is over.)

It's interesting to note that in this case you benefit because you
received an attestation, not because you gave one (although the
network is so structured that giving an attestation is arranged to be
the price of receiving one: Give me your name, horse-master, and I
shall give you mine!).

The other thing that end-users might like is if _non-peer-to-peer_
services they interacted with could prove properties about themselves
-- that is, end-users might like to receive rather than to give
attestations.  An anonymous remailer could give an attestation to
prove that it is really running the official Mixmaster and the
official Exim and not a modified Mixmaster or modified Exim that
try to break anonymity.  Apple could give an attestation proving that
it didn't have the ability to alter or to access the contents of
your data while it was stored by its Internet hard drive service.

One interesting question is how to characterize on-line services where
users would be asked for attestation (typically to their detriment, by
way of taking away their choice of software) as opposed to on-line
services where users would be able to ask for attestation (typically
to their 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-23 Thread Anne Lynn Wheeler
At 03:03 PM 12/21/2003 -0800, Seth David Schoen wrote:
Some people may have read things like this and mistakenly thought that
this would not be an opt-in process.  (There is some language about
how the user's platform takes various actions and then responds to
challenges, and perhaps people reasoned that it was responding
autonomously, rather than under its user's direction.)
my analogy ... at least in online scenario has been to wild, wild west 
before there were traffic conventions, traffic signs, lane markers, traffic 
lights, standards for vehicles ... misc. traffic rules about operating an 
unsafe vehicle and driving recklessly, various minimums about traffic 
regulations, and things like insurance requirements to cover the cost of 
accidents. infected machines that do distributed DOS attacks ... might be
considered analogous to large overloaded trucks w/o operational brakes
(giving rise to truck inspection and weighing stations).  many ISPs are
already monitoring, accounting and controlling various kinds of activity
with respect to amount of traffic, simultaneous log-ins, etc.  If there are
sufficient online incidents ... then it could be very easy to declare
machines that become infected and are used as part of various unacceptable
behavior to be unsafe vehicles, and some sort of insurance could be
required to cover the costs associated with unsafe and reckless driving
on the internet. Direct costs to individuals may go up ... but the unsafe 
and reckless activities currently going on represent enormous 
infrastructure costs.  Somewhat analogy to higher insurance premiums for 
less safe vehicles, government minimums for crash tests, bumper 
conventions, seat belts, air bags, etc.

part of the issue is that some number of the platforms never had original 
design point of significant interaction on a totally open and free internet 
(long ago and far away, vehicles that didn't have bumpers, crash tests, 
seat belts, air bags, safety glass, etc). Earlier in the original version 
of this thread ... I made reference to some number of systems from 30 or 
more years ago ... that were designed to handle such environments  and 
had basic security designed in from the start ... were found to be not 
subject to majority of the things that are happening to lots of the current 
internet connected platforms.
http://www.garlic.com/~lynn/aadsm16.htm#8 example: secure computing kernel 
needed

misc. past analogies to unsafe and reckless driving on the internet:
http://www.garlic.com/~lynn/aadsm14.htm#14 blackhole spam = mail 
unreliability (Re: A Trial Balloon to Ban Email?)
http://www.garlic.com/~lynn/aadsm14.htm#15 blackhole spam = mail 
unreliability (Re: A Trial Balloon to Ban Email?)
http://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic 
rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#28 Internet like city w/o traffic 
rules, traffic signs, traffic lights  and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#29 Internet like city w/o traffic 
rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#30 Internet like city w/o traffic 
rules, traffic signs, traffic lights and traffic enforcement
http://www.garlic.com/~lynn/2001m.html#31 Internet like city w/o traffic 
rules, traffic signs, traffic lights   and traffic enforcement
http://www.garlic.com/~lynn/2002p.html#27 Secure you PC or get kicked off 
the net?
http://www.garlic.com/~lynn/2003i.html#17 Spam Bomb
http://www.garlic.com/~lynn/2003m.html#21 Drivers License required for surfing?

--
Anne  Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 



Re: example: secure computing kernel needed

2003-12-23 Thread David Wagner
William Arbaugh  wrote:
David Wagner writes:
 As for remote attestation, it's true that it does not directly let a remote
 party control your computer.  I never claimed that.  Rather, it enables
 remote parties to exert control over your computer in a way that is
 not possible without remote attestation.  The mechanism is different,
 but the end result is similar.

If that is the case, then strong authentication provides the same 
degree of control over your computer. With remote attestation, the 
distant end determines if they wish to communicate with you based on 
the fingerprint of your configuration. With strong authentication, the 
distant end determines if they wish to communicate with you based on 
your identity.

I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.

It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation (authentication of configurations) and
strong authentication (authentication of identity).  Remote attestation
provides the ability for negative attestation of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow negative attestation of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.
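
As a rough illustration of that second-order difference, a (hypothetical)
server-side policy over an attested configuration could check both a
required list and a forbidden list; strong authentication has no analogue
of the forbidden half:

    def config_acceptable(attested, required, forbidden):
        # Positive attestation: everything required is present.
        # Negative attestation: nothing forbidden is present.
        return required <= attested and not (forbidden & attested)

    reported = {"realaudio-player", "os-kernel"}          # names are made up
    print(config_acceptable(reported,
                            required={"realaudio-player"},
                            forbidden={"microsoft-audio-player"}))  # True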



RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-23 Thread Antonomasia
From: Carl Ellison [EMAIL PROTECTED]

   Some TPM-machines will be owned by people who decide to do what I
 suggested: install a personal firewall that prevents remote attestation.

How confident are you this will be possible ?  Why do you think the
remote attestation traffic won't be passed in a widespread service
like HTTP - or even be steganographic ?

-- 
##
# Antonomasia   ant notatla.org.uk   #
# See http://www.notatla.org.uk/ #
##



Re: example: secure computing kernel needed

2003-12-23 Thread Jerrold Leichter
|  We've met the enemy, and he is us.  *Any* secure computing kernel that can do
|  the kinds of things we want out of secure computing kernels, can also do the
|  kinds of things we *don't* want out of secure computing kernels.
| 
|  I don't understand why you say that.  You can build perfectly good
|  secure computing kernels that don't contain any support for remote
|  attestation.  It's all about who has control, isn't it?
| 
| There is no control of your system with remote attestation. Remote
| attestation simply allows the distant end of a communication to
| determine if your configuration is acceptable for them to communicate
| with you.
|
| But you missed my main point.  Leichter claims that any secure kernel is
| inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
| My main point is that this is simply not so.
|
| There are two very different pieces here: that of a secure kernel, and
| that of remote attestation.  They are separable.  TCPA and Palladium
| contain both pieces, but that's just an accident; one can easily imagine
| a Palladium-- that doesn't contain any support for remote attestation
| whatsoever.  Whatever you think of remote attestation, it is separable
| from the goal of a secure kernel.
|
| This means that we can have a secure kernel without all the harms.
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.  Leichter's claim
| is wrong
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

The issues have been discussed by others in this stream of messages, but
let's pull them together.  Suppose I wished to put together a secure system.
I choose my open-source software, perhaps relying on the word of others,
perhaps also checking it myself.  I choose a suitable hardware base.  I put
my system together, install my software - voila, a secure system.  At least,
it's secure at that moment in time.  How do I know, the next time I come to
use it, that it is *still* secure - that no one has slipped in and modified
the hardware, or found a bug and modified the software?

I can go for physical security.  I can keep the device with me all the time,
or lock it in a secure safe.  I can build it using tamper-resistant and
tamper-evident mechanisms.  If I go with the latter - *much* easier - I have
to actually check the thing before using it, or the tamper evidence does me
no good ... which acts as a lead-in to the more general issue.

Hardware protections are fine, and essential - but they can only go so far.
I really want a software self-check.  This is an idea that goes way back:
Just as the hardware needs to be both tamper-resistant and tamper-evident,
so for the software.  Secure design and implementation gives me tamper-
resistance.  The self-check gives me tamper evidence.  The system must be able
to prove to me that it is operating as it's supposed to.
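
A bare-bones sketch of such a software self-check - tamper evidence only,
and only as trustworthy as the code doing the checking and the channel that
reports the result (the file names and digests are placeholders):

    import hashlib

    def measure(path):
        # Hash a file so later runs can detect modification.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Reference values recorded while the system was known to be good.
    EXPECTED = {
        "/boot/kernel": "ab12...",   # fill in the known-good digests
        "/sbin/init":   "cd34...",
    }

    def self_check():
        return all(measure(p) == digest for p, digest in EXPECTED.items())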

OK, so how do I check the tamper-evidence?  For hardware, either I have to be
physically present - I can hold the box in my hand and see that no one has
broken the seals - or I need some kind of remote sensor.  The remote sensor
is a hazard:  Someone can attack *it*, at which point I lose my tamper-
evidence.

There's no way to directly check the software self-check features - I can't
directly see the contents of memory! - but I can arrange for a special highly-
secure path to the self-check code.  For a device I carry with me, this could
be as simple as a "self-check passed" LED controlled by dedicated hardware
accessible only to the self-check code.  But how about a device I may need
to access remotely?  It needs a kind of remote attestation - though a
strictly limited one, since it need only be able to attest proper operation
*to me*.  Still, you can see the slope we are on.

The slope gets steeper.  *Some* machines are going to be shared.  Somewhere
out there is the CVS repository containing the secure kernel's code.  That
machine is updated by multiple developers - and I certainly want *it* to be
running my security kernel!  The developers should check that the machine is
configured properly before trusting it, so it should be able to give a
trustworthy indication of its own trustworthiness to multiple developers.
This *could* be based on a single secret shared among the machine and all
the developers - but would you really want it to be?  Wouldn't it be better
if each developer shared a unique secret with the machine?

You can, indeed, stop anywhere along this slope.  You can decide you really
don't need remote attestation, even for yourself - you'll carry the machine
with you, or only use it when you are physically in front of it.  Or you
can 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Ben Laurie
Carl Ellison wrote:
We see here a difference between your and my sides of the Atlantic.  Here in
the US, almost no one has a smart card.
Of those cards you carry, how many are capable of doing public key
operations?  A simple memory smartcard doesn't count for what we were
talking about.
I don't know. If you can tell me how to find out, I'd be happy to 
investigate. I have quite a few that are no longer needed, so 
destructive investigation is possible :-)

BTW, I forgot the two smartcards that are used by my Sky satellite TV stuff.

There are other problems with doing TCPA-like operations with a smartcard,
but I didn't go into those.  The biggest one to chew on is that I, the
computer owner, need verification that my software is in good shape.  My
agent in my computer (presumably the smartcard) needs a way to examine the
software state of my computer without relying on any of the software in my
computer (which might have been corrupted, if the computer's S/W has been
corrupted).  This implies to me that my agent chip needs a H/W path for
examining all the S/W of my computer.  That's something the TPM gives us
that a smartcard doesn't (when that smartcard goes through a normal device
driver to access its machine).
I'm not arguing with this - just the economic argument about number of 
smartcards.
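
For concreteness, the measurement chain being described accumulates roughly
like this (TPM 1.x uses SHA-1; this sketch shows only the extend-and-compare
idea, not the real command interface, and the component names are invented):

    import hashlib

    def extend(pcr, measurement):
        # PCR extend: new value = H(old value || H(component)).
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

    # Each boot stage hashes the next component before handing control to it.
    boot_chain = [b"BIOS", b"boot loader", b"kernel", b"kernel modules"]

    pcr = bytes(20)                 # PCRs start at all zeros on reset
    for component in boot_chain:
        pcr = extend(pcr, component)

    # A verifier holding the expected component hashes recomputes the same
    # value and compares it with the (signed) PCR the TPM reports.
    print(pcr.hex())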

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff


Re: example: secure computing kernel needed

2003-12-22 Thread Ed Reed
Remote attestation has use in applications requiring accountability of
the user, as a way for cooperating processes to satisfy themselves that
configurations and state are as they're expected to be, and not screwed
up somehow.
 
There are many business uses for such things, like checking to see
if locked-down kiosk computers have been modified (either hardware
or software), verifying that users have not exercised their god-given
right to install spy-ware and viruses (since they're running with
administrative privileges, aren't they?), and satisfying a consumer
that the server they're connected to is (or isn't) running software
that has adequate security domain protections to protect the user's
data (perhaps backup files) the user entrusts to the server.
 
What I'm not sure of is whether there are any anonymous / privacy
enhancing scenarios in which remote attestation is useful.  Well, the
last case, above, where the server is attesting to the client, could work.
But what about the other way around?  The assumption I have is that
any remote attestation, even if anonymous, still will leave a trail
that might be used by forensic specialists for some form of traffic
analysis, if nothing else.
 
In that case, you'd need to trust your trusted computing system
not to provide remote attestation without your explicit assent.
 
I'd really like to see an open source effort to provide a high-assurance
TPM implementation, perhaps managed through a Linux 2.6 / LSM /
TPM driver talking to a TPM module.  Yes, the TPM identity and integrity
will still be rooted in its manufacturer (IBM, Intel, Asus, SiS, whomever).
But hell, we're already trusting them not to put TCP stacks into the BIOS
for PAL chips to talk to their evil bosses back in [fill in location of your
favorite evil empire, here]. Oh, wait a minute - Phoenix is working
on that, too, aren't they?
 
I see the TPM configuration management tool as a way to provide
a trusted boot path, complete with automagical inventory of approved
hardware devices, so that evaluated operating systems, like Solaris
and Linux, can know whether they're running on hardware whose firmware
and circuitry are known (or believed) not to have been subverted, or to have
certain EMI / Tempest characteristics.  Mass-market delivery of
what are usually statically configured systems that still retain their
C2/CC-EAL4 ratings.
 
But more important is where TPM and TCPA lead Intel and IBM, towards
increasing virtualization of commodity hardware, like Intel's LaGrande
strategy to restore a trusted protection ring (-1) to their processors,
which will make it easier to get real, proper virtualization with trusted
hypervisors back into common use.
 
The fact that Hollywood thinks they can use the technology, and thus
they're willing to underwrite its development, is fortuitous, as long as
the trust is based on open transparent reviews and certifications.
 
Maybe the FSF and EFF will create their own certification program, to
review and bless TPM ring -1 implementations, just to satisfy the
slashdot crowd...
 
Maybe they should.
 
Ed

 William Arbaugh [EMAIL PROTECTED] 12/18/2003 5:33:00 PM 


On Dec 16, 2003, at 5:14 PM, David Wagner wrote:

 Jerrold Leichter  wrote:
  We've met the enemy, and he is us.  *Any* secure computing kernel that can do
  the kinds of things we want out of secure computing kernels, can also
  do the kinds of things we *don't* want out of secure computing kernels.

 I don't understand why you say that.  You can build perfectly good
 secure computing kernels that don't contain any support for remote
  attestation.  It's all about who has control, isn't it?


There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you. As such, remote attestation allows communicating parties to 
determine with whom they communicate or share services. In that 
respect, it is just like caller id. People should be able to either 
attest remotely, or block it just like caller id. Just as the distant 
end can choose to accept or not accept the connection.



Re: example: secure computing kernel needed

2003-12-22 Thread David Wagner
William Arbaugh  wrote:
On Dec 16, 2003, at 5:14 PM, David Wagner wrote:
 Jerrold Leichter  wrote:
 We've met the enemy, and he is us.  *Any* secure computing kernel that can
 do the kinds of things we want out of secure computing kernels, can also do
 the kinds of things we *don't* want out of secure computing kernels.

 I don't understand why you say that.  You can build perfectly good
 secure computing kernels that don't contain any support for remote
 attribution.  It's all about who has control, isn't it?

There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you.

But you missed my main point.  Leichter claims that any secure kernel is
inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
My main point is that this is simply not so.

There are two very different pieces here: that of a secure kernel, and
that of remote attestation.  They are separable.  TCPA and Palladium
contain both pieces, but that's just an accident; one can easily imagine
a Palladium-- that doesn't contain any support for remote attestation
whatsoever.  Whatever you think of remote attestation, it is separable
from the goal of a secure kernel.

This means that we can have a secure kernel without all the harms.
It's not hard to build a secure kernel that doesn't provide any form of
remote attestation, and almost all of the alleged harms would go away if
you remove remote attestation.  In short, you *can* have a secure kernel
without having all the kinds of things we don't want.  Leichter's claim
is wrong.

This is an important point.  It seems that some TCPA and Palladium
advocates would like to tie together security with remote attestation; it
appears they would like you to believe you can't have a secure computer
without also enabling DRM, lock-in, and the other harms.  But that's
simply wrong.  We can have a secure computer without enabling all the
alleged harms.  If we don't like the effects of TCPA and Palladium,
there's no reason we need to accept them.  We can have perfectly good
security without TCPA or Palladium.

As for remote attestation, it's true that it does not directly let a remote
party control your computer.  I never claimed that.  Rather, it enables
remote parties to exert control over your computer in a way that is
not possible without remote attestation.  The mechanism is different,
but the end result is similar.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-22 Thread Carl Ellison
Seth,

that was a very good and interesting reply.  Thank you.

IBM has started rolling out machines that have a TPM installed.  If
other companies do that too (and there might be others that do already -
since I don't follow this closely) then gradually the installed base of
TPM-equipped machines will grow.  It might take 10 years - or even more -
before every machine out there has a TPM.  However, that day may well come.
Then again, TPMs cost money and I don't know any private individuals who are
willing to pay extra for a machine with one.  Given that, it is unlikely
that TPMs will actually become a popular feature.

Some TPM-machines will be owned by people who decide to do what I
suggested: install a personal firewall that prevents remote attestation.
With wider dissemination of your reasoning, that number might be higher than
it would be otherwise.

Meanwhile, there will be hackers who accept the challenge of
defeating the TPM.  There will be TPM private keys loose in the world,
operated by software that has no intention of telling the truth to remote
challengers.  There might even be one or more web services out there with a
pool of such keys, offering to do an attestation for you telling whatever
lie you want to tell.  With such a service in operation, it is doubtful that
a service or content provider would put much faith in remote attestation -
and that, too, might kill the effort.

At this point, a design decision by the TCPA (TCG) folks comes into
play.  There are ways to design remote attestation that preserve privacy and
there are ways that allow linkage of transactions by the same TPM.  If the
former is chosen, then the web service needs very few keys.  If the privacy
protection is perfect, then the web service needs only 1 key.  If the
privacy violation is very strong, then the web service won't work, but the
TCG folks will have set themselves up for a massive political campaign
around its violation of user privacy.

Either of these outcomes will kill the TCG, IMHO.

This is the reason that, when I worked for a hardware company active
in the TCPA(TCG), I argued strongly against supporting remote attestation.
I saw no way that it could succeed.

Meanwhile, I am no longer in that company.  I have myself to look
out for.  If I get a machine with a TPM, I will make sure I have the
firewall installed.  I will use the TPM for my own purposes and let the rest
of the world think that I have an old machine with no TPM.

You postulated that someday, when the TPM is ubiquitous, some
content providers will demand remote attestation.  I claim it will never
become ubiquitous, because of people making my choice - and because it takes
a long time to replace the installed base - and because the economic model
for TPM deployment is seriously flawed.  If various service or content
providers elect not to allow me service unless I do remote attestation, I
then have 2 choices: use the friendly web service that will lie for me - or
decline the content or service.

The scare scenario you paint is one in which I am the lone voice of
concern floating in a sea of people who will happily give away their privacy
and allow some service or content provider to demand this technology on my
end.  In such a society, I would stand out and be subject to discrimination.
This is not a technical problem. This is a political problem. If that is a
real danger, then we need to educate those people.

RIAA and MPAA have been hoping for some technological quick fix to
let them avoid facing the hard problem of dealing with people who don't
think the way they would like people to think.  It seems to me that you and
John Gilmore and others are doing exactly the same thing - hoping for
technological censorship to succeed so that you can avoid facing the hard
problem of dealing with people who don't think the way they should (in this
case, the people who happily give away their privacy and accept remote
attestation in return for dancing pigs).  I don't have the power to stop
this technology if folks decide to field it.  I have only my own reason and
skills.

 - Carl


+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

 -Original Message-
 From: Seth David Schoen [mailto:[EMAIL PROTECTED] On Behalf Of 
 Seth David Schoen
 Sent: Sunday, December 21, 2003 3:03 PM
 To: Carl Ellison
 Cc: 'Stefan Lucks'; [EMAIL PROTECTED]
 Subject: Re: Difference between TCPA-Hardware and a smart 
 card (was: example: secure computing kernel needed)

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Peter Gutmann
Stefan Lucks [EMAIL PROTECTED] writes:

Currently, I have three smart cards in my wallet, which I did not want to own
and which I never paid for. I never used any of them. 

Conversation from a few years ago, about multifunction smart cards:

 - Multifunction smart cards are great, because they'll reduce the number of
[smart] cards we'll have to carry around.

 - I'm carrying zero smart cards, so it's working already!

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Ben Laurie
Carl Ellison wrote:
It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
money. Therefore, I am likely to have at most 1.
If I glance quickly through my wallet, I find 7 smartcards (all credit 
cards). Plus the one in my phone makes 8. So, run that at most 1 
argument past me again?

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Carl Ellison
We see here a difference between your and my sides of the Atlantic.  Here in
the US, almost no one has a smart card.

Of those cards you carry, how many are capable of doing public key
operations?  A simple memory smartcard doesn't count for what we were
talking about.

There are other problems with doing TCPA-like operations with a smartcard,
but I didn't go into those.  The biggest one to chew on is that I, the
computer owner, need verification that my software is in good shape.  My
agent in my computer (presumably the smartcard) needs a way to examine the
software state of my computer without relying on any of the software in my
computer (which might have been corrupted, if the computer's S/W has been
corrupted).  This implies to me that my agent chip needs a H/W path for
examining all the S/W of my computer.  That's something the TPM gives us
that a smartcard doesn't (when that smartcard goes through a normal device
driver to access its machine).

 - Carl


+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

 -Original Message-
 From: Ben Laurie [mailto:[EMAIL PROTECTED] 
 Sent: Friday, December 19, 2003 2:42 AM
 To: Carl Ellison
 Cc: 'Stefan Lucks'; [EMAIL PROTECTED]
 Subject: Re: Difference between TCPA-Hardware and a smart 
 card (was: example: secure computing kernel needed)
 
 Carl Ellison wrote:
  It is an advantage for a TCPA-equipped platform, IMHO.  
 Smart cards cost
  money. Therefore, I am likely to have at most 1.
 
 If I glance quickly through my wallet, I find 7 smartcards 
 (all credit 
 cards). Plus the one in my phone makes 8. So, run that at most 1 
 argument past me again?
 
 Cheers,
 
 Ben.
 
 -- 
 http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
 
 There is no limit to what a man can do or how far he can go if he
 doesn't mind who gets the credit. - Robert Woodruff
 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Carl Ellison
Stefan,

I replied to much of this earlier, so I'll skip those parts.

 - Carl

+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Lucks
 Sent: Tuesday, December 16, 2003 1:02 AM
 To: Carl Ellison
 Cc: [EMAIL PROTECTED]
 Subject: RE: Difference between TCPA-Hardware and a smart 
 card (was: example: secure computing kernel needed)
 
 On Mon, 15 Dec 2003, Carl Ellison wrote:


 The point is that Your system is not supposed to prevent You 
 from doing
 anything I want you not to do! TCPA is supposed to lock You 
 out of some
 parts of Your system.

This has nothing to do with the TCPA / TPM hardware. This is a political
argument about the unclean origins of TCPA (as an attempt to woo Hollywood).

I, meanwhile, never did buy the remote attestation argument for high price
content.  It doesn't work.  So, I looked at this as an engineer.  OK, I've
got this hardware. If remote attestation is worthless, then I can and should
block that (e.g., with a personal firewall).  Now, if I do that, do I have
anything of value left?  My answer was that I did - as long as I could
attest about the state of the software to myself, the machine owner.

This required putting the origins of the project out of my head while I
thought about the engineering.  That took effort, but paid off (to me).

 
 
 [...]
  If it were my machine, I would never do remote attestation. 
  With that
  one choice, I get to reap the personal advantages of the TPM while
  disabling its behaviors that you find objectionable 
 (serving the outside
  master).
 
 I am not sure, whether I fully understand you. If you mean that TCPA
 comes with the option to run a secure kernel where you (as 
 the owner and
 physical holder of the machine running) have full control 
 over what the
 system is doing and isn't doing -- ok, that is a nice thing. 
 On the other
 hand, we would not need a monster such as TCPA for this.

What we need is some agent of mine - a chip - that:
1) has access to the machine guts, so it can verify S/W state,
2) has a cryptographic channel to me, so it can report that result to me, and
3) has its own S/W in a place where no attacker could get to it, even if
that attacker had complete control over the OS.

The TCPA/TPM can be used that way.  Meanwhile, the TPM has no channel to the
outside world, so it is not capable of doing remote attestation by itself.
You need to volunteer to allow such communications to go through. If you
don't like them, then block them.  Problem solved.  This reminds me of the
abortion debate bumper sticker.  If you're against abortion, don't have
one.
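
A minimal sketch of what this owner-directed attestation might look like, with
the agent simulated in ordinary Python.  A real TPM would hold the key and do
the measuring and signing in hardware; the Ed25519 key and the measurement
scheme here are assumptions chosen purely for illustration.

# Sketch: an "agent" that measures software state and signs the result so the
# owner can verify it.  The agent is simulated here; hardware would do this.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class AttestationAgent:
    def __init__(self):
        self._key = Ed25519PrivateKey.generate()    # private key never leaves the agent
        self.public_key = self._key.public_key()    # the owner records this once

    def quote(self, software_images):
        # Measure the supplied images and sign the combined digest.
        digest = hashlib.sha256(b"".join(software_images)).digest()
        return digest, self._key.sign(digest)

# Owner side: check that the signed measurement really came from the agent.
agent = AttestationAgent()
measurement, signature = agent.quote([b"kernel image bytes", b"init image bytes"])
agent.public_key.verify(signature, measurement)     # raises InvalidSignature if forged
print("software state attested to the owner:", measurement.hex())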

 - Carl

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Ernst Lippe
On Mon, 15 Dec 2003 19:02:06 -0500 (EST)
Jerrold Leichter [EMAIL PROTECTED] wrote:

 However, this advantage is there only because there are so few smart cards,
 and so few smart card enabled applications, around.

It is not really true that there are so few smartcards. Almost every
mobile phone contains one (the SIM module is a smartcard).

Also the situation in Europe is quite different from the USA.
Electronic purses on smart cards are pretty common here, especially in
France and the Netherlands, where most adults have at least one.

But it is true that there are only very few smart card enabled
applications.  I have worked on several projects for multifunctional
use of these smart cards and almost all of them were complete failures.

Ernst Lippe

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-20 Thread Anne Lynn Wheeler
At 10:51 AM 12/16/2003 +0100, Stefan Lucks wrote:

I agree with you: A good compromise between security and convenience is an
issue, when you are changing between different smart cards. E.g., I could
imagine using the smart card *once* when logging into my bank account,
and then only needing it, perhaps, to authorise a money transfer.
This is a difficult user interface issue, but something we should be able
to solve.
One problem of TCPA is the opposite user interface issue -- the user has
lost control over what is going on. (And I believe that this originates
much of the resistance against TCPA.)
In sci.crypt, there has been a thread discussing OTP (one-time pad) and how 
integrity and authentication come into play, with a sub-thread about whether 
authentication of a message involves checking the integrity of the contents 
and/or checking the origin of the message. A security taxonomy, PAIN:
* privacy (aka things like encryption)
* authentication (origin)
* integrity (contents)
* non-repudiation

http://www.garlic.com/~lynn/2003p.html#4 Does OTP need authentication?
http://www.garlic.com/~lynn/2003p.html#6 Does OTP need authentication?
http://www.garlic.com/~lynn/2003p.html#17 Does OTP need authentication?
One of the issues is that privacy, authentication, and integrity are 
totally different business processes, yet the same technology, let's say 
involving keys, might be involved in all three; e.g., digital signatures 
(public/private keys) can be used to simultaneously provide for 
authentication (of the sender) and integrity (of the message contents).

Both privacy (encryption) and authentication (say digital signatures) can 
involve keys that need protecting; privacy because key access needs to be 
controlled to prevent unauthorized access to data, authentication because 
unauthorized access to keys could lead to impersonation.

In the authentication case, involving public/private keys, the business 
requirement has sometimes led to guidelines that the private key is 
absolutely protected and things like key escrow are not allowed, because 
escrow could contribute to impersonation.

In the privacy case, involving public/private keys ... the business 
requirement can lead to guidelines that require mandated escrow of private 
key(s) because of business continuity issues.

This can create ambiguity where the same technology can be used for both 
authentication and privacy, but because the business processes are 
different, there can be a mandated requirement that the same keys are never 
used for both authentication and privacy ... and it is mandated that 
authentication keys are never escrowed and that privacy keys are always 
escrowed.
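
As a rough illustration of that policy, here is a sketch that generates
separate authentication and privacy key pairs and escrows only the privacy
key.  The RSA parameters and the escrow passphrase are placeholders, not a
recommendation or any particular standard's requirement.

# Sketch: the authentication key is never escrowed (escrow would enable
# impersonation); the privacy/encryption key is escrowed (losing it means
# losing the data).
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def generate_keypair():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

auth_key = generate_keypair()       # kept only in the user's token/TPM
privacy_key = generate_keypair()    # a copy goes to escrow

# Only the privacy key is exported, encrypted under an escrow passphrase.
escrow_copy = privacy_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"escrow-passphrase"),
)
print(escrow_copy.decode().splitlines()[0])   # -----BEGIN ENCRYPTED PRIVATE KEY-----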

A TCPA chip can also be used to protect private keys used in authentication 
... either authentication of the hardware component as its own entity (say, 
like a router in a large network), or possibly implied authentication of 
a person that owns or possesses the hardware component.

An authentication taxonomy is 3-factor authentication:
* something you have
* something you know
* something you are
A hardware token (possibly in chipcard form factor) can be designed to 
generate a unique public/private key pair inside the token, such that the 
private key never leaves the chip. Any digital signature that can be 
verified by the corresponding public key can be used to imply something 
you have authentication (i.e. the digital signature is assumed to have 
originated from a specific hardware token). A hardware token can also be 
designed to only operate in a specific way when the correct PIN/password has 
been entered ... in which case the digital signature can imply two-factor 
authentication, both something you have and something you know.
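
A toy model of such a PIN-gated token, assuming the key pair is generated
inside the token and the PIN check happens there as well.  All names and the
PIN are invented for illustration; a real token enforces this in hardware.

# Toy model: the token refuses to sign until the correct PIN has been
# presented ("something you have" plus "something you know").
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Token:
    def __init__(self, pin):
        self._pin = pin
        self._key = Ed25519PrivateKey.generate()   # private key never leaves the token
        self._unlocked = False
        self.public_key = self._key.public_key()

    def present_pin(self, pin):
        self._unlocked = (pin == self._pin)

    def sign(self, message):
        if not self._unlocked:
            raise PermissionError("PIN not verified; token refuses to sign")
        return self._key.sign(message)

token = Token(pin="4711")                          # PIN is illustrative
token.present_pin("4711")
msg = b"transfer 100 EUR to account 12345"
token.public_key.verify(token.sign(msg), msg)      # verifying implies two-factor auth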

From a business process standpoint it would be perfectly consistent to 
mandate that there is never key escrow for keys involved in authentication 
business process while at the same time mandating key escrow for keys 
involved in privacy.

At issue in business continuity are business requirements for things like 
no single point of failure, offsite storage of backups, etc. The threat 
model is: 1) the data in business files can be one of the business's most 
valuable assets, 2) it can't afford to have unauthorized access to the data, 
3) it can't afford to lose access to the data, 4) encryption is used to help 
prevent unauthorized access to the data, 5) if the encryption keys are 
protected by a TCPA chip, are the encryption keys recoverable if the TCPA 
chip fails?

--
Anne & Lynn Wheeler   http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Pat Farrell
At 07:02 PM 12/15/2003 -0500, Jerrold Leichter wrote:
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.
A software only, networked smart card would solve the
chicken and egg problem. One such solution is
Tamper resistant method and apparatus, [Ellison], USPTO 6,073,237
(Do a patent number search at http://www.uspto.gov/patft/index.html)
Carl invented this as an alternative to Smartcards back in the SET
development days.
Pat

Pat Farrell [EMAIL PROTECTED]
http://www.pfarrell.com
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
On Mon, 15 Dec 2003, Jerrold Leichter wrote:

 | This is quite an advantage of smart cards.
 However, this advantage is there only because there are so few smart cards,
 and so few smart card enabled applications, around.

Strangely enough, Carl Ellison assumed that you would have at most one
smart card, anyway. I'd rather think you are right, here.

 Really secure mail *should* use its own smart card.  When I do banking, do
 I have to remove my mail smart card?  Encryption of files on my PC should
 be based on a smart card.  Do I have to pull that one out?  Does that mean
 I can't look at my own records while I'm talking to my bank?  If I can only
 have one smart card in my PC at a time, does that mean I can *never* cut and
 paste between my own records and my on-line bank statement?  To access my
 files and my employer's email system, do I have to trust a single
 smart card to hold both sets of secrets?

I agree with you: A good compromise between security and convenience is an
issue, when you are changing between different smart cards. E.g., I could
imagine using the smart card *once* when logging into my bank account,
and then only needing it, perhaps, to authorise a money transfer.

This is a difficult user interface issue, but something we should be able
to solve.

One problem of TCPA is the opposite user interface issue -- the user has
lost control over what is going on. (And I believe that this originates
much of the resistance against TCPA.)

 Ultimately, to be useful a trusted kernel has to be multi-purpose, for
 exactly the same reason we want a general-purpose PC, not a whole bunch
 of fixed- function appliances.  Whether this multi-purpose kernel will
 be inside the PC, or a separate unit I can unplug and take with me, is a
 separate issue. Given the current model for PC's, a separate key is
 probably a better approach.

Agreed!

 However, there are already experiments with PC in my pocket designs:
 A small box with the CPU, memory, and disk, which can be connected to a
 small screen to replace a palmtop, or into a unit with a big screen, a
 keyboard, etc., to become my desktop.  Since that small box would have
 all my data, it might make sense for it to have the trusted kernel.
 (Of course, I probably want *some* part to be separate to render the box
 useless if stolen.)

Agreed again!

 | There is nothing wrong with the idea of a trusted kernel, but trusted
 | means that some entity is supposed to trust the kernel (what else?). If
 | two entities, who do not completely trust each other, are supposed to both
 | trust such a kernel, something very very fishy is going on.
 Why?  If I'm going to use a time-shared machine, I have to trust that the
 OS will keep me protected from other users of the machine.  All the other
 users have the same demands.  The owner of the machine has similar demands.

Actually, all users have to trust the owner (or rather the sysadmin).

The key words are have to trust! As you wrote somewhere below:

 Part of the issue with TCPA is that the providers of the kernel that we
 are all supposed to trust blindly are also going to be among those who
 will use it heavily.  Given who those producers are, that level of trust
 is unjustifiable.

I entirely agree with you!

 | More than ten years ago, Chaum and Pedersen

[...]

 | +---------------+     +---------+     +---------------+
 | | Outside World | --- | Your PC | --- | TCPA-Observer |
 | +---------------+     +---------+     +---------------+
 |
 | TCPA mixes Your PC and the observer into one trusted kernel and is
 | thus open to abuse.

 I remember looking at this paper when it first appeared, but the details
 have long faded.  It's an alternative mechanism for creating trust:
 Instead of trusting an open, independently-produced, verified
 implementation, it uses cryptography to construct walls around a
 proprietary, non-open implementation that you have no reason to trust.

Please re-read the paper!

First, it is not a mechanism for *creating* trust.

It is rather a trust-avoidance mechanism! You are not trusting the
observer at all, and you don't need to. The outsider is not trusting you
or your PC at all, and she doesn't need to.

Second, how on earth did you get the impression that Chaum/Pedersen is
about proprietary, non-open implementations?

Nothing stops people from producing independent and verified
implementations. As a matter of fact, since people can concentrate on
writing independent and verified implementations for the software on Your
PC, providing an independently produced and verified implementation would
be much much simpler than ever providing such an implementation for the
TCPA hardware.

Independent implementations of the observer's soft- and hardware are
simpler than in the case of TCPA as well, but this is a minor issue. You
don't need to trust the observer, so you don't care about independent and
verified implementations.

With a Chaum/Pedersen style scheme, the 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-15 Thread Jerrold Leichter
|  Which brings up the interesting question:  Just why are the reactions to
|  TCPA so strong?  Is it because MS - who no one wants to trust - is
|  involved?  Is it just the pervasiveness:  Not everyone has a smart card,
|  but if TCPA wins out, everyone will have this lump inside of their
|  machine.
|
| There are two differences between TCPA-hardware and a smart card.
|
| The first difference is obvious. You can plug in and later remove a smart
| card at your will, at the point of your choice. Thus, for homebanking with
| bank X, you may use a smart card, for homebaning with bank Y you
| disconnect the smart card for X and use another one, and before online
| gambling you make sure that none of your banking smart cards is connected
| to your PC. With TCPA, you have much less control over the kind of stuff
| you are using.
|
| This is quite an advantage of smart cards.
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.

Really secure mail *should* use its own smart card.  When I do banking, do
I have to remove my mail smart card?  Encryption of files on my PC should
be based on a smart card.  Do I have to pull that one out?  Does that mean
I can't look at my own records while I'm talking to my bank?  If I can only
have one smart card in my PC at a time, does that mean I can *never* cut and
paste between my own records and my on-line bank statement?  To access my
files and my employer's email system, do I have to trust a single
smart card to hold both sets of secrets?

I just don't see this whole direction of evolution as being viable.  Oh,
we'll pass through that stage - and we'll see products that let you connect
multiple smart cards at once, each guaranteed secure from the others.  But
that kind of add-on is unlikely to really *be* secure.

Ultimately, to be useful a trusted kernel has to be multi-purpose, for exactly
the same reason we want a general-purpose PC, not a whole bunch of fixed-
function appliances.  Whether this multi-purpose kernel will be inside the PC,
or a separate unit I can unplug and take with me, is a separate issue. Given
the current model for PC's, a separate key is probably a better approach.
However, there are already experiments with PC in my pocket designs:  A
small box with the CPU, memory, and disk, which can be connected to a small
screen to replace a palmtop, or into a unit with a big screen, a keyboard,
etc., to become my desktop.  Since that small box would have all my data, it
might make sense for it to have the trusted kernel.  (Of course, I probably
want *some* part to be separate to render the box useless if stolen.)

| The second point is perhaps less obvious, but may be more important.
| Usually, *your* PC hard- and software is supposed to to protect *your*
| assets and satisfy *your* security requirements. The trusted hardware
| add-on in TCPA is supposed to protect an *outsider's* assets and satisfy
| the *outsider's* security needs -- from you.
|
| A TCPA-enhanced PC is thus the servant of two masters -- your servant
| and the outsider's. Since your hardware connects to the outsider directly,
| you can never be sure whether it works *against* you by giving the
| outsider more information about you than it should (from your point if
| view).
|
| There is nothing wrong with the idea of a trusted kernel, but trusted
| means that some entity is supposed to trust the kernel (what else?). If
| two entities, who do not completely trust each other, are supposed to both
| trust such a kernel, something very very fishy is going on.
Why?  If I'm going to use a time-shared machine, I have to trust that the
OS will keep me protected from other users of the machine.  All the other
users have the same demands.  The owner of the machine has similar demands.

The same goes for any shared resource.  A trusted kernel should provide some
isolation guarantees among contexts.  These guarantees should be independent
of the detailed nature of the contexts.  I think we understand pretty well
what the *form* of these guarantees should be.  We do have problems actually
implementing such guarantees in a trustworthy fashion, however.

Part of the issue with TCPA is that the providers of the kernel that we are
all supposed to trust blindly are also going to be among those who will use it
heavily.  Given who those producers are, that level of trust is unjustifiable.

However, suppose that TCPA (or something like it) were implemented entirely by
independent third parties, using open techniques, and that they managed to
produce both a set of definitions of isolation, and an implementation, that
were widely seen to correctly specify, embody, and enforce strict protection.
How many of the criticisms of TCPA would that mute?  Some:  Given open
standards, a Linux TCPA-based computing platform could be produced.
Microsoft's access to the trusted kernel would be exactly the same as
everyone else's; there would be no 

Re: example: secure computing kernel needed

2003-12-14 Thread Paul A.S. Ward
I'm not sure why no one has considered the PC banking problem to be a
justification for secure computing.  Specifically, how does a user know
their computer has not been tampered with when they wish to use it for
banking access.
Paul

John S. Denker wrote:

Previous discussions of secure computing technology have
been in some cases sidetracked and obscured by extraneous
notions such as
 -- Microsoft is involved, therefore it must be evil.
 -- The purpose of secure computing is DRM, which is
intrinsically evil ... computers must be able to
copy anything anytime.
Now, in contrast, here is an application that begs for
a secure computing kernel, but has nothing to do with
microsoft and nothing to do with copyrights.
Scenario:  You are teaching chemistry in a non-anglophone
country.  You are giving an exam to see how well the
students know the periodic table.
 -- You want to allow students to use their TI-83 calculators
for *calculating* things.
 -- You want to allow the language-localization package.
 -- You want to disallow the app that stores the entire
periodic table, and all other apps not explicitly
approved.
The hardware manufacturer (TI) offers a little program
that purports to address this problem
  http://education.ti.com/us/product/apps/83p/testguard.html
but it appears to be entirely non-cryptologic and therefore
easily spoofed.
I leave it as an exercise for the reader to design a
calculator with a secure kernel that is capable of
certifying something to the effect that no apps and
no data tables (except for ones with the following
hashes) have been accessible during the last N hours.
Note that I am *not* proposing reducing the functionality
of the calculator in any way.  Rather I am proposing a
purely additional capability, namely the just-mentioned
certification capability.
I hope this example will advance the discussion of secure
computing.  Like almost any powerful technology, we need
to discuss
 -- the technology *and*
 -- the uses to which it will be put
... but we should not confuse the two.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to 
[EMAIL PROTECTED]


--

Paul A.S. Ward, Assistant Professor  Email: [EMAIL PROTECTED]
University of Waterloo  [EMAIL PROTECTED]
Department of Computer Engineering   Tel: +1 (519) 888-4567 ext.3127
Waterloo, OntarioFax: +1 (519) 746-3077
Canada N2L 3G1   URL: http://www.ccng.uwaterloo.ca/~pasward


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Bill Stewart
At 02:41 PM 12/14/2003 +, Dave Howe wrote:
Paul A.S. Ward wrote:
 I'm not sure why no one has considered the PC banking problem to be a
 justification for secure computing.  Specifically, how does a user
 know their computer has not been tampered with when they wish to use
 it for banking access.
I think PC banking is an argument *against* Secure Computing as currently
proposed - there is no way to discover whether there is a nasty running in
protected memory, or to remove it if there is.
Agreed.  It's a better argument for booting from a known CDROM distribution.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Anne Lynn Wheeler
At 07:25 PM 12/11/2003 -0500, Paul A.S. Ward wrote:
I'm not sure why no one has considered the PC banking problem to be a
justification for secure computing.  Specifically, how does a user know
their computer has not been tampered with when they wish to use it for
banking access.
actually the EU FINREAD (financial reader) standard is quite directed at 
this area: basically a secure entry/display/token-interface device. part of 
the issue is preventing the skimming of any PIN entry, which must be assumed 
possible with just about all keyboard-based entry (aka a tamper evident 
device ... supposedly somewhat the consumer equivalent of the TSM, trusted 
security module, and the tamper evident guidelines for point-of-sale 
terminals). In effect, finread isolates some set of secure components into a 
tamper evident housing that has something akin to a trusted security module.

the other aspect somewhat shows up in the digital signature area. 
fundamentally a digital signature may be used for authentication (and 
message integrity) ... but not, by itself, as agreement in the legal 
signature sense. the issue is how to create an environment/infrastructure 
for supporting both straightforward authentication as well as 
intention/agreement.

in theory finread has the ability to securely display the value of a 
transaction (and possibly other necessary details) and then requires a PIN 
entry after the display as evidence of

1) something you know authentication
2) being able to infer agreement with the transaction.
pretty much assumed is that finread implies some sort of token acceptor 
device ... which in turn implies a something you have token authentication.

so finread is attempting both to address two-factor authentication (and 
possibly three if biometric is also supported) and to establish some 
environment related to inferring agreement/intention/etc. as required per 
legal signature.

possibly overlooked in the base eu finread work is being able to prove that 
the transaction actually took place with a real finread device as opposed 
to some other kind of environment. In the (financial standard) X9A10 
working group on the X9.59 financial standard for all electronic retail 
payments we spent some amount of time on not precluding that the signing 
environment could also sign the transaction i.e.

1) amount displayed on the secure display,
2) pin/biometric securely entered (after the display occurs),
3) token digitally signs (after pin/biometric entered),
4) finread terminal digitally signs.
the 2nd and 3rd items (alone) are two (or three) factor authentication; 
however, in conjunction with the first and fourth items, they provide some 
level of assurance that the person agrees with the transaction.
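
A sketch of those four steps, with both the consumer's token and the finread
terminal signing the same displayed transaction.  The key types, the PIN, and
the transaction string are invented for illustration; this is not the actual
FINREAD or X9.59 message format.

# Sketch of the flow: display, PIN entry, token signature, terminal countersignature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

token_key = Ed25519PrivateKey.generate()      # inside the consumer's card/token
terminal_key = Ed25519PrivateKey.generate()   # inside the finread terminal

transaction = b"pay 49.95 EUR to merchant 314159"

print("TERMINAL DISPLAY:", transaction.decode())   # 1) amount shown on secure display
entered_pin = "1234"                                # 2) PIN entered after the display
if entered_pin == "1234":                           #    (simulated check)
    token_sig = token_key.sign(transaction)        # 3) token digitally signs
    terminal_sig = terminal_key.sign(transaction)  # 4) terminal digitally signs

    # The relying party verifies both: authentication from the token, plus some
    # evidence that a finread-class device handled the display and PIN entry.
    token_key.public_key().verify(token_sig, transaction)
    terminal_key.public_key().verify(terminal_sig, transaction)
    print("both signatures verify")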

lots of past finread references:
http://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? 
Photo ID's and Payment Infrastructure
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, 
here's your private key
http://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
http://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative 
to PKI?
http://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and 
their users [was Re: Cryptogram:  Palladium Only for DRM]
http://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has 
conspicuously failed to fix
http://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
http://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel 
Borenstein: Carnivore's Magic Lantern
http://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
http://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
http://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
http://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
http://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet 
Banking

example: secure computing kernel needed

2003-12-11 Thread John S. Denker
Previous discussions of secure computing technology have
been in some cases sidetracked and obscured by extraneous
notions such as
 -- Microsoft is involved, therefore it must be evil.
 -- The purpose of secure computing is DRM, which is
intrinsically evil ... computers must be able to
copy anything anytime.
Now, in contrast, here is an application that begs for
a secure computing kernel, but has nothing to do with
microsoft and nothing to do with copyrights.
Scenario:  You are teaching chemistry in a non-anglophone
country.  You are giving an exam to see how well the
students know the periodic table.
 -- You want to allow students to use their TI-83 calculators
for *calculating* things.
 -- You want to allow the language-localization package.
 -- You want to disallow the app that stores the entire
periodic table, and all other apps not explicitly
approved.
The hardware manufacturer (TI) offers a little program
that purports to address this problem
  http://education.ti.com/us/product/apps/83p/testguard.html
but it appears to be entirely non-cryptologic and therefore
easily spoofed.
I leave it as an exercise for the reader to design a
calculator with a secure kernel that is capable of
certifying something to the effect that no apps and
no data tables (except for ones with the following
hashes) have been accessible during the last N hours.
Note that I am *not* proposing reducing the functionality
of the calculator in any way.  Rather I am proposing a
purely additional capability, namely the just-mentioned
certification capability.
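
One possible reading of the exercise, sketched below: the secure kernel keeps
a record of every app and data-table image loaded since the last reset, and it
will only sign a "clean" statement if every recorded hash is on the exam's
approved list.  The log format, key handling, and names are assumptions for
illustration only, not a proposal for TI's actual devices.

# Sketch: certify that only approved (whitelisted) images were accessible.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

APPROVED_HASHES = {
    hashlib.sha256(b"language-localization package").hexdigest(),
    hashlib.sha256(b"plain calculator firmware").hexdigest(),
}

kernel_key = Ed25519PrivateKey.generate()   # held by the calculator's secure kernel

def certify(loaded_images):
    # Refuse to certify if anything outside the approved list was accessible.
    seen = {hashlib.sha256(img).hexdigest() for img in loaded_images}
    if not seen <= APPROVED_HASHES:
        raise ValueError("unapproved app or data table was accessible")
    statement = ("only approved hashes accessible: " + ",".join(sorted(seen))).encode()
    return statement, kernel_key.sign(statement)

statement, sig = certify([b"plain calculator firmware"])
kernel_key.public_key().verify(sig, statement)   # the teacher checks this signature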
I hope this example will advance the discussion of secure
computing.  Like almost any powerful technology, we need
to discuss
 -- the technology *and*
 -- the uses to which it will be put
... but we should not confuse the two.
-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]