Re: Quantum computers inch closer?

2002-08-31 Thread AARG!Anonymous

Bear writes:
 In this case you'd need to set up the wires-and-gates model
 in the QC for two ciphertext blocks, each attached to an
 identical plaintext-recognizer function and attached to the
 same key register.  Then you set up the entangled state,
 and collapse the eigenvector on the eigenstate where the
 ciphertext for block A and block B is produced, and the
 plaintext recognizer for both block A and block B return
 1, and then you'd read the plaintext and key out of the
 appropriate locations (dots?) in the qchip.

The problem is that you can't forcibly collapse the state vector into your
wished-for eigenstate, the one where the plaintext recognizer returns a 1.
Instead, it will collapse into a random state, associated with a random
key, and it is overwhelmingly likely that this key is one for which the
recognizer returns 0.
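
To put rough numbers on it (my arithmetic, not Bear's): measuring a
uniform superposition over an n-bit keyspace yields each key with
probability 2^-n,

    |psi> = 2^(-n/2) * sum_k |k>|r(k)>
    Pr[measure the one key k* with r(k*) = 1] = (2^(-n/2))^2 = 2^-n

so for a 128-bit key that is 2^-128 per run.  The best known generic
quantum improvement, Grover's algorithm, amplifies the desired
eigenstate but still needs on the order of 2^(n/2) iterations - a
square-root speedup, not a free collapse into the answer.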




Re: Cryptographic privacy protection in TCPA

2002-08-17 Thread AARG!Anonymous

Dr. Mike wrote, patiently, persistently and truthfully:

 On Fri, 16 Aug 2002, AARG! Anonymous wrote:

  Here are some more thoughts on how cryptography could be used to
  enhance user privacy in a system like TCPA.  Even if the TCPA group
  is not receptive to these proposals, it would be useful to have an
  understanding of the security issues.  And the same issues arise in
  many other kinds of systems which use certificates with some degree
  of anonymity, so the discussion is relevant even beyond TCPA.

 OK, I'm going to discuss it from a philosophical perspective.
 i.e. I'm just having fun with this.

Fine, but let me put this into perspective.  First, although the
discussion is in terms of a centralized issuer, the same issues arise if
there are multiple issuers, even in a web-of-trust situation.  So don't
get fixated on the fact that my analysis assumed a single issuer -
that was just for simplicity in what was already a very long message.

The abstract problem to be solved is this: given that there is some
property which is being asserted via cryptographic certificates
(credentials), we want to be able to show possession of that property
in an anonymous way.  In TCPA the property is being a valid TPM.
Another example would be a credit rating agency which can give out a
"good credit risk" credential.  You want to be able to show it
anonymously in some cases.  Yet another case would be a state driver's
license agency which gives out an "over age 21" credential, again one
you want to be able to show anonymously.

This is actually one of the oldest problems which proponents of
cryptographic anonymity attempted to address, going back to David Chaum's
seminal work.  TCPA could represent the first wide-scale example of
cryptographic credentials being shown anonymously.  That in itself ought
to be of interest to cypherpunks.  Unfortunately TCPA is not going for
full cryptographic protection of anonymity, but relying on Trusted Third
Parties in the form of Privacy CAs.  My analysis suggests that although
there are a number of solutions in the cryptographic literature, none of
them are ideal in this case.  Unless we can come up with a really strong
solution that satisfies all the security properties, it is going to be
hard to make a case that the use of TTPs is a mistake.


 I don't like the idea that users *must* have a certificate.  Why
 can't each person develop their own personal levels of trust and
 associate it with their own public key?  Using multiple channels,
 people can prove their key is their word.  If any company wants to
 associate a certificate with a customer, that can have lots of meanings
 to lots of other people.  I don't see the usefulness of a permanent
 certificate.  Human interaction over electronic media has to deal
 with monkeys, because that's what humans are :-)

A certificate is a standardized and unforgeable statement that some
person or key has a particular property, that's all.  The kind of system
you are talking about, of personal knowledge and trust, can't really be
generalized to an international economy.


  Actually, in this system the Privacy CA is not really protecting
  anyone's privacy, because it doesn't see any identities.  There is no
  need for multiple Privacy CAs and it would make more sense to merge
  the Privacy CA and the original CA that issues the permanent certs.
  That way there would be only one agency with the power to forge keys,
  which would improve accountability and auditability.

 I really, REALLY, *REALLY*, don't like the idea of one entity having
 the ability to create or destroy any persons ability to use their
 computer at whim.  You are suggesting that one person (or small group)
 has the power to create (or not) and revoke (or not!) any and all TPM's!

 I don't know how to describe my astoundment at the lack of comprehension
 of history.

Whoever makes a statement about a property should have the power to
revoke it.  I am astounded that you think this is a radical notion.

If one or a few entities become widely trusted to make and revoke
statements that people care about, it is because they have earned that
trust.  If the NY Times says something is true, people tend to believe it.

If Intel says that such-and-such a key is in a valid TPM, people may
choose to believe this based on Intel's reputation.  If Intel later
determines that the key has been published on the net and so can no
longer be presumed to be a TPM key, it revokes its statement.

This does not mean that Intel would destroy any person's ability to use
their computer on a whim.  First, having the TPM cert revoked would not
destroy your ability to use your computer; at worst you could no longer
persuade other people of your trustworthiness.  And second, Intel would
not make this kind of decision on a whim, any more than the NY Times
would publish libelous articles on a whim; doing so would risk destroying
the company's reputation, one of its most valuable assets.

I can't

Cryptographic privacy protection in TCPA

2002-08-16 Thread AARG!Anonymous

Here are some more thoughts on how cryptography could be used to
enhance user privacy in a system like TCPA.  Even if the TCPA group
is not receptive to these proposals, it would be useful to have an
understanding of the security issues.  And the same issues arise in
many other kinds of systems which use certificates with some degree
of anonymity, so the discussion is relevant even beyond TCPA.

The basic requirement is that users have a certificate on a long-term key
which proves they are part of the system, but they don't want to show that
cert or that key for most of their interactions, due to privacy concerns.
They want to have their identity protected, while still being able to
prove that they do have the appropriate cert.  In the case of TCPA the
key is locked into the TPM chip; it is the "endorsement key", and the
cert is called the "endorsement certificate", expected to be issued by
the chip manufacturer.  Let us call the originating cert issuer "the CA"
in this document, and the long-term cert the "permanent certificate".

A secondary requirement is for some kind of revocation in the case
of misuse.  For TCPA this would mean cracking the TPM and extracting
its key.  I can see two situations where this might lead to revocation.
The first is a global crack, where the extracted TPM key is published
on the net, so that everyone can falsely claim to be part of the TCPA
system.  That's a pretty obvious case where the key must be revoked for
the system to have any integrity at all.  The second case is a local
crack, where a user has extracted his TPM key but keeps it secret, using
it to cheat the TCPA protocols.  This would be much harder to detect,
and perhaps equally significantly, much harder to prove.  Nevertheless,
some way of responding to this situation is a desirable security feature.

The TCPA solution is to use one or more Privacy CAs.  You supply your
permanent cert and a new short-term identity key; the Privacy CA
validates the cert and then signs your key, giving you a new cert on the
identity key.  For routine use on the net, you show your identity cert
and use your identity key; your permanent key and cert are never shown
except to the Privacy CA.
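
To make the flow concrete, here is a minimal sketch in Python.  The
names and message formats are my own invention, not the TCPA protocol,
and a toy MAC stands in for real public-key signatures:

    import hashlib, hmac
    from dataclasses import dataclass

    @dataclass
    class Cert:
        subject_key: bytes
        issuer: str
        sig: bytes

    def sign(priv: bytes, data: bytes) -> bytes:
        # Toy MAC standing in for a real public-key signature.
        return hmac.new(priv, data, hashlib.sha256).digest()

    def privacy_ca_issue(perm: Cert, identity_key: bytes, ca_priv: bytes,
                         pca_priv: bytes, revoked: set) -> Cert:
        # Validate the manufacturer's signature on the permanent cert,
        # refuse revoked endorsement keys, then certify the identity key.
        assert hmac.compare_digest(perm.sig, sign(ca_priv, perm.subject_key))
        assert perm.subject_key not in revoked
        return Cert(identity_key, "PrivacyCA", sign(pca_priv, identity_key))

Both dangers discussed below show up directly in this sketch: the CA
sees the permanent key and the identity key in the same request, and
whoever holds pca_priv can mint certs at will.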

This means that the Privacy CA has the power to revoke your anonymity;
and worse, he (or more precisely, his key) has the power to create bogus
identities.  On the plus side, the Privacy CA can check a revocation list
and not issue a new identity cert if the permanent key has been revoked.
And if someone has done a local crack and the evidence is strong enough,
the Privacy CA can revoke his anonymity and allow his permanent key to
be revoked.

Let us now consider some cryptographic alternatives.  The first is to
use Chaum blinding for the Privacy CA interaction.  As before, the user
supplies his permanent cert to prove that he is a legitimate part of
the system, but instead of providing an identity key to be certified,
he supplies it in blinded form.  The Privacy CA signs this blinded key,
the user strips the blinding, and he is left with a cert from the Privacy
CA on his identity key.  He uses this as in the previous example, showing
his privacy cert and using his privacy key.
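
The blinding step itself is just textbook RSA; a toy sketch (small
numbers, no padding, purely to show the algebra):

    import math, secrets

    def blind_sign(m, n, e, d):
        # User picks a blinding factor r invertible mod n.
        while True:
            r = secrets.randbelow(n - 2) + 2
            if math.gcd(r, n) == 1:
                break
        blinded = (m * pow(r, e, n)) % n   # the CA sees only this
        s_blind = pow(blinded, d, n)       # CA signs the blinded value
        s = (s_blind * pow(r, -1, n)) % n  # user strips the blinding
        assert s == pow(m, d, n)           # same as an ordinary signature
        return s

The Privacy CA ends up signing the identity key without ever seeing it,
which is why it loses the power to link identities while keeping the
power to mint them.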

In this system, the Privacy CA no longer has the power to revoke your
anonymity, because he only saw a blinded version of your identity key.
However, the Privacy CA retains the power to create bogus identities,
so the security risk is still there.  If there has been a global crack,
and a permanent key has been revoked, the Privacy CA can check the
revocation list and prevent that user from acquiring new identities,
so revocation works for global cracks.  However, for local cracks,
where there is suspicious behavior, there is no way to track down the
permanent key associated with the cheater.  All his interactions are
done with an identity key which is unlinkable.  So there is no way to
respond to local cracks and revoke the keys.

Actually, in this system the Privacy CA is not really protecting
anyone's privacy, because it doesn't see any identities.  There is no
need for multiple Privacy CAs and it would make more sense to merge
the Privacy CA and the original CA that issues the permanent certs.
That way there would be only one agency with the power to forge keys,
which would improve accountability and auditability.

One problem with revocation in both of these systems, especially the one
with Chaum blinding, is that existing identity certs (from before the
fraud was detected) may still be usable.  It is probably necessary to
have identity certs be valid for only a limited time so that users with
revoked keys are not able to continue to use their old identity certs.

Brands credentials provide a more flexible and powerful approach than
Chaum blinding which can potentially provide improvements.  The basic
setup is the same: users would go to a Privacy CA and show their
permanent cert, getting a new cert on an identity key which they would
use on the net.  The difference is that 

Re: Overcoming the potential downside of TCPA

2002-08-15 Thread AARG!Anonymous

Joe Ashwood writes:

 Actually that does nothing to stop it. Because of the construction of TCPA,
 the private keys are registered _after_ the owner receives the computer,
 this is the window of opportunity against that as well.

Actually, this is not true for the endorsement key, PUBEK/PRIVEK, which
is the main TPM key, the one which gets certified by the TPM Entity.
That key is generated only once on a TPM, before ownership, and must
exist before anyone can take ownership.  For reference, see section 9.2:
"The first call to TPM_CreateEndorsementKeyPair generates the endorsement
key pair. After a successful completion of TPM_CreateEndorsementKeyPair
all subsequent calls return TCPA_FAIL."  Also section 9.2.1 shows that
no ownership proof is necessary for this step, which is because there is
no owner at that time.  Then look at section 5.11.1, on taking ownership:
the user "must encrypt the values using the PUBEK."  So the PUBEK must
exist before anyone can take ownership.
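
A toy model of the create-once rule quoted above ("TCPA_FAIL" here is
just a string standing in for the spec's error return):

    class ToyTPM:
        def __init__(self):
            self.pubek = None

        def create_endorsement_key_pair(self):
            if self.pubek is not None:
                return "TCPA_FAIL"    # all subsequent calls fail
            self.pubek = "PUBEK"      # generated exactly once, pre-ownership
            return self.pubek

        def take_ownership(self, values_encrypted_to_pubek):
            # Section 5.11.1: ownership requires the PUBEK to already exist.
            assert self.pubek is not None
            self.owner_blob = values_encrypted_to_pubek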

 The worst case for
 cost of this is to purchase an additional motherboard (IIRC Fry's has them
 as low as $50), giving the ability to present a purchase. The
 virtual-private key is then created, and registered using the credentials
 borrowed from the second motherboard. Since TCPA doesn't allow for direct
 remote queries against the hardware, the virtual system will actually have
 first shot at the incoming data. That's the worst case.

I don't quite follow what you are proposing here, but by the time you
purchase a board with a TPM chip on it, it will have already generated
its PUBEK and had it certified.  So you should not be able to transfer
a credential of this type from one board to another one.

 The expected case;
 you pay a small registration fee claiming that you accidentally wiped your
 TCPA. The best case, you claim you accidentally wiped your TCPA, they
 charge you nothing to remove the record of your old TCPA, and replace it
 with your new (virtualized) TCPA. So at worst this will cost $50. Once
 you've got a virtual setup, that virtual setup (with all its associated
 purchased rights) can be replicated across an unlimited number of computers.
 
 The important part for this, is that TCPA has no key until it has an owner,
 and the owner can wipe the TCPA at any time. From what I can tell this was
 designed for resale of components, but is perfectly suitable as a point of
 attack.

Actually I don't see a function that will let the owner wipe the PUBEK.
He can wipe the rest of the TPM but that field appears to be set once,
retained forever.

For example, section 8.10: "Clear is the process of returning the TPM to
factory defaults."  But a couple of paragraphs later: "All TPM volatile
and non-volatile data is set to default value except the endorsement
key pair."

So I don't think your fraud will work.  Users will not wipe their
endorsement keys, accidentally or otherwise.  If a chip is badly enough
damaged that the PUBEK is lost, you will need a hardware replacement,
as I read the spec.

Keep in mind that I only started learning this stuff a few weeks ago,
so I am not an expert, but this is how it looks to me.




Re: Palladium: technical limits and implications

2002-08-12 Thread AARG!Anonymous

Adam Back writes:
 +---------------+------------+
 | trusted-agent | user mode  |
 | space         | app space  |
 | (code         +------------+
 |  compartment) | supervisor |
 |               | mode / OS  |
 +---------------+------------+
 |        ring -1 / TOR       |
 +----------------------------+
 | hardware / SCP key manager |
 +----------------------------+

I don't think this works.  According to Peter Biddle, the TOR can be
launched even days after the OS boots.  It does not underlie the ordinary
user mode apps and the supervisor mode system call handlers and device
drivers.

        +---------------+------------+
        | trusted-agent | user mode  |
        | space         | app space  |
        | (code         +------------+
        |  compartment) | supervisor |
        |               | mode / OS  |
+-----+ +---------------+------------+
| SCP |-| ring -1 / TOR |
+-----+ +---------------+



This is more how I would see it.  The SCP is more like a peripheral
device, a crypto co-processor, that is managed by the TOR.  Earlier you
quoted Seth's blog:

| The nub is a kind of trusted memory manager, which runs with more
| privilege than an operating system kernel. The nub also manages access
| to the SCP.

as justification for putting the nub (TOR) under the OS.  But I think in
this context "more privilege" could just refer to the fact that it is in
the secure memory, which is only accessed by this ring -1 (or ring 0, or
whatever you want to call it).  It doesn't follow that the nub has anything
to do with the OS proper.  If the OS can run fine without it, as I think
you agreed, then why would the entire architecture have to reorient itself
once the TOR is launched? 

In other words, isn't my version simpler, as it adjoins the column at
the left to the pre-existing column at the right, when the TOR launches,
days after boot?  Doesn't it require less instantaneous, on-the-fly,
reconfiguration of the entire structure of the Windows OS at the moment
of TOR launch?  And what, if anything, does my version fail to accomplish
that we know that Palladium can do?


 Integrity Metrics in a given level are computed by the level below.

 The TOR starts Trusted Agents, the Trusted Agents are outside the OS
 control.  Therefore a remote application based on remote attestation
 can know about the integrity of the trusted-agent, and TOR.

 ring -1/TOR is computed by SCP/hardware; Trusted Agent is computed by
 TOR;

I had thought the hardware might also produce the metrics for trusted
agents, but you could be right that it is the TOR which does so.
That would be consistent with the incremental extension of trust
philosophy which many of these systems seem to follow.

 The parallel stack to the right: OS is computed by TOR; Application is
 computed OS.

No, that doesn't make sense.  Why would the TOR need to compute a metric
of the OS?  Peter has said that Palladium does not give information about
other apps running on your machine:

: Note that in Pd no one but the user can find out the totality of what SW is
: running except for the nub (aka TOR, or trusted operating root) and any
: required trusted services. So a service could say "I will only communicate
: with this app" and it will know that the app is what it says it is and
: hasn't been perverted. The service cannot say "I won't communicate with this
: app if this other app is running" because it has no way of knowing for sure
: if the other app isn't running.


 So for general applications you still have to trust the OS, but the OS
 could itself have its integrity measured by the TOR.  Of course given
 the rate of OS exploits especially in Microsoft products, it seems
 likely that the aspect of the OS that checks integrity of loaded
 applications could itself be tampered with using a remote exploit.

Nothing Peter or anyone else has said indicates that this is a property of
Palladium, as far as I can remember.

 Probably the latter problem is the reason Microsoft introduced ring -1
 in palladium (it seems to be missing in TCPA).

No, I think it is there to prevent debuggers and supervisor-mode drivers
from manipulating secure code.  TCPA is more of a whole-machine spec
dealing with booting an OS, so it doesn't have to deal with the question
of running secure code next to insecure code.




Re: responding to claims about TCPA

2002-08-12 Thread AARG!Anonymous

David Wagner wrote:
 To respond to your remark about bias: No, bringing up Document Revocation
 Lists has nothing to do with bias.  It is only right to seek to understand
 the risks in advance.  I don't understand why you seem to insinuate
 that bringing up the topic of Document Revocation Lists is an indication
 of bias.  I sincerely hope that I misunderstood you.

I believe you did, because if you look at what I actually wrote, I did not
say that bringing up the topic of DRLs is an indication of bias:

 The association of TCPA with SNRLs is a perfect example of the bias and
 sensationalism which has surrounded the critical appraisals of TCPA.
 I fully support John's call for a fair and accurate evaluation of this
 technology by security professionals.  But IMO people like Ross Anderson
 and Lucky Green have disqualified themselves by virtue of their wild and
 inaccurate public claims.  Anyone who says that TCPA has SNRLs is making
 a political statement, not a technical one.

My core claim is the last sentence.  It's one thing to say, as you
are, that TCPA could make applications implement SNRLs more securely.
I believe that is true, and if this statement is presented in the context
of dangers of TCPA or something similar, it would be appropriate.
But even then, for a fair analysis, it should make clear that SNRLs can
be done without TCPA, and it should go into some detail about just how
much more effective a SNRL system would be with TCPA.  (I will write more
about this in responding to Joseph Ashwood.)

And to be truly unbiased, it should also talk about good uses of TCPA.

If you look at Ross Anderson's TCPA FAQ at
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html, he writes (question 4):

: When you boot up your PC, Fritz takes charge. He checks that the boot
: ROM is as expected, executes it, measures the state of the machine;
: then checks the first part of the operating system, loads and executes
: it, checks the state of the machine; and so on. The trust boundary, of
: hardware and software considered to be known and verified, is steadily
: expanded. A table is maintained of the hardware (audio card, video card
: etc) and the software (O/S, drivers, etc); Fritz checks that the hardware
: components are on the TCPA approved list, that the software components
: have been signed, and that none of them has a serial number that has
: been revoked.

He is not saying that TCPA could make SNRLs more effective.  He says
that Fritz "checks... that none of [the software components] has a
serial number that has been revoked."  He is flatly stating that the
TPM chip checks a serial number revocation list.  That is both biased
and factually untrue.

Ross's whole FAQ is incredibly biased against TCPA.  I don't see how
anyone can fail to see that.  If it were titled "FAQ about Dangers of
TCPA" at least people would be warned that they were getting a one-sided
presentation.  But it is positively shameful for a respected security
researcher like Ross Anderson to pretend that this document is giving
an unbiased and fair description.

I would be grateful if someone who disagrees with me, who thinks that
Ross's FAQ is fair and even-handed, would speak up.  It amazes me that
people can see things so differently.

And Lucky's slide presentation, http://www.cypherpunks.to, is if anything
even worse.  I already wrote about this in detail so I won't belabor
the point.  Again, I would be very curious to hear from someone who
thinks that his presentation was unbiased.




Re: responding to claims about TCPA

2002-08-10 Thread AARG!Anonymous

AARG! wrote:
 I asked Eric Murray, who knows something about TCPA, what he thought
 of some of the more ridiculous claims in Ross Anderson's FAQ (like the
 SNRL), and he didn't respond.  I believe it is because he is unwilling
 to publicly take a position in opposition to such a famous and respected
 figure.

John Gilmore replied:

 Many of the people who know something about TCPA are constrained
 by NDA's with Intel.  Perhaps that is Eric's problem -- I don't know.

Maybe, but he could reply just based on public information.  Despite this
he was unable or unwilling to challenge Ross Anderson.


 One of the things I told them years ago was that they should draw
 clean lines between things that are designed to protect YOU, the
 computer owner, from third parties; versus things that are designed to
 protect THIRD PARTIES from you, the computer owner.  This is so
 consumers can accept the first category and reject the second, which,
 if well-informed, they will do.

I don't agree with this distinction.  If I use a smart card chip that
has a private key on it that won't come off, is that protecting me from
third parties, or vice versa?  If I run a TCPA-enhanced Gnutella that
keeps the RIAA from participating and easily finding out who is running
supernodes (see http://slashdot.org/article.pl?sid=02/08/09/2347245 for
the latest crackdown), I benefit, even though the system technically is
protecting the data from me.

I wrote earlier that if people were honest, trusted computing would not
be necessary, because they would keep their promises.  Trusted computing
allows people to prove to remote users that they will behave honestly.
How does that fit into your dichotomy?  Society has evolved a myriad
mechanisms to allow people to give strong evidence that they will keep
their word; without them, trade and commerce would be impossible.  By your
logic, these protect third parties from you, and hence should be rejected.
You would discard the economic foundation for our entire world.


 TCPA began in that protect third parties from the owner category,
 and is apparently still there today.  You won't find that out by
 reading Intel's modern public literature on TCPA, though; it doesn't
 admit to being designed for, or even useful for, DRM.  My guess is
 that they took my suggestion as marketing advice rather than as a
 design separation issue.  Pitch all your protect-third-party products
 as if they are protect-the-owner products was the opposite of what I
 suggested, but it's the course they (and the rest of the DRM industry)
 are on.  E.g. see the July 2002 TCPA faq at:

   http://www.trustedcomputing.org/docs/TPM_QA_071802.pdf

   3. Is the real goal of TCPA to design a TPM to act as a DRM or
  Content Protection device? 
   "No.  The TCPA wants to increase the trust ..." [blah blah blah]

 I believe that "No" is a direct lie.

David Grawrock of Intel has an interesting slide presentation on
TCPA at http://www.intel.com/design/security/tcpa/slides/index.htm.
His slide 3 makes a good point: "All 5 members had very different ideas
of what should and should not be added."  It's possible that some of
the differences in perspective and direction on TCPA are due to the
several participants wanting to move in different ways.  Some may have
been strictly focused on DRM; others may have had a more expansive
vision of how trust can benefit all kinds of distributed applications.
So it's not clear that you can speak of "the real goal" of TCPA, when
there are all these different groups with different ideas.

 Intel has removed the first
 public version 0.90 of the TCPA spec from their web site, but I have
 copies, and many of the examples in it mention DRM, e.g.:

   http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf  (still there)

 This TCPA white paper says that the goal is "ubiquity".  Another way to
 say that is monopoly.

Nonsense.  The web is ubiquitous, but is not a monopoly.

 The idea is to force any other choices out of
 the market, except the ones that the movie  record companies want.
 The first scenario (PDF page 7) states: "For example, before making
 content available to a subscriber, it is likely that a service
 provider will need to know that the remote platform is trustworthy."

That same language is in the "Credible Interoperability" document presently
on the web site at
http://www.trustedcomputing.org/docs/Credible_Interoperability_020702.pdf.
So I don't think there is necessarily any kind of a cover-up here.


   http://www.trustedpc.org/home/pdf/spec0818.pdf (gone now)

 Even this 200-page TCPA-0.90 specification, which is carefully written
 to be obfuscatory and misleading, leaks such gems as: "These features
 encourage third parties to grant access by the platform to
 information that would otherwise be denied to the platform" (page 14).
 "The 'protected store' feature...can hold and manipulate confidential
 data, and will allow the release or use of that data only in the
 presence of a particular 

Seth on TCPA at Defcon/Usenix

2002-08-10 Thread AARG!Anonymous

Seth Schoen of the EFF has a good blog entry about Palladium and TCPA
at http://vitanuova.loyalty.org/2002-08-09.html.  He attended Lucky's
presentation at DEF CON and also sat on the TCPA/Palladium panel at
the USENIX Security Symposium.

Seth has a very balanced perspective on these issues compared to most
people in the community.  It makes me proud to be an EFF supporter
(in fact I happen to be wearing my EFF T-shirt right now).

His description of how the Document Revocation List could work is
interesting as well.  Basically you would have to connect to a server
every time you wanted to read a document, in order to download a key
to unlock it.  Then if someone decided that the document needed
to un-exist, they would arrange for the server no longer to download
that key, and the document would effectively be deleted, everywhere.

I think this clearly would not be a feature that most people would accept
as an enforced property of their word processor.  You'd be unable to
read things unless you were online, for one thing.  And any document you
were relying on might be yanked away from you with no warning.  Such a
system would be so crippled that if Microsoft really did this for Word,
sales of vi would go through the roof.

It reminds me of an even better way for a word processor company to make
money: just scramble all your documents, then demand ONE MILLION DOLLARS
for the keys to decrypt them.  The money must be sent to a numbered
Swiss account, and the software checks with a server to find out when
the money has arrived.  Some of the proposals for what companies will
do with Palladium seem about as plausible as this one.

Seth draws an analogy with Acrobat, where the paying customers are
actually the publishers, the reader being given away for free.  So Adobe
does have incentives to put in a lot of DRM features that let authors
control publication and distribution.

But he doesn't follow his reasoning to its logical conclusion when dealing
with Microsoft Word.  That program is sold to end users - people who
create their own documents for the use of themselves and their associates.
The paying customers of Microsoft Word are exactly the ones who would
be screwed over royally by Seth's scheme.  So if we follow the money
as Seth in effect recommends, it becomes even more obvious that Microsoft
would never force Word users to be burdened with a DRL feature.

And furthermore, Seth's scheme doesn't rely on TCPA/Palladium.  At the
risk of aiding the fearmongers, I will explain that TCPA technology
actually allows for a much easier implementation, just as it does in so
many other areas.  There is no need for the server to download a key;
it only has to download an updated DRL, and the Word client software
could be trusted to delete anything that was revoked.  But the point
is, Seth's scheme would work just as well today, without TCPA existing.
As I quoted Ross Anderson saying earlier with regard to serial number
revocation lists, these features don't need TCPA technology.

So while I have some quibbles with Seth's analysis, on the whole it is
the most balanced that I have seen from someone who has no connection
with the designers (other than my own writing, of course).  A personal
gripe is that he referred to Lucky's "critics", plural, when I feel
all alone out here.  I guess I'll have to start using the royal "we".
But he redeemed himself by taking mild exception to Lucky's slide show,
which is a lot farther than anyone else has been willing to go in public.




Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread AARG!Anonymous

Anon wrote:
 You could even have each participant compile the program himself,
 but still each app can recognize the others on the network and
 cooperate with them.

Matt Crawford replied:
 Unless the application author can predict the exact output of the
 compilers, he can't issue a signature on the object code.  The
 compilers then have to be inside the trusted base, checking a
 signature on the source code and reflecting it somehow through a
 signature they create for the object code.

It's likely that only a limited number of compiler configurations would
be in common use, and signatures on the executables produced by each of
those could be provided.  Then all the app writer has to do is to tell
people, "get compiler version so-and-so and compile with that, and your
object will match the hash my app looks for."
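
A sketch of what that check could look like on the user's end (the
compiler tag and hash value are hypothetical):

    import hashlib

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    # Hypothetical table published by the app writer.
    published = {"gcc 2.95.3 -O2": "0123456789abcdef0123456789abcdef01234567"}

    if file_hash("./myapp") == published["gcc 2.95.3 -O2"]:
        print("object matches the hash the app looks for")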
DEI




[no subject]

2002-08-09 Thread AARG!Anonymous

Adam Back writes a very thorough analysis of possible consequences of the
amazing power of the TCPA/Palladium model.  He is clearly beginning to
get it as far as what this is capable of.  There is far more to this
technology than simple DRM applications.  In fact Adam has a great idea
for how this could finally enable selling idle CPU cycles while protecting
crucial and sensitive business data.  By itself this could be a killer
app for TCPA/Palladium.  And once more people start thinking about how to
exploit the potential, there will be no end to the possible applications.

Of course his analysis is spoiled by an underlying paranoia.  So let me
ask just one question.  How exactly is subversion of the TPM a greater
threat than subversion of your PC hardware today?  How do you know that
Intel or AMD don't already have back doors in their processors that
the NSA and other parties can exploit?  Or that Microsoft doesn't have
similar backdoors in its OS?  And similarly for all the other software
and hardware components that make up a PC today?

In other words, is this really a new threat?  Or are you unfairly blaming
TCPA for a problem which has always existed and always will exist?




Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread AARG!Anonymous

I want to follow up on Adam's message because, to be honest, I missed
his point before.  I thought he was bringing up the old claim that these
systems would give the TCPA root on your computer.

Instead, Adam is making a new point, which is a good one, but to
understand it you need a true picture of TCPA rather than the false one
which so many cypherpunks have been promoting.  Earlier Adam offered a
proposed definition of TCPA/Palladium's function and purpose:

 Palladium provides an extensible, general purpose programmable
 dongle-like functionality implemented by an ensemble of hardware and
 software which provides functionality which can, and likely will be
 used to expand centralised control points by OS vendors, Content
 Distributors and Governments.

IMO this is total bullshit, political rhetoric that is content-free
compared to the one I offered:

: Allow computers separated on the internet to cooperate and share data
: and computations such that no one can get access to the data outside
: the limitations and rules imposed by the applications.

It seems to me that my definition is far more useful and appropriate in
really understanding what TCPA/Palladium are all about.  Adam, what do
you think?

If we stick to my definition, you will come to understand that the purpose
of TCPA is to allow application writers to create closed spheres of trust,
where the application sets the rules for how the data is handled.  It's
not just DRM, it's Napster and banking and a myriad other applications,
each of which can control its own sensitive data such that no one can
break the rules.

At least, that's the theory.  But Adam points out a weak spot.  Ultimately
applications trust each other because they know that the remote systems
can't be virtualized.  The apps are running on real hardware which has
real protections.  But applications know this because the hardware has
a built-in key which carries a certificate from the manufacturer, who
is called the TPME in TCPA.  As the applications all join hands across
the net, each one shows his cert (in effect) and all know that they are
running on legitimate hardware.

So the weak spot is that anyone who has the TPME key can run a virtualized
TCPA, and no one will be the wiser.  With the TPME key they can create
their own certificate that shows that they have legitimate hardware,
when they actually don't.  Ultimately this lets them run a rogue client
that totally cheats, disobeys all the restrictions, shows the user all
of the data which is supposed to be secret, and no one can tell.

Furthermore, if people did somehow become suspicious about one particular
machine, with access to the TPME key the eavesdroppers can just create
a new virtual TPM and start the fraud all over again.

It's analogous to how someone with Verisign's key could masquerade as
any secure web site they wanted.  But it's worse because TCPA is almost
infinitely more powerful than PKI, so there is going to be much more
temptation to use it and to rely on it.

Of course, this will be inherently somewhat self-limiting as people learn
more about it, and realize that the security provided by TCPA/Palladium,
no matter how good the hardware becomes, will always be limited to
the political factors that guard control of the TPME keys.  (I say
"keys" because likely more than one company will manufacture TPM's.
Also in TCPA there are two other certifiers: one who certifies the
motherboard and computer design, and the other who certifies that the
board was constructed according to the certified design.  The NSA would
probably have to get all 3 keys, but this wouldn't be that much harder
than getting just one.  And if there are multiple manufacturers then
only 1 key from each of the 3 categories is needed.)

To protect against this, Adam offers various solutions.  One is to do
crypto inside the TCPA boundary.  But that's pointless, because if the
crypto worked, you probably wouldn't need TCPA.  Realistically most of the
TCPA applications can't be cryptographically protected.  Computing with
encrypted instances is a fantasy.  That's why we don't have all those
secure applications already.

Another is to use a web of trust to replace or add to the TPME certs.
Here's a hint.  Webs of trust don't work.  Either they require strong
connections, in which case they are too sparse, or they allow weak
connections, in which case they are meaningless and anyone can get in.

I have a couple of suggestions.  One early application for TCPA is in
closed corporate networks.  In that case the company usually buys all
the computers and prepares them before giving them to the employees.
At that time, the company could read out the TPM public key and sign
it with the corporate key.  Then they could use that cert rather than
the TPME cert.  This would protect the company's sensitive data against
eavesdroppers who manage to virtualize their hardware.
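
Sketched out, with a toy MAC again standing in for a real corporate
signing key and the X.509 machinery a deployment would actually use:

    import hashlib, hmac

    def corporate_cert(tpm_pubkey, corp_priv):
        # IT reads the TPM public key out of each machine before
        # deployment and signs it with the corporate key.
        return hmac.new(corp_priv, tpm_pubkey, hashlib.sha256).digest()

    def accept_peer(tpm_pubkey, cert, corp_priv):
        # Company machines check this cert instead of the TPME cert,
        # so even a leaked TPME key can't forge a company endpoint.
        return hmac.compare_digest(cert, corporate_cert(tpm_pubkey, corp_priv))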

For the larger public network, the first thing I would suggest is that
the TPME key ought 

Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread AARG!Anonymous

Re the debate over whether compilers reliably produce identical object
(executable) files:

The measurement and hashing in TCPA/Palladium will probably not be done
on the file itself, but on the executable content that is loaded into
memory.  For Palladium it is just the part of the program called the
"trusted agent".  So file headers with dates, compiler version numbers,
etc., will not be part of the data which is hashed.

The only thing that would really break the hash would be changes to the
compiler code generator that cause it to create different executable
output for the same input.  This might happen between versions, but
probably most widely used compilers are relatively stable in that
respect these days.  Specifying the compiler version and build flags
should provide good reliability for having the executable content hash
the same way for everyone.
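
In other words, the measurement might look like this sketch, where the
fixed-size header is a made-up stand-in for whatever metadata the real
loader skips:

    import hashlib

    HEADER_SIZE = 512   # hypothetical metadata region: dates, versions, etc.

    def content_hash(path):
        with open(path, "rb") as f:
            data = f.read()
        # Hash only the loaded executable content, not the file headers.
        return hashlib.sha1(data[HEADER_SIZE:]).hexdigest()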




Privacy-enhancing uses for TCPA

2002-08-03 Thread AARG!Anonymous

Here are some alternative applications for TCPA/Palladium technology which
could actually promote privacy and freedom.  A few caveats, though: they
do depend on a somewhat idealized view of the architecture.  It may be
that real hardware/software implementations are not sufficiently secure
for some of these purposes, but as systems become better integrated
and more technologically sound, this objection may go away.  And these
applications do assume that the architecture is implemented without secret
backdoors or other intentional flaws, which might be guaranteed through
an open design process and manufacturing inspections.  Despite these
limitations, hopefully these ideas will show that TCPA and Palladium
actually have many more uses than the heavy-handed and control-oriented
ones which have been discussed so far.

To recap, there are basically two technologies involved.  One is "secure
attestation".  This allows one machine to securely receive a hash of the
software which is running on a remote machine.  It is used in these
examples to know that a trusted client program is running on the remote
machine.  The other is "secure storage".  This allows programs to encrypt
data in such a way that no other program can decrypt it.
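
As a hypothetical API surface (all of these names are invented for
illustration, not taken from either spec):

    def attest(nonce):
        """Return a signed statement binding nonce to the hash of the
        software now running, for a remote verifier to check."""

    def seal(data, software_hash):
        """Encrypt data so it can only be decrypted when software
        matching software_hash is running."""

    def unseal(blob):
        """Inverse of seal; fails if the current environment differs."""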

In addition, we assume that programs are able to run unmolested;
that is, that other software and even the user cannot peek into the
program's memory and manipulate it or learn its secrets.  Palladium has
a feature called "trusted space" which is supposed to be some special
memory that is immune from being compromised.  We also assume that
all data sent between computers is encrypted using something like SSL,
with the secret keys being held securely by the client software (hence
unavailable to anyone else, including the users).

The effect of these technologies is that a number of computers across
the net, all running the same client software, can form their own
closed virtual world.  They can exchange and store data of any form,
and no one can get access to it unless the client software permits it.
That means that the user, eavesdroppers, and authorities are unable to
learn the secrets protected by software which uses these TCPA features.
(Note, in the sequel I will just write TCPA when I mean TCPA/Palladium.)

Now for a simple example of what can be done: a distributed poker game.
Of course there are a number of crypto protocols for playing poker on the
net, but they are quite complicated.  Even though they've been around
for almost 20 years, I've never seen game software which uses them.
With TCPA we can do it trivially.

Each person runs the same client software, which fact can be tested
using secure attestation.  The dealer's software randomizes a deck and
passes out the cards to each player.  The cards are just strings like
"ace of spades", or perhaps simple numerical equivalents - nothing fancy.
Of course, the dealer's software learns in this way what cards every
player has.  But the dealer himself (i.e. the human player) doesn't
see any of that, he only sees his own hand.  The software keeps the
information secret from the user.  As each person makes his play, his
software sends simple messages telling what cards he is exposing or
discarding, etc.  At the end each person sends messages showing what
his hand is, according to the rules of poker.

This is a trivial program.  You could do it in one or two pages of code.
And yet, given the TCPA assumptions, it is just as secure as a complex
cryptographically protected version would be that takes ten times as
much code.
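
For flavor, the heart of the dealer logic really is this small
(assuming the attestation and secrecy guarantees above; a real client
would use a cryptographic RNG rather than random):

    import random

    RANKS = "2 3 4 5 6 7 8 9 10 J Q K A".split()
    SUITS = ["spades", "hearts", "diamonds", "clubs"]

    def deal(players, cards_each=5):
        deck = [r + " of " + s for r in RANKS for s in SUITS]
        random.shuffle(deck)   # the software sees all cards; the users don't
        return {p: [deck.pop() for _ in range(cards_each)] for p in players}

    hands = deal(["alice", "bob", "carol"])
    # Each client shows its local user only that user's own hand.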

Of course, without TCPA such a program would never work.  Someone would
write a cheating client which would tell them what everyone else's cards
were when they were the dealer.  There would be no way that people could
trust each other not to do this.  But TCPA lets people prove to each
other that they are running the legitimate client.

So this is a simple example of how the secure attestation features of
TCPA/Palladium can allow a kind of software which would never work today,
software where people trust each other.  Let's look at another example,
a P2P system with anonymity.

Again, there are many cryptographic systems in the literature for
anonymous communication.  But they tend to be complicated and inefficient.
With TCPA we only need to set up a simple flooding broadcast network.
Let each peer connect to a few other peers.  To prevent traffic
analysis, keep each node-to-node link at a constant traffic level using
dummy padding.  (Recall that each link is encrypted using SSL.)

When someone sends data, it gets sent everywhere via a simple routing
strategy.  The software then makes the received message available to the
local user, if he is the recipient.  Possibly the source of the message
is carried along with it, to help with routing; but this information is
never leaked outside the secure communications part of the software,
and never shown to any users.
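
The routing core is a few lines (padding and the SSL links are omitted;
this is just the flood logic, with my own names):

    class Node:
        def __init__(self, name):
            self.name, self.neighbors, self.seen = name, [], set()

        def receive(self, msg_id, payload, recipient):
            if msg_id in self.seen:
                return                    # drop duplicates; the flood terminates
            self.seen.add(msg_id)
            if recipient == self.name:    # delivered only to the recipient;
                print(self.name, "got", payload)  # source never shown to users
            for peer in self.neighbors:
                peer.receive(msg_id, payload, recipient)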

That's all there is to it.  Just send messages with flood broadcasts,
but 

RE: Challenge to David Wagner on TCPA

2002-08-02 Thread AARG!Anonymous

Peter Trei writes:

 It's rare enough that when a new anonym appears, we know
 that the poster made a considered decision to be anonymous.

 The current poster seems to have parachuted in from nowhere, 
 to argue a specific position on a single topic. It's therefore 
 reasonable  to infer that the nature of that position and topic has 
 some bearing on the decision to be anonymous.


Yes, my name is AARG!.  That was the first thing my mother said after
I was born, and the name stuck.

Not really.  For Peter's information, the name associated with a
message through an anonymous remailer is simply the name of the
last remailer in the chain, whatever that remailer operator chose
to call it.  AARG is a relatively new remailer, but if you look at
http://anon.efga.org/Remailers/TypeIIList you will see that it is very
reliable and fast.  I have been using it as an exit remailer lately
because other ones that I have used often produce inconsistent results.
It has not been unusual to have to send a message two or three times
before it appears.  So far that has not been a problem with this one.

So don't read too much into the fact that a bunch of anonymous postings
have suddenly started appearing from one particular remailer.  For your
information, I have sent over 400 anonymous messages in the past year
to cypherpunks, coderpunks, sci.crypt and the cryptography list (35
of them on TCPA related topics).




RE: Challenge to David Wagner on TCPA

2002-08-02 Thread AARG!Anonymous

Peter Trei envisions data recovery in a TCPA world:

 HoM:  I want to recover my data.
 Me:   OK: We'll pull the HD, and get the data off it.
 HoM:  Good - mount it as a secondary HD in my new system.
 Me:   That isn't going to work now we have TCPA and Palladium.
 HoM:  Well, what do you have to do?
 Me:   Oh, it's simple. We encrypt the data under Intel's TPME key,
       and send it off to Intel. Since Intel has all the keys, they can
       unseal all your data to plaintext, copy it, and then re-seal it
       for your new system. It only costs $1/Mb.
 HoM:  Let me get this straight - the only way to recover this data is
       to let Intel have a copy, AND pay them for it?
 Me:   Um... Yes. I think MS might be involved as well, if you were
       using Word.
 HoM:  You are *so* dead.

It's not quite as bad as all this, but it is still pretty bad.

You don't have to send your data to Intel, just a master storage key.
This key encrypts the other keys which encrypt your data.  Normally this
master key never leaves your TPM, but there is this optional feature
where it can be backed up, encrypted to the manufacturer's public key,
for recovery purposes.  I think it is also in blinded form.

Obviously you'd need to do this backup step before the TPM crashed;
afterwards is too late.  So maybe when you first get your system it
generates the on-chip storage key (called the SRK, storage root key),
and then exports the recovery blob.  You'd put that on a floppy or some
other removable medium and store it somewhere safe.  Then when your
system dies you pull out the disk and get the recovery blob.

You communicate with the manufacturer, give him this recovery blob, along
with the old TPM key and the key to your new TPM in the new machine.
The manufacturer decrypts the blob and re-encrypts it to the TPM in the
new machine.  It also issues and distributes a CRL revoking the cert on
the old TPM key so that the old machine can't be used to access remote
TCPA data any more.  (Note, the CRL is not used by the TPM itself, it is
just used by remote servers to decide whether to believe client requests.)

The manufacturer sends the data back to you and you load it into the TPM
in your new machine, which decrypts it and stores the master storage key.
Now it can read your old data.
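
Putting the steps in order as an outline (the function names are mine;
the spec's actual maintenance commands differ):

    def recover(old_tpm_pub, new_tpm_pub, manufacturer, recovery_blob):
        # 1. The blob is the SRK encrypted to the manufacturer's key.
        srk = manufacturer.decrypt(recovery_blob)
        # 2. Re-encrypt it to the TPM in the new machine.
        reencrypted = manufacturer.encrypt_to(new_tpm_pub, srk)
        # 3. Revoke the old endorsement cert so remote servers stop
        #    trusting requests from the old machine.
        manufacturer.crl.add(old_tpm_pub)
        return reencrypted   # loaded into the new TPM, which decrypts it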

Someone asked if you'd have to go through all this if you just upgraded
your OS.  I'm not sure.  There are several secure registers on the
TPM, called PCRs, which can hash different elements of the BIOS, OS,
and other software.  You can lock a blob to any one of these registers.
So in some circumstances it might be that upgrading the OS would keep the
secure data still available.  In other cases you might have to go through
some kind of recovery procedure.
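
The PCRs work as hash chains; roughly (a sketch of the spec's "extend"
idea, with SHA-1 as in TCPA):

    import hashlib

    def extend(pcr, measurement):
        # The new PCR value commits to the old value plus the new
        # measurement, so a register reflects the whole boot sequence.
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

    pcr = b"\x00" * 20                      # power-on value
    for part in [b"BIOS", b"boot loader", b"OS kernel"]:
        pcr = extend(pcr, part)             # any change to any part shows up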

I think this recovery business is a real Achilles heel of the TCPA
and Palladium proposals.  They are paranoid about leaking sealed data,
because the whole point is to protect it.  So they can't let you freely
copy it to new machines, or decrypt it from an insecure OS.  This anal
protectiveness is inconsistent with the flexibility needed in an imperfect
world where stuff breaks.

My conclusion is that the sealed storage of TCPA will be used sparingly.
Ross Anderson and others suggest that Microsoft Word will seal all of
its documents so that people can't switch to StarOffice.  I think that
approach would be far too costly and risky, given the realities I have
explained above.  Instead, I would expect that only highly secure data
would be sealed, and that there would often be some mechanism to recover
it from elsewhere.  For example, in a DRM environment, maybe the central
server has a record of all the songs you have downloaded.  Then if your
system crashes, rather than go through a complicated crypto protocol to
recover, you just buy a new machine, go to the server, and re-download
all the songs you were entitled to.

Or in a closed environment, like a business which seals sensitive
documents, the data could be backed up redundantly to multiple central
file servers, each of which seal it.  Then if one machine crashes,
the data is available from others and there is no need to go through
the recovery protocol.

So there are solutions, but they will add complexity and cost.  At the
same time they do add genuine security and value.  Each application and
market will have to find its own balance of the costs and benefits.




Re: Challenge to David Wagner on TCPA

2002-08-01 Thread AARG!Anonymous

Eric Murray writes:
 TCPA (when it isn't turned off) WILL restrict the software that you
 can run.  Software that has an invalid or missing signature won't be
 able to access sensitive data[1].   Meaning that unapproved software
 won't work.

 [1] TCPAmain_20v1_1a.pdf, section 2.2

We need to look at the text of this in more detail.  This is from
version 1.1b of the spec:

: This section introduces the architectural aspects of a Trusted Platform
: that enable the collection and reporting of integrity metrics.
:
: Among other things, a Trusted Platform enables an entity to determine
: the state of the software environment in that platform and to SEAL data
: to a particular software environment in that platform.
:
: The entity deduces whether the state of the computing environment in
: that platform is acceptable and performs some transaction with that
: platform. If that transaction involves sensitive data that must be
: stored on the platform, the entity can ensure that that data is held in
: a confidential format unless the state of the computing environment in
: that platform is acceptable to the entity.
:
: To enable this, a Trusted Platform provides information to enable the
: entity to deduce the software environment in a Trusted Platform. That
: information is reliably measured and reported to the entity. At the same
: time, a Trusted Platform provides a means to encrypt cryptographic keys
: and to state the software environment that must be in place before the
: keys can be decrypted.

What this means is that a remote system can query the local TPM and
find out what software has been loaded, in order to decide whether to
send it some data.  It's not that "unapproved software won't work";
it's that the remote guy can decide whether to trust it.

Also, as stated earlier, data can be sealed such that it can only be
unsealed when the same environment is booted.  This is the part above
about encrypting cryptographic keys and making sure the right software
environment is in place when they are decrypted.

 Ok, technically it will run but can't access the data,
 but that is a very fine hair to split, and depending on the nature of
 the data that it can't access, it may not be able to run in truth.

 If TCPA allows all software to run, it defeats its purpose.
 Therefore Wagner's statement is logically correct.

But no, the TCPA does allow all software to run.  Just because a remote
system can decide whether to send it some data doesn't mean that software
can't run.  And just because some data may be inaccessible because it
was sealed when another OS was booted, also doesn't mean that software
can't run.

I think we agree on the facts, here.  All software can run, but the TCPA
allows software to prove its hash to remote parties, and to encrypt data
such that it can't be decrypted by other software.  Would you agree that
this is an accurate summary of the functionality, and not misleading?

If so, I don't see how you can get from this to saying that some software
won't run.  You might as well say that encryption means that software
can't run, because if I encrypt my files then some other programs may
not be able to read them.

Most people, as you may have seen, interpret this part about software
"can't run" much more literally.  They think it means that software needs
a signature in order to be loaded and run.  I have been going over and
over this on sci.crypt.  IMO the facts as stated two paragraphs up are
completely different from such a model.

 Yes, the spec says that it can be turned off.  At that point you
 can run anything that doesn't need any of the protected data or
 other TCPA services.   But, why would a software vendor that wants
 the protection that TCPA provides allow his software to run
 without TCPA as well, abandoning those protections?

That's true; in fact if you ran it earlier under TCPA and sealed some
data, you will have to run under TCPA to unseal it later.  The question
is whether the advantages of running under TCPA (potentially greater
security) outweigh the disadvantages (greater potential for loss of
data, less flexibility, etc.).

 I doubt many would do so, the majority of TCPA-enabled
 software will be TCPA-only.  Perhaps not at first, but eventually
 when there are enough TCPA machines out there.  More likely, spiffy
 new content and features will be enabled if one has TCPA and is
 properly authenticated, disabled otherwise.  But as we have seen
 time after time, today's spiffy new content is tomorrow's
 virtual standard.

Right, the strongest case will probably be for DRM.  You might be able
to download all kinds of content if you are running an OS and application
that the server (content provider) trusts.  People will have a choice of
using TCPA and getting this data legally, or avoiding TCPA and trying to
find pirated copies as they do today.

 This will require the majority of people to run with TCPA turned on
 if they want the content.  TCPA doesn't need to be required by law,