Re: Microsoft: Palladium will not limit what you can run

2003-03-15 Thread Anonymous
Eugen Leitl writes:

 Unfortunately no one can accept in good faith a single word coming out of
 Redmond. Biddle has been denying Pd can be used for DRM in presentation
 (xref Lucky Green subsequent patent claims to call the bluff), however in
 recent (of this week) Focus interview Gates explicitly stated it does.  

I don't know what Gates said in this Focus interview but you have
misstated the history here.  Microsoft has never denied that Palladium can
be used for DRM.  Rather, the issue with regard to Lucky Green's supposed
patent application (whatever happened to that, anyway?) was whether
Palladium would be used for software copy protection.  Microsoft said
that they couldn't think of any way to use it for that purpose.  See
http://www.mail-archive.com/[EMAIL PROTECTED]/msg02554.html.

 Let's see, we have an ubiquitous built-in DRM infrastructure, developed
 under great expense and deployed under costs in an industry turning over
 every cent twice, and no-one is going to use it (Palladium will limit
 what programs people can run)?

Microsoft's point with regard to DRM has always been that Palladium had
other uses besides that one which everyone was focused on.  Obviously they
fully expect people to use the technology.

I'm not sure where you get the part about it being deployed under costs.
Is this more of the XBox analogy?  That's a video game system, where
the economics are totally dissimilar to commodity PC's.  All video game
consoles are sold below cost today.  PCs generally are not.  This is a
misleading analogy.

In any case, DRM does not limit what programs people can run, at least
not to a greater degree than does any program which encrypts its data.

 Right. It's all completely voluntary. There will be no attempts whatsoever 
 to lock-in, despite decades of attempts and considerable economic 
 interests involved. 

Yes, it is completely voluntary, and we should all remain vigilant to
make sure it stays that way.  And no doubt there will be efforts to
lock-in customers, just as there have been in the past.  There is no
contradiction between these two points.



Re: Quantum computers inch closer?

2002-08-31 Thread AARG!Anonymous

Bear writes:
 In this case you'd need to set up the wires-and-gates model
 in the QC for two ciphertext blocks, each attached to an
 identical plaintext-recognizer function and attached to the
 same key register.  Then you set up the entangled state,
 and collapse the eigenvector on the eigenstate where the
 ciphertext for block A and block B is produced, and the
 plaintext recognizer for both block A and block B return
 1, and then you'd read the plaintext and key out of the
 appropriate locations (dots?) in the qchip.

The problem is that you can't forcibly collapse the state vector into your
wished-for eigenstate, the one where the plaintext recognizer returns a 1.
Instead, it will collapse into a random state, associated with a random
key, with probability given by each eigenstate's squared amplitude; for a
k-bit key that leaves only about a 2^-k chance of landing on the
recognizer-accepting state.  It is overwhelmingly likely that the key you
read out is one for which the recognizer returns 0.




Re: Cryptographic privacy protection in TCPA

2002-08-17 Thread AARG!Anonymous

Dr. Mike wrote, patiently, persistently and truthfully:

 On Fri, 16 Aug 2002, AARG! Anonymous wrote:

  Here are some more thoughts on how cryptography could be used to
  enhance user privacy in a system like TCPA.  Even if the TCPA group
  is not receptive to these proposals, it would be useful to have an
  understanding of the security issues.  And the same issues arise in
  many other kinds of systems which use certificates with some degree
  of anonymity, so the discussion is relevant even beyond TCPA.

 OK, I'm going to discuss it from a philosophical perspective.
 i.e. I'm just having fun with this.

Fine, but let me put this into perspective.  First, although the
discussion is in terms of a centralized issuer, the same issues arise if
there are multiple issuers, even in a web-of-trust situation.  So don't
get fixated on the fact that my analysis assumed a single issuer -
that was just for simplicity in what was already a very long message.

The abstract problem to be solved is this: given that there is some
property which is being asserted via cryptographic certificates
(credentials), we want to be able to show possession of that property
in an anonymous way.  In TCPA the property is "being a valid TPM."
Another example would be a credit rating agency which can give out a "good
credit risk" credential.  You want to be able to show it anonymously in
some cases.  Yet another case would be a state driver's license agency
which gives out an "over age 21" credential, again where you want to be
able to show it anonymously.

This is actually one of the oldest problems which proponents of
cryptographic anonymity attempted to address, going back to David Chaum's
seminal work.  TCPA could represent the first wide-scale example of
cryptographic credentials being shown anonymously.  That in itself ought
to be of interest to cypherpunks.  Unfortunately TCPA is not going for
full cryptographic protection of anonymity, but relying on Trusted Third
Parties in the form of Privacy CAs.  My analysis suggests that although
there are a number of solutions in the cryptographic literature, none of
them are ideal in this case.  Unless we can come up with a really strong
solution that satisfies all the security properties, it is going to be
hard to make a case that the use of TTPs is a mistake.


 I don't like the idea that users *must* have a certificate.  Why
 can't each person develop their own personal levels of trust and
 associate it with their own public key?  Using multiple channels,
 people can prove their key is their word.  If any company wants to
 associate a certificate with a customer, that can have lots of meanings
 to lots of other people.  I don't see the usefulness of a permanent
 certificate.  Human interaction over electronic media has to deal
 with monkeys, because that's what humans are :-)

A certificate is a standardized and unforgeable statement that some
person or key has a particular property, that's all.  The kind of system
you are talking about, of personal knowledge and trust, can't really be
generalized to an international economy.


  Actually, in this system the Privacy CA is not really protecting
  anyone's privacy, because it doesn't see any identities.  There is no
  need for multiple Privacy CAs and it would make more sense to merge
  the Privacy CA and the original CA that issues the permanent certs.
  That way there would be only one agency with the power to forge keys,
  which would improve accountability and auditability.

 I really, REALLY, *REALLY*, don't like the idea of one entity having
 the ability to create or destroy any person's ability to use their
 computer at whim.  You are suggesting that one person (or small group)
 has the power to create (or not) and revoke (or not!) any and all TPM's!

 I don't know how to describe my astonishment at the lack of comprehension
 of history.

Whoever makes a statement about a property should have the power to
revoke it.  I am astounded that you think this is a radical notion.

If one or a few entities become widely trusted to make and revoke
statements that people care about, it is because they have earned that
trust.  If the NY Times says something is true, people tend to believe it.

If Intel says that such-and-such a key is in a valid TPM, people may
choose to believe this based on Intel's reputation.  If Intel later
determines that the key has been published on the net and so can no
longer be presumed to be a TPM key, it revokes its statement.

This does not mean that Intel would destroy any person's ability to use
their computer on a whim.  First, having the TPM cert revoked would not
destroy your ability to use your computer; at worst you could no longer
persuade other people of your trustworthiness.  And second, Intel would
not make these kinds of decisions on a whim, any more than the NY Times
would publish libelous articles on a whim; doing so would risk destroying
the company's reputation, one of its most valuable assets.

I can't

Cryptographic privacy protection in TCPA

2002-08-16 Thread AARG!Anonymous

Here are some more thoughts on how cryptography could be used to
enhance user privacy in a system like TCPA.  Even if the TCPA group
is not receptive to these proposals, it would be useful to have an
understanding of the security issues.  And the same issues arise in
many other kinds of systems which use certificates with some degree
of anonymity, so the discussion is relevant even beyond TCPA.

The basic requirement is that users have a certificate on a long-term key
which proves they are part of the system, but they don't want to show that
cert or that key for most of their interactions, due to privacy concerns.
They want to have their identity protected, while still being able to
prove that they do have the appropriate cert.  In the case of TCPA the
key is locked into the TPM chip, the "endorsement key"; and the cert
is called the "endorsement certificate," expected to be issued by the
chip manufacturer.  Let us call the originating cert issuer "the CA" in
this document, and the long-term cert the "permanent certificate."

A secondary requirement is for some kind of revocation in the case
of misuse.  For TCPA this would mean cracking the TPM and extracting
its key.  I can see two situations where this might lead to revocation.
The first is a global crack, where the extracted TPM key is published
on the net, so that everyone can falsely claim to be part of the TCPA
system.  That's a pretty obvious case where the key must be revoked for
the system to have any integrity at all.  The second case is a local
crack, where a user has extracted his TPM key but keeps it secret, using
it to cheat the TCPA protocols.  This would be much harder to detect,
and perhaps equally significantly, much harder to prove.  Nevertheless,
some way of responding to this situation is a desirable security feature.

The TCPA solution is to use one or more Privacy CAs.  You supply your
permanent cert and a new short-term identity key; the Privacy CA
validates the cert and then signs your key, giving you a new cert on the
identity key.  For routine use on the net, you show your identity cert
and use your identity key; your permanent key and cert are never shown
except to the Privacy CA.

This means that the Privacy CA has the power to revoke your anonymity;
and worse, he (or more precisely, his key) has the power to create bogus
identities.  On the plus side, the Privacy CA can check a revocation list
and not issue a new identity cert if the permanent key has been revoked.
And if someone has done a local crack and the evidence is strong enough,
the Privacy CA can revoke his anonymity and allow his permanent key to
be revoked.
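
To make the trust relationships concrete, here is a minimal Python sketch
of that baseline exchange.  All the names and objects are simplified
stand-ins of my own (no real signatures), but it shows where the linkage
happens: the Privacy CA necessarily sees the permanent cert and the new
identity key side by side.

from dataclasses import dataclass

@dataclass
class Cert:
    subject_key: str     # the key being certified
    issuer: str          # who vouches for it

def privacy_ca_issue(permanent_cert, identity_key):
    # Step 1: validate the permanent (endorsement) cert.
    if permanent_cert.issuer != "TPM-manufacturer-CA":
        return None
    # Step 2: the CA now holds the pair (permanent cert, identity key),
    # which is exactly what lets it link them and revoke anonymity.
    return Cert(subject_key=identity_key, issuer="Privacy-CA")

endorsement = Cert("PUBEK-1234", "TPM-manufacturer-CA")
identity_cert = privacy_ca_issue(endorsement, "IDKEY-abcd")  # shown on the net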

Let us now consider some cryptographic alternatives.  The first is to
use Chaum blinding for the Privacy CA interaction.  As before, the user
supplies his permanent cert to prove that he is a legitimate part of
the system, but instead of providing an identity key to be certified,
he supplies it in blinded form.  The Privacy CA signs this blinded key,
the user strips the blinding, and he is left with a cert from the Privacy
CA on his identity key.  He uses this as in the previous example, showing
his privacy cert and using his privacy key.
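
For concreteness, here is a toy sketch of that blinding step in Python,
using RSA with absurdly small parameters.  The numbers and names are mine
for illustration only; a real deployment would use full-size keys and
proper padding.

import secrets
from math import gcd

# Hypothetical Privacy CA signing key (tiny primes, illustration only)
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))     # CA's private exponent

identity_key = 123456789 % n          # stand-in for the user's identity key

# User picks a random blinding factor and blinds the identity key
while True:
    r = secrets.randbelow(n)
    if r > 1 and gcd(r, n) == 1:
        break
blinded = (identity_key * pow(r, e, n)) % n

# The Privacy CA signs the blinded value; it never sees identity_key itself
blind_sig = pow(blinded, d, n)

# User strips the blinding and holds a valid CA signature on identity_key
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == identity_key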

In this system, the Privacy CA no longer has the power to revoke your
anonymity, because he only saw a blinded version of your identity key.
However, the Privacy CA retains the power to create bogus identities,
so the security risk is still there.  If there has been a global crack,
and a permanent key has been revoked, the Privacy CA can check the
revocation list and prevent that user from acquiring new identities,
so revocation works for global cracks.  However, for local cracks,
where there is suspicious behavior, there is no way to track down the
permanent key associated with the cheater.  All his interactions are
done with an identity key which is unlinkable.  So there is no way to
respond to local cracks and revoke the keys.

Actually, in this system the Privacy CA is not really protecting
anyone's privacy, because it doesn't see any identities.  There is no
need for multiple Privacy CAs and it would make more sense to merge
the Privacy CA and the original CA that issues the permanent certs.
That way there would be only one agency with the power to forge keys,
which would improve accountability and auditability.

One problem with revocation in both of these systems, especially the one
with Chaum blinding, is that existing identity certs (from before the
fraud was detected) may still be usable.  It is probably necessary to
have identity certs be valid for only a limited time so that users with
revoked keys are not able to continue to use their old identity certs.

Brands credentials provide a more flexible and powerful approach than
Chaum blinding, and can potentially address these problems.  The basic
setup is the same: users would go to a Privacy CA and show their
permanent cert, getting a new cert on an identity key which they would
use on the net.  The difference is that 

Re: Overcoming the potential downside of TCPA

2002-08-15 Thread AARG!Anonymous

Joe Ashwood writes:

 Actually that does nothing to stop it. Because of the construction of TCPA,
 the private keys are registered _after_ the owner receives the computer,
 this is the window of opportunity against that as well.

Actually, this is not true for the endorsement key, PUBEK/PRIVEK, which
is the main TPM key, the one which gets certified by the TPM Entity.
That key is generated only once on a TPM, before ownership, and must
exist before anyone can take ownership.  For reference, see section 9.2:
"The first call to TPM_CreateEndorsementKeyPair generates the endorsement
key pair. After a successful completion of TPM_CreateEndorsementKeyPair
all subsequent calls return TCPA_FAIL."  Also section 9.2.1 shows that
no ownership proof is necessary for this step, which is because there is
no owner at that time.  Then look at section 5.11.1, on taking ownership:
the "user must encrypt the values using the PUBEK."  So the PUBEK must exist
before anyone can take ownership.

 The worst case for
 cost of this is to purchase an additional motherboard (IIRC Fry's has them
 as low as $50), giving the ability to present a purchase. The
 virtual-private key is then created, and registered using the credentials
 borrowed from the second motherboard. Since TCPA doesn't allow for direct
 remote queries against the hardware, the virtual system will actually have
 first shot at the incoming data. That's the worst case.

I don't quite follow what you are proposing here, but by the time you
purchase a board with a TPM chip on it, it will have already generated
its PUBEK and had it certified.  So you should not be able to transfer
a credential of this type from one board to another one.

 The expected case;
 you pay a small registration fee claiming that you accidentally wiped your
 TCPA. The best case, you claim you accidentally wiped your TCPA, they
 charge you nothing to remove the record of your old TCPA, and replace it
 with your new (virtualized) TCPA. So at worst this will cost $50. Once
 you've got a virtual setup, that virtual setup (with all its associated
 purchased rights) can be replicated across an unlimited number of computers.
 
 The important part for this, is that TCPA has no key until it has an owner,
 and the owner can wipe the TCPA at any time. From what I can tell this was
 designed for resale of components, but is perfectly suitable as a point of
 attack.

Actually I don't see a function that will let the owner wipe the PUBEK.
He can wipe the rest of the TPM but that field appears to be set once,
retained forever.

For example, section 8.10: "Clear is the process of returning the TPM to
factory defaults."  But a couple of paragraphs later: "All TPM volatile
and non-volatile data is set to default value except the endorsement
key pair."

So I don't think your fraud will work.  Users will not wipe their
endorsement keys, accidentally or otherwise.  If a chip is badly enough
damaged that the PUBEK is lost, you will need a hardware replacement,
as I read the spec.

Keep in mind that I only started learning this stuff a few weeks ago,
so I am not an expert, but this is how it looks to me.




Re: Palladium: technical limits and implications

2002-08-12 Thread AARG!Anonymous

Adam Back writes:
 +---++  
 | trusted-agent | user mode  |  
 |space  | app space  |  
 |(code  ++  
 | compartment)  | supervisor |  
 |   | mode / OS  |  
 +---++
 | ring -1 / TOR  |
 ++  
 | hardware / SCP key manager |
 ++  

I don't think this works.  According to Peter Biddle, the TOR can be
launched even days after the OS boots.  It does not underlie the ordinary
user mode apps and the supervisor mode system call handlers and device
drivers.

+---++  
| trusted-agent | user mode  |  
|space  | app space  |  
|(code  ++  
| compartment)  | supervisor |  
|   | mode / OS  |  
+---+   +---++
|SCP|---| ring -1 / TOR |
+---+   +---+



This is more how I would see it.  The SCP is more like a peripheral
device, a crypto co-processor, that is managed by the TOR.  Earlier you
quoted Seth's blog:

| The nub is a kind of trusted memory manager, which runs with more
| privilege than an operating system kernel. The nub also manages access
| to the SCP.

as justification for putting the nub (TOR) under the OS.  But I think in
this context "more privilege" could just refer to the fact that it is in
the secure memory, which is only accessed by this ring -1 or ring 0 or
whatever you want to call it.  It doesn't follow that the nub has anything
to do with the OS proper.  If the OS can run fine without it, as I think
you agreed, then why would the entire architecture have to reorient itself
once the TOR is launched? 

In other words, isn't my version simpler, as it adjoins the column at
the left to the pre-existing column at the right, when the TOR launches,
days after boot?  Doesn't it require less instantaneous, on-the-fly,
reconfiguration of the entire structure of the Windows OS at the moment
of TOR launch?  And what, if anything, does my version fail to accomplish
that we know that Palladium can do?


 Integrity Metrics in a given level are computed by the level below.

 The TOR starts Trusted Agents, the Trusted Agents are outside the OS
 control.  Therefore a remote application based on remote attestation
 can know about the integrity of the trusted-agent, and TOR.

 ring -1/TOR is computed by SCP/hardware; Trusted Agent is computed by
 TOR;

I had thought the hardware might also produce the metrics for trusted
agents, but you could be right that it is the TOR which does so.
That would be consistent with the "incremental extension of trust"
philosophy which many of these systems seem to follow.

 The parallel stack to the right: OS is computed by TOR; Application is
 computed by OS.

No, that doesn't make sense.  Why would the TOR need to compute a metric
of the OS?  Peter has said that Palladium does not give information about
other apps running on your machine:

: Note that in Pd no one but the user can find out the totality of what SW is
: running except for the nub (aka TOR, or trusted operating root) and any
: required trusted services. So a service could say "I will only communicate
: with this app" and it will know that the app is what it says it is and
: hasn't been perverted. The service cannot say "I won't communicate with this
: app if this other app is running" because it has no way of knowing for sure
: if the other app isn't running.


 So for general applications you still have to trust the OS, but the OS
 could itself have its integrity measured by the TOR.  Of course given
 the rate of OS exploits especially in Microsoft products, it seems
 likely that the aspect of the OS that checks integrity of loaded
 applications could itself be tampered with using a remote exploit.

Nothing Peter or anyone else has said indicates that this is a property of
Palladium, as far as I can remember.

 Probably the latter problem is the reason Microsoft introduced ring -1
 in palladium (it seems to be missing in TCPA).

No, I think it is there to prevent debuggers and supervisor-mode drivers
from manipulating secure code.  TCPA is more of a whole-machine spec
dealing with booting an OS, so it doesn't have to deal with the question
of running secure code next to insecure code.




Re: responding to claims about TCPA

2002-08-12 Thread AARG!Anonymous

David Wagner wrote:
 To respond to your remark about bias: No, bringing up Document Revocation
 Lists has nothing to do with bias.  It is only right to seek to understand
 the risks in advance.  I don't understand why you seem to insinuate
 that bringing up the topic of Document Revocation Lists is an indication
 of bias.  I sincerely hope that I misunderstood you.

I believe you did, because if you look at what I actually wrote, I did not
say that bringing up the topic of DRLs is an indication of bias:

 The association of TCPA with SNRLs is a perfect example of the bias and
 sensationalism which has surrounded the critical appraisals of TCPA.
 I fully support John's call for a fair and accurate evaluation of this
 technology by security professionals.  But IMO people like Ross Anderson
 and Lucky Green have disqualified themselves by virtue of their wild and
 inaccurate public claims.  Anyone who says that TCPA has SNRLs is making
 a political statement, not a technical one.

My core claim is the last sentence.  It's one thing to say, as you
are, that TCPA could make applications implement SNRLs more securely.
I believe that is true, and if this statement is presented in the context
of "dangers of TCPA" or something similar, it would be appropriate.
But even then, for a fair analysis, it should make clear that SNRLs can
be done without TCPA, and it should go into some detail about just how
much more effective a SNRL system would be with TCPA.  (I will write more
about this in responding to Joseph Ashwood.)

And to be truly unbiased, it should also talk about good uses of TCPA.

If you look at Ross Anderson's TCPA FAQ at
http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html, he writes (question 4):

: When you boot up your PC, Fritz takes charge. He checks that the boot
: ROM is as expected, executes it, measures the state of the machine;
: then checks the first part of the operating system, loads and executes
: it, checks the state of the machine; and so on. The trust boundary, of
: hardware and software considered to be known and verified, is steadily
: expanded. A table is maintained of the hardware (audio card, video card
: etc) and the software (O/S, drivers, etc); Fritz checks that the hardware
: components are on the TCPA approved list, that the software components
: have been signed, and that none of them has a serial number that has
: been revoked.

He is not saying that TCPA could make SNRLs more effective.  He says
that Fritz "checks... that none of [the software components] has a
serial number that has been revoked."  He is flatly stating that the
TPM chip checks a serial number revocation list.  That is both biased
and factually untrue.

Ross's whole FAQ is incredibly biased against TCPA.  I don't see how
anyone can fail to see that.  If it were titled "FAQ about Dangers of
TCPA" at least people would be warned that they were getting a one-sided
presentation.  But it is positively shameful for a respected security
researcher like Ross Anderson to pretend that this document is giving
an unbiased and fair description.

I would be grateful if someone who disagrees with me, who thinks that
Ross's FAQ is fair and even-handed, would speak up.  It amazes me that
people can see things so differently.

And Lucky's slide presentation, http://www.cypherpunks.to, is if anything
even worse.  I already wrote about this in detail so I won't belabor
the point.  Again, I would be very curious to hear from someone who
thinks that his presentation was unbiased.




Re: responding to claims about TCPA

2002-08-10 Thread AARG!Anonymous
 combination of access rights and software
 environment.  ... Applications that might benefit include ... delivery
 of digital content (such as movies and songs).  (page 15).

Yes, DRM can clearly benefit from TCPA/Palladium.  And you might be
right that they are downplaying that now.  But the reason could be
that people have focused too much on it as the only purpose for TCPA,
just as you have done here.  So they are trying to play up the other
possibilities so as to get some balance in the discussion.


 Of course, they can't help writing in the DRM mindset regardless of
 their intent to confuse us.  In that July 2002 FAQ again:

   9. Does TCPA certify applications and OS's that utilize TPMs? 
   
   No.  The TCPA has no plans to create a certifying authority to
   certify OS's or applications as trusted.  The trust model the TCPA
   promotes for the PC is: 1) the owner runs whatever OS or
   applications they want; 2) The TPM assures reliable reporting of the
   state of the platform; and 3) the two parties engaged in the
   transaction determine if the other platform is trusted for the
   intended transaction.

 The transaction?  What transaction?  They were talking about the
 owner getting reliable reporting on the security of their applications
 and OS's and -- uh -- oh yeah, buying music or video over the Internet.

You are reading an awful lot into this one word, "transaction."  That
doesn't necessarily mean buying digital content.  In the abstract sense,
"transaction" is sometimes used to refer to any exchange of information in
a protocol.  Even if we do stick to its commercial meaning, it can mean
a B2B exchange or any of a wide range of other e-commerce activities.
It's not specific to DRM by any means.


 Part of their misleading technique has apparently been to present no
 clear layman's explanations of the actual workings of the technology.
 There's a huge gap between the appealing marketing sound bites -- or
 FAQ lies -- and the deliberately dry and uneducational 400-page
 technical specs.  My own judgement is that this is probably
 deliberate, since if the public had an accurate 20-page document that
 explained how this stuff works and what it is good for, they would
 reject the tech instantly.

I agree that the documentation is a problem, but IMO it probably reflects
lack of resources rather than obfuscation.  I believe that TCPA has many
more applications than you and other critics are giving it credit for,
and that a good, clear explanation of what it could do would actually
gain it support.  Do a blog search at daypop.com to see what people are
really thinking about TCPA.  They read Ross Anderson's TCPA FAQ and take
it for gospel.  They believe TCPA has serial number revocations and all
these other features that are not described in any documents I have seen.
A good clear TCPA description could only improve its reputation, which
certainly can't go any lower than it is.


 Perhaps we in the community should write such a document.  Lucky and
 Adam Back seem to be working towards it.  The similar document about
 key-escrow (that CDT published after assembling a panel of experts
 including me, Whit, and Matt Blaze) was quite useful in explaining to
 lay people and Congressmen what was wrong with it.  NSA/DoJ had
 trouble countering it, since it was based on the published facts, and
 they couldn't impugn the credentials of the authors, nor the
 document's internal reasoning.

I agree in principle, but I am appalled that you believe that Lucky in
particular is heading in the right direction.  Adam on the other hand
has at least begun to study TCPA and was asking good questions about
Palladium before Peter Biddle flew the coop.  Will this document say
that TCPA is designed to support intelligence agency access to computers?
to kill free software?  and other such claims from Lucky's presentation?
If so, you will only hurt your cause.  On the other hand, if you do
come up with factual and unbiased information showing both good and bad
aspects of TCPA, as I think Adam has come close to doing a few times,
then it could be a helpful document.


 Intel and Microsoft and anonymous chauvinists can and should criticize
 such a document if we write one.  That will strengthen it by
 eliminating any faulty reasoning or errors of public facts.  But they
 had better bring forth new exculpating facts if they expect the
 authors to change their conclusions.

Conclusions should be based on technology.  TCPA can be rightly
criticized for weak protections of privacy, for ultimately depending on
the security of a few central keys and of possibly-weak hardware, and on
other technical grounds.  But you should not criticize it for supporting
DRM, or for making reverse engineering more difficult, because people
are under no obligation to give their creative works away for free,
or to make it easy for other people to copy their software.  Leave your
values at home and just present the facts.


 They're free to allege

Seth on TCPA at Defcon/Usenix

2002-08-10 Thread AARG!Anonymous

Seth Schoen of the EFF has a good blog entry about Palladium and TCPA
at http://vitanuova.loyalty.org/2002-08-09.html.  He attended Lucky's
presentation at DEF CON and also sat on the TCPA/Palladium panel at
the USENIX Security Symposium.

Seth has a very balanced perspective on these issues compared to most
people in the community.  It makes me proud to be an EFF supporter
(in fact I happen to be wearing my EFF T-shirt right now).

His description of how the Document Revocation List could work is
interesting as well.  Basically you would have to connect to a server
every time you wanted to read a document, in order to download a key
to unlock it.  Then if someone decided that the document needed
to "un-exist," they would arrange for the server to stop providing
that key, and the document would effectively be deleted, everywhere.

I think this clearly would not be a feature that most people would accept
as an enforced property of their word processor.  You'd be unable to
read things unless you were online, for one thing.  And any document you
were relying on might be yanked away from you with no warning.  Such a
system would be so crippled that if Microsoft really did this for Word,
sales of vi would go through the roof.

It reminds me of an even better way for a word processor company to make
money: just scramble all your documents, then demand ONE MILLION DOLLARS
for the keys to decrypt them.  The money must be sent to a numbered
Swiss account, and the software checks with a server to find out when
the money has arrived.  Some of the proposals for what companies will
do with Palladium seem about as plausible as this one.

Seth draws an analogy with Acrobat, where the paying customers are
actually the publishers, the reader being given away for free.  So Adobe
does have incentives to put in a lot of DRM features that let authors
control publication and distribution.

But he doesn't follow his reasoning to its logical conclusion when dealing
with Microsoft Word.  That program is sold to end users - people who
create their own documents for the use of themselves and their associates.
The paying customers of Microsoft Word are exactly the ones who would
be screwed over royally by Seth's scheme.  So if we follow the money
as Seth in effect recommends, it becomes even more obvious that Microsoft
would never force Word users to be burdened with a DRL feature.

And furthermore, Seth's scheme doesn't rely on TCPA/Palladium.  At the
risk of aiding the fearmongers, I will explain that TCPA technology
actually allows for a much easier implementation, just as it does in so
many other areas.  There is no need for the server to download a key;
it only has to download an updated DRL, and the Word client software
could be trusted to delete anything that was revoked.  But the point
is, Seth's scheme would work just as well today, without TCPA existing.
As I quoted Ross Anderson saying earlier with regard to serial number
revocation lists, these features don't need TCPA technology.

So while I have some quibbles with Seth's analysis, on the whole it is
the most balanced that I have seen from someone who has no connection
with the designers (other than my own writing, of course).  A personal
gripe is that he referred to Lucky's "critics," plural, when I feel
all alone out here.  I guess I'll have to start using the royal "we."
But he redeemed himself by taking mild exception to Lucky's slide show,
which is a lot farther than anyone else has been willing to go in public.




Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread AARG!Anonymous

Anon wrote:
 You could even have each participant compile the program himself,
 but still each app can recognize the others on the network and
 cooperate with them.

Matt Crawford replied:
 Unless the application author can predict the exact output of the
 compilers, he can't issue a signature on the object code.  The
 compilers then have to be inside the trusted base, checking a
 signature on the source code and reflecting it somehow through a
 signature they create for the object code.

It's likely that only a limited number of compiler configurations would
be in common use, and signatures on the executables produced by each of
those could be provided.  Then all the app writer has to do is to tell
people, "get compiler version so-and-so and compile with that, and your
object will match the hash my app looks for."
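
Mechanically, the check is simple; here is a sketch (names hypothetical)
of an app publishing one expected hash per common compiler configuration:

import hashlib

reference_build = b"...object code produced by compiler X vN..."  # hypothetical
published_hashes = {
    "compiler-X-vN-O2": hashlib.sha256(reference_build).hexdigest(),
}

def accepted(executable: bytes) -> bool:
    # A peer accepts the executable if its hash matches any published build
    return hashlib.sha256(executable).hexdigest() in published_hashes.values()

assert accepted(reference_build)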




[no subject]

2002-08-09 Thread AARG!Anonymous

Adam Back writes a very thorough analysis of possible consequences of the
amazing power of the TCPA/Palladium model.  He is clearly beginning to
"get it" as far as what this is capable of.  There is far more to this
technology than simple DRM applications.  In fact Adam has a great idea
for how this could finally enable selling idle CPU cycles while protecting
crucial and sensitive business data.  By itself this could be a killer
app for TCPA/Palladium.  And once more people start thinking about how to
exploit the potential, there will be no end to the possible applications.

Of course his analysis is spoiled by an underlying paranoia.  So let me
ask just one question.  How exactly is subversion of the TPM a greater
threat than subversion of your PC hardware today?  How do you know that
Intel or AMD don't already have back doors in their processors that
the NSA and other parties can exploit?  Or that Microsoft doesn't have
similar backdoors in its OS?  And similarly for all the other software
and hardware components that make up a PC today?

In other words, is this really a new threat?  Or are you unfairly blaming
TCPA for a problem which has always existed and always will exist?




Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread AARG!Anonymous

I want to follow up on Adam's message because, to be honest, I missed
his point before.  I thought he was bringing up the old claim that these
systems would give the TCPA root on your computer.

Instead, Adam is making a new point, which is a good one, but to
understand it you need a true picture of TCPA rather than the false one
which so many cypherpunks have been promoting.  Earlier Adam offered a
proposed definition of TCPA/Palladium's function and purpose:

 Palladium provides an extensible, general purpose programmable
 dongle-like functionality implemented by an ensemble of hardware and
 software which provides functionality which can, and likely will be
 used to expand centralised control points by OS vendors, Content
 Distributors and Governments.

IMO this is total bullshit, political rhetoric that is content-free
compared to the one I offered:

: Allow computers separated on the internet to cooperate and share data
: and computations such that no one can get access to the data outside
: the limitations and rules imposed by the applications.

It seems to me that my definition is far more useful and appropriate in
really understanding what TCPA/Palladium are all about.  Adam, what do
you think?

If we stick to my definition, you will come to understand that the purpose
of TCPA is to allow application writers to create closed spheres of trust,
where the application sets the rules for how the data is handled.  It's
not just DRM, it's Napster and banking and a myriad other applications,
each of which can control its own sensitive data such that no one can
break the rules.

At least, that's the theory.  But Adam points out a weak spot.  Ultimately
applications trust each other because they know that the remote systems
can't be virtualized.  The apps are running on real hardware which has
real protections.  But applications know this because the hardware has
a built-in key which carries a certificate from the manufacturer, who
is called the TPME in TCPA.  As the applications all join hands across
the net, each one shows his cert (in effect) and all know that they are
running on legitimate hardware.

So the weak spot is that anyone who has the TPME key can run a virtualized
TCPA, and no one will be the wiser.  With the TPME key they can create
their own certificate that shows that they have legitimate hardware,
when they actually don't.  Ultimately this lets them run a rogue client
that totally cheats, disobeys all the restrictions, shows the user all
of the data which is supposed to be secret, and no one can tell.

Furthermore, if people did somehow become suspicious about one particular
machine, with access to the TPME key the eavesdroppers can just create
a new virtual TPM and start the fraud all over again.

It's analogous to how someone with Verisign's key could masquerade as
any secure web site they wanted.  But it's worse because TCPA is almost
infinitely more powerful than PKI, so there is going to be much more
temptation to use it and to rely on it.

Of course, this will be inherently somewhat self-limiting as people learn
more about it, and realize that the security provided by TCPA/Palladium,
no matter how good the hardware becomes, will always be limited to
the political factors that guard control of the TPME keys.  (I say
keys because likely more than one company will manufacture TPM's.
Also in TCPA there are two other certifiers: one who certifies the
motherboard and computer design, and the other who certifies that the
board was constructed according to the certified design.  The NSA would
probably have to get all 3 keys, but this wouldn't be that much harder
than getting just one.  And if there are multiple manufacturers then
only 1 key from each of the 3 categories is needed.)

To protect against this, Adam offers various solutions.  One is to do
crypto inside the TCPA boundary.  But that's pointless, because if the
crypto worked, you probably wouldn't need TCPA.  Realistically most of the
TCPA applications can't be cryptographically protected.  Computing with
encrypted instances is a fantasy.  That's why we don't have all those
secure applications already.

Another is to use a web of trust to replace or add to the TPME certs.
Here's a hint.  Webs of trust don't work.  Either they require strong
connections, in which case they are too sparse, or they allow weak
connections, in which case they are meaningless and anyone can get in.

I have a couple of suggestions.  One early application for TCPA is in
closed corporate networks.  In that case the company usually buys all
the computers and prepares them before giving them to the employees.
At that time, the company could read out the TPM public key and sign
it with the corporate key.  Then they could use that cert rather than
the TPME cert.  This would protect the company's sensitive data against
eavesdroppers who manage to virtualize their hardware.
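
A sketch of that enrollment step, with HMAC under a corporate secret
standing in for a real corporate signature (all names here are mine):

import hashlib, hmac, os

CORPORATE_KEY = os.urandom(32)       # held by the IT department

def enroll(tpm_pubkey: bytes) -> bytes:
    # Run once per machine, before it is handed to an employee
    return hmac.new(CORPORATE_KEY, tpm_pubkey, hashlib.sha256).digest()

def company_peer_trusts(tpm_pubkey: bytes, cert: bytes) -> bool:
    expected = hmac.new(CORPORATE_KEY, tpm_pubkey, hashlib.sha256).digest()
    return hmac.compare_digest(cert, expected)

machine_key = os.urandom(32)         # read out of the TPM at enrollment time
cert = enroll(machine_key)
assert company_peer_trusts(machine_key, cert)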

For the larger public network, the first thing I would suggest is that
the TPME key ought 

Re: Challenge to TCPA/Palladium detractors

2002-08-09 Thread AARG!Anonymous

Re the debate over whether compilers reliably produce identical object
(executable) files:

The measurement and hashing in TCPA/Palladium will probably not be done
on the file itself, but on the executable content that is loaded into
memory.  For Palladium it is just the part of the program called the
"trusted agent."  So file headers with dates, compiler version numbers,
etc., will not be part of the data which is hashed.

The only thing that would really break the hash would be changes to the
compiler code generator that cause it to create different executable
output for the same input.  This might happen between versions, but
probably most widely used compilers are relatively stable in that
respect these days.  Specifying the compiler version and build flags
should provide good reliability for having the executable content hash
the same way for everyone.
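
A tiny illustration of why measuring the loaded content rather than the
raw file sidesteps the header problem (the file layout here is made up):

import hashlib

def measure(image: bytes, header_len: int) -> str:
    # Hash only the executable content, skipping the date-bearing header
    return hashlib.sha256(image[header_len:]).hexdigest()

build_a = b"HDR:2002-08-09;" + b"\x55\x89\xe5..."  # same code, different dates
build_b = b"HDR:2002-08-10;" + b"\x55\x89\xe5..."
assert measure(build_a, 15) == measure(build_b, 15)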




Privacy-enhancing uses for TCPA

2002-08-03 Thread AARG!Anonymous

Here are some alternative applications for TCPA/Palladium technology which
could actually promote privacy and freedom.  A few caveats, though: they
do depend on a somewhat idealized view of the architecture.  It may be
that real hardware/software implementations are not sufficiently secure
for some of these purposes, but as systems become better integrated
and more technologically sound, this objection may go away.  And these
applications do assume that the architecture is implemented without secret
backdoors or other intentional flaws, which might be guaranteed through
an open design process and manufacturing inspections.  Despite these
limitations, hopefully these ideas will show that TCPA and Palladium
actually have many more uses than the heavy-handed and control-oriented
ones which have been discussed so far.

To recap, there are basically two technologies involved.  One is secure
attestation, which allows a machine to securely learn a hash of the
software running on a remote machine.  It is used in these examples to
know that a trusted client program is running on the remote machine.
The other is secure storage, which allows a program to encrypt data
in such a way that no other program can decrypt it.

In addition, we assume that programs are able to run unmolested;
that is, that other software and even the user cannot peek into the
program's memory and manipulate it or learn its secrets.  Palladium has
a feature called "trusted space" which is supposed to be some special
memory that is immune from being compromised.  We also assume that
all data sent between computers is encrypted using something like SSL,
with the secret keys being held securely by the client software (hence
unavailable to anyone else, including the users).
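
Here is a minimal model of the attestation check a peer would perform
before admitting a client into such a closed world.  An HMAC under a key
known only to the TPM stands in for the TPM's real certified signature;
all names are hypothetical.

import hashlib, hmac, os

TPM_KEY = os.urandom(32)             # stands in for the TPM's certified key
TRUSTED_CLIENT_HASH = hashlib.sha256(b"client-v1.0 code").hexdigest()

def tpm_quote(software: bytes):
    # The TPM reports a hash of the loaded software, authenticated by its key
    h = hashlib.sha256(software).hexdigest()
    return h, hmac.new(TPM_KEY, h.encode(), hashlib.sha256).digest()

def peer_accepts(h: str, sig: bytes) -> bool:
    expected = hmac.new(TPM_KEY, h.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and h == TRUSTED_CLIENT_HASH

assert peer_accepts(*tpm_quote(b"client-v1.0 code"))
assert not peer_accepts(*tpm_quote(b"cheating client code"))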

The effect of these technologies is that a number of computers across
the net, all running the same client software, can form their own
closed virtual world.  They can exchange and store data of any form,
and no one can get access to it unless the client software permits it.
That means that the user, eavesdroppers, and authorities are unable to
learn the secrets protected by software which uses these TCPA features.
(Note, in the sequel I will just write TCPA when I mean TCPA/Palladium.)

Now for a simple example of what can be done: a distributed poker game.
Of course there are a number of crypto protocols for playing poker on the
net, but they are quite complicated.  Even though they've been around
for almost 20 years, I've never seen game software which uses them.
With TCPA we can do it trivially.

Each person runs the same client software, which fact can be tested
using secure attestation.  The dealer's software randomizes a deck and
passes out the cards to each player.  The cards are just strings like
ace of spades, or perhaps simple numerical equivalents - nothing fancy.
Of course, the dealer's software learns in this way what cards every
player has.  But the dealer himself (i.e. the human player) doesn't
see any of that, he only sees his own hand.  The software keeps the
information secret from the user.  As each person makes his play, his
software sends simple messages telling what cards he is exposing or
discarding, etc.  At the end each person sends messages showing what
his hand is, according to the rules of poker.

This is a trivial program.  You could do it in one or two pages of code.
And yet, given the TCPA assumptions, it is just as secure as a complex
cryptographically protected version would be that takes ten times as
much code.
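
To back up the "one or two pages" claim, here is a sketch of the dealer
logic, assuming the TCPA guarantees hold (attestation, sealed memory,
encrypted links); everything else about it is ordinary code:

import random

RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
SUITS = ["spades", "hearts", "diamonds", "clubs"]

def deal(num_players, cards_per_hand=5):
    # Under TCPA this state lives in trusted space, hidden even from
    # the human running the dealer's machine.
    deck = [f"{r} of {s}" for s in SUITS for r in RANKS]
    random.SystemRandom().shuffle(deck)
    return [deck[i * cards_per_hand:(i + 1) * cards_per_hand]
            for i in range(num_players)]

hands = deal(4)
# Each hand would go out over the encrypted link to the matching attested
# client; the dealer's own screen would show only hands[0].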

Of course, without TCPA such a program would never work.  Someone would
write a cheating client which would tell them what everyone else's cards
were when they were the dealer.  There would be no way that people could
trust each other not to do this.  But TCPA lets people prove to each
other that they are running the legitimate client.

So this is a simple example of how the secure attestation features of
TCPA/Palladium can allow a kind of software which would never work today,
software where people trust each other.  Let's look at another example,
a P2P system with anonymity.

Again, there are many cryptographic systems in the literature for
anonymous communication.  But they tend to be complicated and inefficient.
With TCPA we only need to set up a simple flooding broadcast network.
Let each peer connect to a few other peers.  To prevent traffic
analysis, keep each node-to-node link at a constant traffic level using
dummy padding.  (Recall that each link is encrypted using SSL.)

When someone sends data, it gets sent everywhere via a simple routing
strategy.  The software then makes the received message available to the
local user, if he is the recipient.  Possibly the source of the message
is carried along with it, to help with routing; but this information is
never leaked outside the secure communications part of the software,
and never shown to any users.
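
A bare-bones model of the flooding logic (ignoring the SSL links and the
constant-rate padding, which are assumed to be in place):

import hashlib

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.seen = set()
        self.inbox = []

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def receive(self, recipient, payload):
        msg_id = hashlib.sha256((recipient + payload).encode()).hexdigest()
        if msg_id in self.seen:
            return                    # already flooded; stop the loop
        self.seen.add(msg_id)
        if recipient == self.name:
            self.inbox.append(payload)  # deliver locally, reveal nothing else
        for peer in self.peers:
            peer.receive(recipient, payload)  # re-flood to every neighbor

a, b, c = Node("alice"), Node("bob"), Node("carol")
a.connect(b); b.connect(c); a.connect(c)
a.receive("carol", "hello")
assert c.inbox == ["hello"]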

That's all there is to it.  Just send messages with flood broadcasts

RE: Challenge to David Wagner on TCPA

2002-08-02 Thread AARG!Anonymous

Peter Trei writes:

 It's rare enough that when a new anononym appears, we know
 that the poster made a considered decision to be anonymous.

 The current poster seems to have parachuted in from nowhere, 
 to argue a specific position on a single topic. It's therefore 
 reasonable  to infer that the nature of that position and topic has 
 some bearing on the decision to be anonymous.


Yes, my name is AARG!.  That was the first thing my mother said after
I was born, and the name stuck.

Not really.  For Peter's information, the name associated with a
message through an anonymous remailer is simply the name of the
last remailer in the chain, whatever that remailer operator chose
to call it.  AARG is a relatively new remailer, but if you look at
http://anon.efga.org/Remailers/TypeIIList you will see that it is very
reliable and fast.  I have been using it as an exit remailer lately
because other ones that I have used often produce inconsistent results.
It has not been unusual to have to send a message two or three times
before it appears.  So far that has not been a problem with this one.

So don't read too much into the fact that a bunch of anonymous postings
have suddenly started appearing from one particular remailer.  For your
information, I have sent over 400 anonymous messages in the past year
to cypherpunks, coderpunks, sci.crypt and the cryptography list (35
of them on TCPA related topics).




RE: Challenge to David Wagner on TCPA

2002-08-02 Thread AARG!Anonymous

Peter Trei envisions data recovery in a TCPA world:

 HoM:  I want to recover my data.
 Me:   OK: We'll pull the HD, and get the data off it.
 HoM:  Good - mount it as a secondary HD in my new system.
 Me:   That isn't going to work now we have TCPA and Palladium.
 HoM:  Well, what do you have to do?
 Me:   Oh, it's simple. We encrypt the data under Intel's TPME key,
  and send it off to Intel. Since Intel has all the keys, they can
  unseal all your data to plaintext, copy it, and then re-seal it for
  your new system. It only costs $1/Mb.
 HoM:  Let me get this straight - the only way to recover this data is
  to let Intel have a copy, AND pay them for it?
 Me:   Um... Yes. I think MS might be involved as well, if you were
  using Word.
 HoM:  You are *so* dead.

It's not quite as bad as all this, but it is still pretty bad.

You don't have to send your data to Intel, just a master storage key.
This key encrypts the other keys which encrypt your data.  Normally this
master key never leaves your TPM, but there is this optional feature
where it can be backed up, encrypted to the manufacturer's public key,
for recovery purposes.  I think it is also in blinded form.

Obviously you'd need to do this backup step before the TPM crashed;
afterwards is too late.  So maybe when you first get your system it
generates the on-chip storage key (called the SRK, storage root key),
and then exports the recovery blob.  You'd put that on a floppy or some
other removable medium and store it somewhere safe.  Then when your
system dies you pull out the disk and get the recovery blob.

You communicate with the manufacturer, give him this recovery blob, along
with the old TPM key and the key to your new TPM in the new machine.
The manufacturer decrypts the blob and re-encrypts it to the TPM in the
new machine.  It also issues and distributes a CRL revoking the cert on
the old TPM key so that the old machine can't be used to access remote
TCPA data any more.  (Note, the CRL is not used by the TPM itself, it is
just used by remote servers to decide whether to believe client requests.)

The manufacturer sends the data back to you and you load it into the TPM
in your new machine, which decrypts it and stores the master storage key.
Now it can read your old data.
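
A toy model of that flow, with XOR pads standing in for the real
public-key operations (purely illustrative):

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

mfr_key = os.urandom(32)     # manufacturer's recovery key
new_tpm = os.urandom(32)     # key of the TPM in the replacement machine
srk     = os.urandom(32)     # master storage key inside the old TPM

recovery_blob = xor(srk, mfr_key)   # exported at setup time, kept on a floppy

# After the crash: manufacturer decrypts the blob, re-encrypts to the new TPM
in_transit = xor(xor(recovery_blob, mfr_key), new_tpm)

# The new TPM decrypts and now holds the master storage key
assert xor(in_transit, new_tpm) == srk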

Someone asked if you'd have to go through all this if you just upgraded
your OS.  I'm not sure.  There are several secure registers on the
TPM, called PCRs, which can hash different elements of the BIOS, OS,
and other software.  You can lock a blob to any one of these registers.
So in some circumstances it might be that upgrading the OS would keep the
secure data still available.  In other cases you might have to go through
some kind of recovery procedure.
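
A toy model of sealing to a PCR shows why an OS upgrade can break the
seal (a real TPM does all of this in hardware and never releases the SRK):

import hashlib, hmac, os

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new = H(old || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def seal(srk: bytes, pcr: bytes, data: bytes) -> bytes:
    # Derive a blob key from the SRK and the current PCR state (toy cipher)
    key = hmac.new(srk, pcr, hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data, key))

unseal = seal                         # XOR with the same pad inverts it

srk = os.urandom(32)                  # storage root key, never leaves the TPM
pcr = extend(b"\x00" * 32, b"OS v1")  # boot-time measurement
blob = seal(srk, pcr, b"secret")
assert unseal(srk, pcr, blob) == b"secret"

pcr2 = extend(b"\x00" * 32, b"OS v2") # an upgraded OS measures differently
assert unseal(srk, pcr2, blob) != b"secret"   # the seal no longer opens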

I think this recovery business is a real Achilles heel of the TCPA
and Palladium proposals.  They are paranoid about leaking sealed data,
because the whole point is to protect it.  So they can't let you freely
copy it to new machines, or decrypt it from an insecure OS.  This anal
protectiveness is inconsistent with the flexibility needed in an imperfect
world where stuff breaks.

My conclusion is that the sealed storage of TCPA will be used sparingly.
Ross Anderson and others suggest that Microsoft Word will seal all of
its documents so that people can't switch to StarOffice.  I think that
approach would be far too costly and risky, given the realities I have
explained above.  Instead, I would expect that only highly secure data
would be sealed, and that there would often be some mechanism to recover
it from elsewhere.  For example, in a DRM environment, maybe the central
server has a record of all the songs you have downloaded.  Then if your
system crashes, rather than go through a complicated crypto protocol to
recover, you just buy a new machine, go to the server, and re-download
all the songs you were entitled to.

Or in a closed environment, like a business which seals sensitive
documents, the data could be backed up redundantly to multiple central
file servers, each of which seal it.  Then if one machine crashes,
the data is available from others and there is no need to go through
the recovery protocol.

So there are solutions, but they will add complexity and cost.  At the
same time they do add genuine security and value.  Each application and
market will have to find its own balance of the costs and benefits.




Re: Challenge to David Wagner on TCPA

2002-08-01 Thread AARG!Anonymous

Eric Murray writes:
 TCPA (when it isn't turned off) WILL restrict the software that you
 can run.  Software that has an invalid or missing signature won't be
 able to access sensitive data[1].   Meaning that unapproved software
 won't work.

 [1] TCPAmain_20v1_1a.pdf, section 2.2

We need to look at the text of this in more detail.  This is from
version 1.1b of the spec:

: This section introduces the architectural aspects of a Trusted Platform
: that enable the collection and reporting of integrity metrics.
:
: Among other things, a Trusted Platform enables an entity to determine
: the state of the software environment in that platform and to SEAL data
: to a particular software environment in that platform.
:
: The entity deduces whether the state of the computing environment in
: that platform is acceptable and performs some transaction with that
: platform. If that transaction involves sensitive data that must be
: stored on the platform, the entity can ensure that that data is held in
: a confidential format unless the state of the computing environment in
: that platform is acceptable to the entity.
:
: To enable this, a Trusted Platform provides information to enable the
: entity to deduce the software environment in a Trusted Platform. That
: information is reliably measured and reported to the entity. At the same
: time, a Trusted Platform provides a means to encrypt cryptographic keys
: and to state the software environment that must be in place before the
: keys can be decrypted.

What this means is that a remote system can query the local TPM and
find out what software has been loaded, in order to decide whether to
send it some data.  It's not that "unapproved software won't work,"
it's that the remote guy can decide whether to trust it.

Also, as stated earlier, data can be sealed such that it can only be
unsealed when the same environment is booted.  This is the part above
about encrypting cryptographic keys and making sure the right software
environment is in place when they are decrypted.

 Ok, technically it will run but can't access the data,
 but that it a very fine hair to split, and depending on the nature of
 the data that it can't access, it may not be able to run in truth.

 If TCPA allows all software to run, it defeats its purpose.
 Therefore Wagner's statement is logically correct.

But no, the TCPA does allow all software to run.  Just because a remote
system can decide whether to send it some data doesn't mean that software
can't run.  And the fact that some data may be inaccessible because it
was sealed when another OS was booted also doesn't mean that software
can't run.

I think we agree on the facts, here.  All software can run, but the TCPA
allows software to prove its hash to remote parties, and to encrypt data
such that it can't be decrypted by other software.  Would you agree that
this is an accurate summary of the functionality, and not misleading?

If so, I don't see how you can get from this to saying that some software
won't run.  You might as well say that encryption means that software
can't run, because if I encrypt my files then some other programs may
not be able to read them.

Most people, as you may have seen, interpret this part about "software
can't run" much more literally.  They think it means that software needs
a signature in order to be loaded and run.  I have been going over and
over this on sci.crypt.  IMO the facts as stated two paragraphs up are
completely different from such a model.

 Yes, the spec says that it can be turned off.  At that point you
 can run anything that doesn't need any of the protected data or
 other TCPA services.   But, why would a software vendor that wants
 the protection that TCPA provides allow his software to run
 without TCPA as well, abandoning those protections?

That's true; in fact if you ran it earlier under TCPA and sealed some
data, you will have to run under TCPA to unseal it later.  The question
is whether the advantages of running under TCPA (potentially greater
security) outweigh the disadvantages (greater potential for loss of
data, less flexibility, etc.).

 I doubt many would do so, the majority of TCPA-enabled
 software will be TCPA-only.  Perhaps not at first, but eventually
 when there are enough TCPA machines out there.  More likely, spiffy
 new content and features will be enabled if one has TCPA and is
 properly authenticated, disabled otherwise.  But as we have seen
 time after time, today's spiffy new content is tomorrow's
 virtual standard.

Right, the strongest case will probably be for DRM.  You might be able
to download all kinds of content if you are running an OS and application
that the server (content provider) trusts.  People will have a choice of
using TCPA and getting this data legally, or avoiding TCPA and trying to
find pirated copies as they do today.

 This will require the majority of people to run with TCPA turned on
 if they want the content.  TCPA doesn't need to be required by law;
 market pressure alone will be enough.

Re: Ross's TCPA paper

2002-06-24 Thread Anonymous

The amazing thing about this discussion is that there are two pieces
of conventional wisdom which people in the cypherpunk/EFF/freedom
communities adhere to, and they are completely contradictory.

The first is that protection of copyright is ultimately impossible.
See the analysis in Schneier and Kelsey's Street Performer Protocol
paper, http://www.counterpane.com/street_performer.pdf.  Or EFF
columnist Cory Doctorow's recent recitation of the conventional wisdom
at http://boingboing.net/2002_06_01_archive.html#85167215: "providing
an untrusted party with the key, the ciphertext and the cleartext but
asking that party not to make a copy of your message is just silly,
and can't possibly work in a world of Turing-complete computing."

The second is that evil companies are going to take over our computers
and turn us into helpless slaves who can only sit slack-jawed as they
force-feed us whatever content they desire, charging whatever they wish.
The recent outcry over TCPA falls into this category.

Cypherpunks alternate between smug assertions of the first claim and
panicked wailing about the second.  The important point about both of
them, from the average cypherpunk's perspective, is that neither leaves
any room for action.  Both views are completely fatalistic in tone.
In one, we are assured victory; in the other, defeat.  Neither allows
for human choice.

Let's apply a little common sense for a change, and analyze the situation
in the context of a competitive market economy.  Suppose there is no
law forcing people to use DRM-compliant systems, and everyone can decide
freely whether to use one or not.

This is plausible because, if we take the doom-sayers at their word,
the Hollings bill or equivalent is completely redundant and unnecessary.
Intel and Microsoft are already going forward.  The BIOS makers are
on board; TPM chips are being installed.  In a few years there will
be plenty of TCPA compliant systems in use and most new systems will
include this functionality.

Furthermore, inherent to the TCPA concept is that the chip can in
effect be turned off.  No one proposes to forbid you from booting a
non-compliant OS or including non-compliant drivers.  However the TPM
chip, in conjunction with a trusted OS, will be able to know that you
have done so.  And because the chip includes an embedded, certified key,
it will be impossible to falsely claim that your system is running in a
trusted mode - only the TPM chip can convincingly make that claim.

This means that whether the Hollings bill passes or not, the situation
will be exactly the same.  People running in trusted mode can prove
it; but anyone can run untrusted.  Even with the Hollings bill there
will still be people using untrusted mode.  The legislation would
not change that.  Therefore the Hollings bill would not increase the
effectiveness of the TCPA model.  And it follows, then, that Lucky and
Ross are wrong to claim that this bill is intended to legislate use of
the TCPA.  The TCPA does not require legislation.

Actually the Hollings bill is clearly targeted at the analog hole, such
as the video cable that runs from your PC to the display, or the audio
cable to your speakers.  Obviously the TCPA does no good in protecting
content if you can easily hook an A/D converter into those connections and
digitize high quality signals.  The only way to remove this capability
is by legislation, and that is clearly what the Hollings bill targets.
So much for the claim that this bill is intended to enforce the TCPA.

That claim is ultimately a red herring.  It doesn't matter if the bill
exists, what matters is that TCPA technology exists.  Let us imagine a
world in which most new PCs have TCPA built-in, Microsoft OS's have been
adapted to support it, maybe some other OS's have been converted as well.

The ultimate goal, according to the doom-sayers, is that digital content
will only be made available to people who are running in trusted
mode as determined by the TPM chip built into their system.  This will
guarantee that only an approved OS is loaded, and only approved drivers
are running.  It will not be possible to patch the OS or insert a custom
driver to intercept the audio/video stream.  You won't be able to run
the OS in a virtual mode and provide an emulated environment where you
can tap the data.  Your system will display the data for you, and you
will have no way to capture it in digital form.

Now there are some obvious loopholes here.  Microsoft software has a
track record of bugs, and let's face it, Linux does, too.  Despite the
claims, the TCPA by itself does nothing to reduce the threat of viruses,
worms, and other bug-exploiting software.  At best it includes a set of
checksums of key system components, but you can get software that does
that already.  Bugs in the OS and drivers may be exploitable and allow
for grabbing DRM protected content.  And once acquired, the data can
be made widely available.  No doubt the OS will be built to allow 

Re: Lucky's 1024-bit post

2002-05-13 Thread Anonymous

On Tue, 30 Apr 2002 at 17:36:29 -0700, Wei Dai wrote:
 On Wed, May 01, 2002 at 01:37:09AM +0200, Anonymous wrote:
  For about $200 you can buy a 1000 MIPS CPU, and the memory needed for
  sieving is probably another couple of hundred dollars.  So call it $500
  to get a computer that can sieve 1000 MIPS years in a year.

 You need a lot more than a couple of hundred dollars for the memory, 
 because you'll need 125 GB per machine. See Robert Silverman's post at 
 
http://groups.google.com/groups?hl=en&selm=8626nu%24e5g%241%40nnrp1.deja.com&prev=/groups%3Fq%3D1024%2Bsieve%2Bmemory%26start%3D20%26hl%3Den%26scoring%3Dd%26selm%3D8626nu%2524e5g%25241%2540nnrp1.deja.com%26rnum%3D21

 According to pricewatch.com, 128MB costs $14, so each of your sieving 
 machines would cost about $14000 instead of $500.

Silverman's comment makes sense; the memory needed is probably
proportional to the size of the factor base, and going from 512 to 1024
bits would plausibly increase the factor base size by at least 11 bits,
corresponding to a memory increase of a factor of ~ 2500 as he says.
If the 512 bit factorization used 50 MB per node for the sieving then
that would require extreme amounts of per node memory for 1024 bits.
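
As a quick sanity check of that scaling (a sketch; the 50 MB per node
at 512 bits and the 11 bit factor base growth are the assumptions
above, with memory taken as linear in the factor base):

    scale = 2 ** 11               # ~2048, Silverman's "factor of ~ 2500"
    mem_1024 = 50e6 * scale       # bytes per node at 1024 bits
    print(scale, mem_1024 / 1e9)  # -> 2048 102.4, consistent with 125 GB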

But how about using disk space instead of RAM for most of this?  Seems
like a sieve algorithm could have relatively linear and predictable memory
access patterns.  With a custom read-ahead DMA interface to the disk it
might be possible to run at high speed using only a fraction of the RAM,
acting as a disk buffer.  A 125 GB disk costs a few hundred dollars,
so that might bring the node cost back down to the $1000 range.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: Lucky's 1024-bit post

2002-05-12 Thread Anonymous

Wei Dai writes:
 Using a factor base size of 10^9, in the relationship finding phase you
 would have to check the smoothness of 2^89 numbers, each around 46 bits
 long. (See Frog3's analysis posted at
 http://www.mail-archive.com/cryptography%40wasabisystems.com/msg01833.html.  
 Those numbers look correct to me.)  If you assume a chip that can check
 one number per microsecond, you would need 10^13 chips to be able to
 complete the relationship finding phase in 4 months. Even at one dollar
 per chip this would cost ten trillion dollars (approximately the U.S. 
 GDP).

This is probably not the right way to approach the problem.  Bernstein's
relation-finding proposal to directly use ECM on each value, while
asymptotically superior to conventional sieving, is unlikely to be
cost-effective for 1024 bit keys.  Better to extrapolate from the recent
sieving results.

http://citeseer.nj.nec.com/cavallar00factorization.html is the paper
from Eurocrypt 2000 describing the first 512 bit RSA factorization.
The relation-finding phase took about 8000 MIPS years.  Based on the
conventional asymptotic formula, doing the work for a 1024 bit key
should take about 10^7 times as long or 80 billion MIPS years.
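
For reference, the 10^7 ratio follows from the conventional NFS cost
estimate L(n) = exp(1.923 (ln n)^(1/3) (ln ln n)^(2/3)).  A minimal
check, ignoring the o(1) term:

    from math import exp, log

    def L(bits, c=1.923):  # conventional NFS asymptotic, o(1) dropped
        ln_n = bits * log(2)
        return exp(c * ln_n ** (1/3) * log(ln_n) ** (2/3))

    print(f"{L(1024) / L(512):.1e}")  # -> ~7e+06, i.e. the 10^7 above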

For about $200 you can buy a 1000 MIPS CPU, and the memory needed for
sieving is probably another couple of hundred dollars.  So call it $500
to get a computer that can sieve 1000 MIPS years in a year.

If we are willing to take one year to generate the relations then
($500 / 1000) x 8 x 10^10 is $40 billion, used to buy
approximately 80 million cpu+memory combinations.  This will generate
the relations to break a 1024 bit key in a year.  If you need it in less
time you can spend proportionately more.  A $400 billion machine
could generate the relations in about a month.  This would be about 20%
of the current annual U.S. federal government budget.

However if you were limited to a $1 billion budget as the matrix
solver estimate assumed, the machine would take 40 years to generate
the relations.

 BTW, if we assume one watt per chip, the machine would consume 87 trillion
 kWh of electricity per year. The U.S. electricity production was only 3.678 
 trillion kWh in 1999.

The $40 billion, 1-year sieving machine draws on the order of 10 watts
per CPU so would draw about 800 megawatts in total, adequately supplied
by a dedicated nuclear reactor.
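
Putting the post's arithmetic in one place (every input below is an
assumption stated in this thread, not a measurement):

    mips_years = 8e10   # sieving work for 1024 bits: 8000 MY times 10^7
    node_cost  = 500.0  # dollars for a 1000 MIPS cpu plus sieving memory
    node_rate  = 1000.0 # MIPS-years of sieving per node per year

    nodes = mips_years / node_rate                       # 8e7 machines
    print(nodes * node_cost / 1e9)                       # -> 40.0, $40B for 1 year
    print(nodes * 12 * node_cost / 1e9)                  # -> 480.0, order $400B for 1 month
    print(mips_years / ((1e9 / node_cost) * node_rate))  # -> 40.0 years on a $1B budget
    print(nodes * 10 / 1e6)                              # 10 W each: 800 megawatts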

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: objectivity and factoring analysis

2002-04-29 Thread Anonymous

Nicko van Someren writes:
 I used the number 10^9 for the factor base size (compared to about
 6*10^6 for the break of the 512 bit challenge) and 10^11 for the
 weight of the matrix (compared to about 4*10^8 for RSA512).  Again
 these were guesses and they certainly could be out by an order of
 magnitude.

In his paper Bernstein uses a relatively larger factor base than in
typical current choices of parameters.  It's likely that the factor
bases which have been used in the past are too small in the sense that
the linear algebra step is being limited by machine size rather than
runtime, because of the difficulty of parallelizing it.  For example in
http://www.loria.fr/~zimmerma/records/RSA155 we find that the sieving took
8000 mips years but the linear algebra took 224 CPU hours on a 2GB Cray.
If there were a larger machine to do the matrix solution, the whole
process could be accelerated, and that's what Bernstein's figures assume.

Specifically he uses a factor base size of L^.7904, where L for 1024 bit
keys is approximately 2^45.  This is a matrix size of about 50 billion,
50 times larger than your estimate.  So a closer order-of-magnitude
estimate would be 10^11 for the factor base size and 10^13 for the weight
of the matrix.
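
These figures can be checked directly, assuming (as the numbers imply)
that L here means exp((ln n)^(1/3) (ln ln n)^(2/3)), and guessing on
the order of 100 nonzero entries per row to relate matrix size to
weight:

    from math import exp, log

    ln_n = 1024 * log(2)
    L = exp(ln_n ** (1/3) * log(ln_n) ** (2/3))
    print(log(L, 2))         # -> ~45.1, so L ~ 2^45 as stated
    y = L ** 0.7904          # Bernstein's factor base size
    print(f"{y:.1e}")        # -> ~5e+10, the "about 50 billion" columns
    print(f"{100 * y:.0e}")  # ~100 entries/row: weight ~ 5e+12, order 10^13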

 The matrix reduction cells are pretty simple and my guess was
 that we could build the cells plus inter-cell communication
 in about 1000 transistors.  I felt that, for a first order guess,
 we could ignore the transistors in the edge drivers since for a
 chip with N cells there are only order N^(1/2) edge drivers.
 Thus I guessed 10^14 transistors which might fit onto about 10^7
 chips which in volume (if you own the fabrication facility) cost
 about $10 each, or about $10^8 for the chips.  Based on past work
 in estimating the cost of large systems I then multiplied this
 by three or four to get a build cost.

The assumption of a larger factor base necessary for the large asymptotic
speedups would increase the cost estimate by a factor of about 50.
Instead of several hundred million dollars, it would be perhaps 10-50
billion dollars.  Of course at this level of discussion it's just as
easy to assume that the adversary spends $50 billion as $500 million;
it's all completely hypothetical.
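
In numbers, using the quoted assumptions of 1000 transistors per cell
and $10 per chip, and guessing ~10^7 transistors per chip (which is
what makes 10^14 transistors come out at 10^7 chips):

    trans_per_cell, trans_per_chip, chip_cost = 1000, 1e7, 10

    def chip_dollars(cells):
        return cells * trans_per_cell / trans_per_chip * chip_cost

    print(chip_dollars(1e11) / 1e6)  # weight 10^11 -> ~$100M in chips
    print(chip_dollars(1e13) / 1e9)  # weight 10^13 -> ~$10B in chips,
                                     # tens of billions after the 3-4x
                                     # system build multiplier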

 As far at the speed goes, this machine can compute a dot product
 in about 10^6 cycles.

Actually the sort algorithm described takes 8*sqrt(10^11) or about 2.5 *
10^6 cycles, and there are three sorts per dot product, so 10^7 cycles
would be a better estimate.

Using the larger factor base with 10^13 entries would imply a sort
time of 10^8 cycles, by this reasoning.
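
The cycle counts for both factor base sizes, assuming a mesh sort of N
entries in 8*sqrt(N) steps and three sorts per matrix-vector multiply:

    from math import sqrt

    for weight in (1e11, 1e13):
        per_sort = 8 * sqrt(weight)
        print(f"{per_sort:.1e} {3 * per_sort:.1e}")
    # -> 2.5e+06 and 7.6e+06 (call it 10^7) cycles at weight 10^11
    # -> 2.5e+07 and 7.6e+07 (call it 10^8) cycles at weight 10^13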

 Initially I thought that the board to
 board communication would be slow and we might only have a 1MHz
 clock for the long haul communication, but I messed up the total
 time and got that out as a 1 second matrix reduction.  In fact to
 compute a kernel takes about 10^11 times longer.  Fortunately it
 turns out that you can drive from board to board probably at a
 few GHz or better (using GMII type interfaces from back planes
 of network switches).  If we can get this up to 10GHz (we do have
 lots to spend on R&D here) we should be able to find a kernel in
 somewhere around 10^7 seconds, which is 16 weeks or 4 months.

Taking into consideration that the sort algorithm takes about 8 times
longer than you assumed, and that a few minimal polynomials have to
be calculated to get the actual one, this adds about a factor of 20
over your estimate.  Instead of 4 months it would be more like 7 years.
This is pretty clearly impractical.
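
One way to reproduce that factor of 20 (assumptions: Nicko's 4-month
figure implicitly budgeted 10^6 cycles per dot product, the sort
analysis above gives ~7.6*10^6, and computing a few minimal polynomials
is taken as ~2.5 times more passes):

    from math import sqrt

    nicko_cycles  = 1e6                 # per dot product, implied above
    actual_cycles = 3 * 8 * sqrt(1e11)  # ~7.6e6, the "8 times longer"
    passes = 2.5                        # assumed: "a few minimal polynomials"

    slowdown = actual_cycles / nicko_cycles * passes
    print(slowdown)                     # -> ~19, about a factor of 20
    print(1e7 * slowdown / 3.15e7)      # -> ~6 years, "more like 7 years"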

Apparently Ian Goldberg expressed concerns about the interconnections
when the machine was going to run at 1 MHz.  Now it is projected to run
10,000 times faster?  That's an aggressive design.  Obviously if this
speed cannot be achieved the run time goes up still more.  If only 1
GHz can be achieved rack to rack then the machine takes 70 years for one
factorization.  Needless to say, any bit errors anywhere will destroy the
result, which may have taken years to produce, requiring error correction
to be used, adding cost and possibly slowing the effective clock rate.

Using the larger factor base from the Bernstein paper would increase
the time to something like 10^11 seconds, thousands of years, which is
out of the question.

 Lastly, I want to reiterate that these are just estimates.  I
 give them here because you ask.  I don't expect them to be used
 for the design of any real machines; much more research is
 needed before that.  I do however think that they are rather
 more accurate than my first estimates.

These estimates are very helpful.  Thanks for providing them.  It seems
that, based on the factor base size derived from Bernstein's asymptotic
estimates, the machine is not feasible and would take thousands of years
to solve a matrix.  If the 50 times smaller factor base can be used,
the machine is on the edge of feasibility but it appears that it would
still take years to factor a single value.


Re: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis

2002-04-29 Thread Anonymous

Lucky Green writes:
 Given how panels are assembled and the role they fulfill, I thought it
 would be understood that when one writes that certain results came out
 of a panel that this does not imply that each panelist performed the
 same calculations. But rather that the information gained from a
 panel (Ian: math appears to be correct, Nicko: if the math is correct,
 these are the engineering implications of the math) are based on the
 combined input from the panelists. My apologies if this process of a
 panel was not understood by all readers and some readers therefore
 interpreted my post to indicate that both Ian and Nicko performed
 parallel engineering estimates.

What he wrote originally was:

: The panel, consisting of Ian Goldberg and Nicko van Someren, put forth
: the following rough first estimates:
:
: While the interconnections required by Bernstein's proposed architecture
: add a non-trivial level of complexity, as Bruce Schneier correctly
: pointed out in his latest CRYPTOGRAM newsletter, a 1024-bit RSA
: factoring device can likely be built using only commercially available
: technology for a price range of several hundred million dollars to about
: 1 billion dollars
: Bernstein's machine, once built, ... will be able to break a 1024-bit
: RSA or DH key in seconds to minutes.

It's not a matter of assuming parallel engineering estimates, but rather
the implication here is that Ian endorsed the results.  In saying that
the panel put forth a result, and the panel is composed of named people,
it implies that the named people put forth the result.  The mere fact
that Ian found it necessary to immediately post a disclaimer makes it
clear how misleading this phrasing was.

Another problem with Lucky's comment is that somewhere between Nicko's
thinking and Lucky's posting, the fact was dropped that only the matrix
solver was being considered.  This is only 1/2 the machine; in fact in
most factoring efforts today it is the smaller part of the whole job.
Neither Nicko nor Ian nor anyone else passed judgement on the equally
crucial question of whether the other part of the machine was buildable.

 It was not until at least a week after FC that I contacted Nicko
 inquiring if he still believed that his initial estimates were correct,
 now that he had some time to think about it. He told me that the
 estimates had not changed.

It is obvious that in fact Nicko had not spent much time going over
his figures, else he would have immediately spotted the factor of 10
million error in his run time estimate.  Saying that his estimates had
not changed is meaningless if he has not reviewed them.

Lucky failed to make clear the cursory nature of these estimates, that the
machine build cost was based on a hurried hour's work before the panel,
and that the run time was based on about 5 seconds calculation during
the panel itself.  It's not relevant whether this was in part Nicko's
fault for perhaps not making clear to Lucky that the estimate stood in
the same shape a week later.  But it was Lucky who went public with the
claim, so he must take the blame for the inaccuracy.

In fact, if Lucky had passed his incendiary commentary to Nicko and
Ian for review before publishing it, it is clear that they would have
asked for corrections.  Ian would have wanted to remove his name from
the implied endorsement of the numeric results, and Nicko would have
undoubtedly wanted to see more caveats placed on figures which were
going to be attached to his name all over the net, as well as making
clear that he was just talking about the matrix solution.  Of course
this would have removed much of the drama from Lucky's story.

The moral is if you're going to quote people, you're obligated to check
the accuracy of the quotes.  Lucky is not a journalist but in this
instance he is playing one on the net, and he deserves to be criticized
for committing such an elementary blunder, just as he would deserve
credit for bringing a genuine breakthrough to wide attention.

 For example, Bruce has been quoted in a widely-cited eWeek article that
 I don't assume that someone with a massive budget has already built
 this machine, because I don't believe that the machine can be built.

 Bruce shortly thereafter stated in his Cryptogram newsletter that I
 have long believed that a 1024-bit key could fall to a machine costing
 $1 billion.

 Since these quotes describe mutually exclusive viewpoints, we have an
 example of what can happen when a debate spills over into the popular
 media.
 ...
 http://www.eweek.com/article/0,3658,s=712&a=24663,00.asp

They are not mutually exclusive, and the difference is clear.  In the
first paragraph, Bruce is saying that Bernstein's design is not practical.
To get his asymptotic results of 3x key length, Bernstein must forego the
use of sieving and replace it with a parallel ECM factoring algorithm
to determine smoothness.  Asymptotically, this is a much lower cost
approach for finding relations, but it is unlikely to be cost-effective
at 1024-bit sizes, which is why Bruce says the machine cannot be built.
His second statement reflects his long-held view of what a conventional
$1 billion design could do, so the two quotes are not in conflict.

Re: objectivity and factoring analysis

2002-04-21 Thread Anonymous

Nicko van Someren writes:

 The estimate
 of the cost of construction I gave was some hundreds of
 millions of dollars, a figure by which I still stand.

But what does that mean, to specify (and stand by) the cost of
construction of a factoring machine, without saying anything about how
fast it runs?  Heck, we could factor 1024 bit numbers with a large abacus,
if we don't care about speed.  A cost figure is meaningless unless in the
context of a specific performance goal.

 I was then asked how fast this machine would run and I tried
 to do the calculation on the spot without a copy of the
 proposal to hand, and came up with a figure on the order
 of a second based on very conservative hardware design.
 This figure is *wildly* erroneous as a result of both not
 having the paper to hand and also not even having an
 envelope on the back of which I could scratch notes.

And yet here you say that it took you completely by surprise when someone
asked how fast the machine would run.  In all of your calculations on the
design of the machine, you had apparently never calculated how fast it
would be.

How could this be?  Surely in creating your hundreds of millions
of dollars estimate you must have based that on some kind of speed
consideration.  How else could you create the design?  This seems very
confusing.

And, could you clarify just a few more details, like what was the size
you were assuming for the factor base upper bounds, and equivalently for
the size of the matrix?  This would give us a better understanding of the
requirements you were trying to meet.  And then, could you even go so far
as to discuss clock speeds and numbers of processing and memory elements?
Just at a back of the envelope level of detail?

Adam Back wrote:
 The mocking tone of recent posts about Lucky's call seems quite
 misplaced given the checkered bias and questionable authority of the
 above conflicting claims we've seen quoted.

No, Lucky made a few big mistakes.  First, he invoked Ian Goldberg's
name as a source of the estimate, which was wrong.  Second, he presented
Nicko's estimate as being more authoritative than it actually was,
as Nicko makes clear here.  And third, he fostered panic by precipitously
revoking his key and widely promulgating his "sky is falling" message.

We wouldn't be in this situation of duelling bias and authority if
people would provide some minimal facts and figures rather than making
unsubstantiated claims.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Schneier on Bernstein factoring machine

2002-04-16 Thread Anonymous

Bruce Schneier writes in the April 15, 2002, CRYPTO-GRAM,
http://www.counterpane.com/crypto-gram-0204.html:

 But there's no reason to panic, or to dump existing systems.  I don't think 
 Bernstein's announcement has changed anything.  Businesses today could 
 reasonably be content with their 1024-bit keys, and military institutions 
 and those paranoid enough to fear from them should have upgraded years ago.

 To me, the big news in Lucky Green's announcement is not that he believes 
 that Bernstein's research is sufficiently worrisome as to warrant revoking 
 his 1024-bit keys; it's that, in 2002, he still has 1024-bit keys to revoke.

Does anyone else notice the contradiction in these two paragraphs?
First Bruce says that businesses can reasonably be content with 1024 bit
keys, then he appears shocked that Lucky Green still has a 1024 bit key?
Why is it so awful for Lucky to still have a key of this size, if 1024
bit keys are good enough to be reasonably content about?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: [Announce] Announcing a GnuPG plugin for Mozilla (Enigmail)

2002-03-23 Thread Anonymous User

  From: R. Saravanan [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Date: Wed, 20 Mar 2002 12:50:51 -0700
 
 Enigmail, a GnuPG plugin for Mozilla which has been under development
 for some time, has now reached a state of practical usability with the
 Mozilla 0.9.9 release. It allows you to send or receive encrypted mail
 using the Mozilla mailer and GPG. Enigmail is open source and dually
 licensed under GPL/MPL. You can download and install the software from
 the website http://enigmail.mozdev.org
 
 Enigmail is cross-platform like Mozilla, although binaries are supplied
 only for the Win32 and Linux-x86 platforms on the website.  At the moment
 there is no version of Enigmail available for Netscape 6.2 or earlier,
 which are based on much older versions of Mozilla.  There will be a
 version available for the next Netscape release, which is expected to be
 based on Mozilla 1.0.
 
 You may post enigmail-specific comments to the Enigmail
 newsgroup/mailing list at mozdev.org
 
 
 ___
 Gnupg-announce mailing list
 [EMAIL PROTECTED]
 http://lists.gnupg.org/mailman/listinfo/gnupg-announce

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: CFP: PKI research workshop

2002-01-07 Thread Anonymous

Russ Nelson writes:
 3. Cryptography, and therefore PKI, is meaningless unless you first
 define a threat model.  In all the messages with this Subject, I've
 only seen one person even mention a threat model.  Think about the
 varying threat models, and the type of cryptography one would propose
 to address them.  Even the most common instance of encryption,
 encrypted web forms for hiding credit card numbers, suffers from
 addressing a limited threat model.  There's a hell of a lot of known
 plaintext there.

It's not clear what you mean by the limited threat model in encrypting web
forms, but one correction is necessary: known plaintext is not an issue.

See the sci.crypt thread "Known plaintext considered harmless" from June,
2001 (available by advanced search at groups.google.com).  Especially note
the perceptive comments by David Wagner and David Hopwood.  There is no
need to be concerned that encrypted web forms contain known plaintext:
no plausible threat model can exploit that information.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]