The RIAA Succeeds Where the CypherPunks Failed

2003-12-18 Thread John Gilmore
From: [EMAIL PROTECTED]
Sent: Wednesday, December 17, 2003 12:29 PM
To: [EMAIL PROTECTED]
Subject: [NEC] #2.12: The RIAA Succeeds Where the CypherPunks Failed

NEC @ Shirky.com, a mailing list about Networks, Economics, and Culture

Published periodically / #2.12 / December 17, 2003
Subscribe at http://shirky.com/nec.html
   Archived at http://shirky.com
   Social Software weblog at http://corante.com/many/

In this issue:

  - Introduction
  - Essay: The RIAA Succeeds Where the Cypherpunks Failed
  Also at http://www.shirky.com/writings/riaa_encryption.html
  - Worth Reading:
 - GrokLaw: MVP of the SCO Wars
 - Tom Coates Talks With A Slashdot Troller

* Introduction ===

The end of another year. Thank you all for reading. See you in January.

-clay

* Essay ==

The RIAA Succeeds Where the Cypherpunks Failed
   http://www.shirky.com/writings/riaa_encryption.html

For years, the US Government has been terrified of losing surveillance
powers over digital communications generally, and one of their biggest
fears has been broad public adoption of encryption. If the average user
were to routinely encrypt their email, files, and instant messages,
whole swaths of public communication currently available to law
enforcement with a simple subpoena (at most) would become either
unreadable, or readable only at huge expense.

The first broad attempt by the Government to deflect general adoption of
encryption came 10 years ago, in the form of the Clipper Chip
[http://www.epic.org/crypto/clipper/]. The Clipper Chip was part of a
proposal for a secure digital phone that would only work if the
encryption keys were held in such a way that the Government could get to
them. With a pair of Clipper phones, users could make phone calls secure
from everyone except the Government.

Though opposition to Clipper by civil liberties groups was swift and
extreme [1], the thing that killed it was work by Matt Blaze, a Bell Labs
security researcher, showing that the phone's wiretap capabilities could
be easily defeated [2], allowing Clipper users to make calls that even
the Government couldn't decrypt. (Ironically, AT&T had designed the
phones originally, and had a contract to sell them before Blaze sank the
project.)

[1] http://cpsr.org/cpsr/privacy/crypto/clipper/clipper_nist_escrow_comments/
[2] http://www.interesting-people.org/archives/interesting-people/199406/msg6.html

The Government's failure to get Clipper implemented came at a heady
time for advocates of digital privacy -- the NSA was losing control of
cryptographic products, Phil Zimmermann had launched his Pretty Good
Privacy (PGP) email program, and the Cypherpunks, a merry band of
crypto-loving civil libertarians, were on the cover of the second issue
of Wired [http://www.wired.com/wired/archive/1.02/crypto.rebels.html].
The floodgates were opening, leading to...

...pretty much nothing. Even after the death of Clipper and the launch
of PGP, the Government discovered that for the most part, users didn't
_want_ to encrypt their communications. The single biggest barrier to
the spread of encryption has turned out to be not control but apathy.
Though business users encrypt sensitive data to hide it from one
another, the use of encryption to hide private communications from the
Government has been limited mainly to techno-libertarians and a small
criminal class.

The reason for this is the obvious one: the average user has little to
hide, and so hides little. As a result, 10 years on, e-mail is still
sent as plain text, files are almost universally unsecured, and so on.
The Cypherpunk fantasy of a culture that routinely hides both legal and
illegal activities from the state has been defeated by a giant
distributed veto. Until now.

It may be time to dust off that old issue of Wired, because the RIAA is
succeeding where 10 years of hectoring by the Cypherpunks failed. When
shutting down Napster turned out to have all the containing effects of
stomping on a tube of toothpaste, the RIAA switched to suing users
directly. This strategy has worked much better than shutting down
Napster did, convincing many users to stop using public file sharing
systems, and to delete MP3s from their hard drives. However, to sue
users, the RIAA had to serve subpoenas, and to do that, it had to get
the users' identities from their internet service providers.

Identifying those users has had a second effect, and that's to create a
real-world version of the scenario that drove the invention of
user-controlled encryption in the first place. Whitfield Diffie,
co-inventor of public key encryption
[http://www.webopedia.com/TERM/P/public_key_cryptography.html], the
strategy that underlies most of today's cryptographic products, saw the
problem as a version of "Who will guard the guardians?"

In any system where a user's

Re: Super-Encryption

2003-12-18 Thread Amir Herzberg
At 16:36 17/12/2003,  Matt wrote:
Ben, Amir, et al.

I see that cipher1 has no transparent value. Therefore, the XML-Encrypted
message (see http://www.w3.org/TR/xmlenc-core/) must transport
(1) symmetric_IV
(2) Sign_RSA_Receiver_PK(symmetric_Key)
(3) cipher
(4) Sign_RSA_Sender(SHA1(message))
This is still not very good. Comments:

a. In (2) you obviously mean Encrypt_RSA not Sign_RSA

b. In (4) you again send the hash of the plaintext in the clear. As I 
explained in my previous note, this is insecure: e.g., if the plaintext is 
taken from a reasonably sized set (which is common), an attacker can find 
the plaintext by hashing all the possible values. There are two fixes to 
this: sign the encrypted message and the public key (which we proved secure 
for most public-key cryptosystems, including RSA) or encrypt the signed 
message (which may be vulnerable to Krawczyk/Bleichenbacher's attacks).

c. Notice also (again as I wrote before...) that you don't achieve your 
stated goal of identifying the intended receiver. This is also solved if 
you sign the ciphertext and the receiver's public key, or simply sign the 
identity of the receiver.
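
For concreteness, here is a minimal sketch of the second fix in (b) --
signing the encrypted message together with the receiver's public key.
It assumes Python with the pyca/cryptography package and freshly
generated keys; it is an illustration only, not the exact scheme from
the paper:

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"example plaintext"

    # Encrypt under the receiver's public key (RSA-OAEP).
    ciphertext = receiver_key.public_key().encrypt(
        message,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # Sign the ciphertext together with the receiver's public key, so the
    # signature binds the intended receiver and never exposes a hash of
    # the plaintext.
    receiver_pub = receiver_key.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
    signature = sender_key.sign(
        ciphertext + receiver_pub,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())

    # The receiver verifies the signature over (ciphertext, own public key),
    # then decrypts; verify() raises InvalidSignature on failure.
    sender_key.public_key().verify(
        signature, ciphertext + receiver_pub,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    assert receiver_key.decrypt(
        ciphertext,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None)) == message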

Anyway, I am repeating myself, so...

Best regards,

Amir Herzberg
Computer Science Department, Bar Ilan University
Lectures: http://www.cs.biu.ac.il/~herzbea/book.html
Homepage: http://amir.herzberg.name
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


[Publicity-list]: DIMACS/PORTIA Workshop on Privacy-Preserving Data Mining

2003-12-18 Thread Linda Casals
*
  
 DIMACS/PORTIA Workshop on Privacy-Preserving Data Mining
  
 March 15 - 16, 2004
 DIMACS Center, Rutgers University, Piscataway, NJ

Organizers: 

   Cynthia Dwork, Microsoft, dwork at microsoft.com  
   Benny Pinkas, HP Labs, benny.pinkas at hp.com  
   Rebecca Wright, Stevens Institute of Technology, 
rwright at cs.stevens-tech.edu 

Presented under the auspices of the Special Focus on Communication
Security and Information Privacy, and the PORTIA project.



This workshop and working group will bring together researchers and
practitioners in cryptography, data mining, and other areas to discuss
privacy-preserving data mining. The workshop sessions on March 15 and
16, 2004 will consist of invited talks and discussion. March 17, 2004
will be a "working group" of invited participants to identify and
explore approaches that could serve as the basis for more
sophisticated algorithms and implementations than presently exist, and
to discuss directions for further research and collaboration.

Both the workshop and working group will investigate the construction
and exploitation of "private" databases, e.g.

 * Merging information from multiple data sets in a consistent,
   secure, efficient and privacy-preserving manner;
 * Sanitizing databases to permit privacy-preserving public study.

In a wide variety of applications it would be useful to be able to
gather information from several different data sets. The owners of
these data sets may not be willing, or legally able, to share their
complete data with each other. The ability to collaborate without
revealing information could be instrumental in fostering inter-agency
collaboration.

Particular topics of interest include:

* Secure multi-party computation. This is a very general and 
  well-studied paradigm that unfortunately has not been used in
  practice so far. We will investigate ways to make it more
  efficient and encourage its deployment.
* Statistical techniques such as data swapping,
  post-randomization, and perturbation.
* Articulation of different notions and aspects of privacy.
* Tradeoffs between privacy and accuracy.
* Architectures that facilitate private queries by a
  (semi-trusted) third party.
* Methods for handling different or incompatible formats, 
  and erroneous data. We will investigate ideas from dimension 
  reduction, clustering and searching strategy.

**
Registration Fees:

(Pre-registration deadline: March 8, 2004)

Regular Rate 
Preregister before deadline $120/day 
After preregistration deadline  $140/day

Reduced Rate*
Preregister before deadline $60/day
After preregistration deadline $70/day

Postdocs 
Preregister before deadline $10/day 
After preregistration deadline $15/day

DIMACS Postdocs $0 

Non-Local Graduate & Undergraduate students 
Preregister before deadline $5/day 
After preregistration deadline $10/day

Local Graduate & Undergraduate students $0
(Rutgers & Princeton) 

DIMACS partner institution employees** $0 

DIMACS long-term visitors*** $0 

Registration fee to be collected on site; cash, check, and VISA/Mastercard
accepted.

Our funding agencies require that we charge a registration fee during
the course of the workshop. Registration fees include participation in
the workshop, all workshop materials, breakfast, lunch, breaks and any
scheduled social events (if applicable).

* College/University faculty and employees of nonprofit and government
organizations will automatically receive the reduced rate. Other
participants may apply for a reduction of fees. They should email
their request for the reduced fee to the Workshop Coordinator at
[EMAIL PROTECTED]. Include your name, the institution you
work for, your job title, and a brief explanation of your
situation. All requests for reduced rates must be received before the
pre-registration deadline. You will be notified promptly of the
decision.

** Fees for employees of DIMACS partner institutions are
waived. DIMACS partner institutions are: Rutgers University, Princeton
University, AT&T Labs - Research, Bell Labs, NEC Laboratories America
and Telcordia Technologies. Fees for employees of DIMACS affiliate
members Avaya Labs, IBM Research and Microsoft Research are also
waived.

***DIMACS long-term visitors who are in residence at DIMACS for two or
more weeks inclusive of dates of workshop.

*
Information on participation, registration, accommodations, and travel 
can be found at:

http://dimacs.rutgers.edu/Workshops/Privacy/

   **PLEASE BE SURE TO PRE-REGISTER EARLY**



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]

Re: Difference between TCPA-Hardware and other forms of trust

2003-12-18 Thread Jerrold Leichter
| > | means that some entity is supposed to "trust" the kernel (what
| > | else?). If two entities, who do not completely trust each other, are
| > | supposed to both "trust" such a kernel, something very very fishy is
| > | going on.
| >
| > Why?  If I'm going to use a time-shared machine, I have to trust that the
| > OS will keep me protected from other users of the machine.  All the other
| > users have the same demands.  The owner of the machine has similar
| > demands.
|
| I used to run a commercial time-sharing mainframe in the 1970's.
| Jerrold's wrong.  The owner of the machine has desires (what he calls
| "demands") different than those of the users.
You're confusing policy with mechanism.  Both sides have the same notion of
what a trusted mechanism would be (not that it's clear, even today, that we
could actually implement such a thing).  They do, indeed, have different
demands on the policy.

| The users, for example, want to be charged fairly; the owner may not.
| We charged every user for their CPU time, but only for the fraction that
| they actually used.  In a given second, we might charge eight users
| for different parts of that fraction.
|
| Suppose we charged those eight users amounts that added up to 1.3
| seconds?  How would they know?  We'd increase our prices by 30%, in
| effect, by charging for 1.3 seconds of CPU for every one second that
| was really expended.  Each user would just assume that they'd gotten a
| larger fraction of the CPU than they expected.  If we were tricky
| enough, we'd do this in a way that never charged a single user for
| more than one second per second.  Two users would then have to collude
| to notice that they together had been charged for more than a second
| per second.
The system owner's policy is that each user be charged for *at least as much
time* as he used.

The individual user's policy is that they be charged for *no more than* the
time he used.

A system trusted by both sides would satisfy both constraints (or report that
they were unsatisfiable).  In this case, a trusted system would charge each
user for exactly the time he used!
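
A toy check of the two policies, just to make the point concrete
(Python, hypothetical numbers):

    def owner_accepts(charged, used):
        # Owner's policy: charge at least the time actually used.
        return charged >= used

    def user_accepts(charged, used):
        # User's policy: charge no more than the time actually used.
        return charged <= used

    # The only charge both sides accept is charged == used.
    assert owner_accepts(1.0, 1.0) and user_accepts(1.0, 1.0)
    assert not owner_accepts(0.8, 1.0)  # undercharging violates the owner's policy
    assert not user_accepts(1.3, 1.0)   # the 30% overcharge violates the user's policy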

| ...The users had to trust us to keep our accounting and pricing fair.
| System security mechanisms that kept one user's files from access by
| another could not do this.  It required actual trust, since the users
| didn't have access to the data required to check up on us (our entire
| billing logs, and our accounting software).
Exactly:  *You* had a system you trusted.  Your users had to rely on you.

The situation you describe is hardly new!  Auditing procedures have been
developed over hundreds of years in order to give someone like the user of
your system reason to believe that he is being treated correctly.  Like
everything else in the real world, they are imperfect.  But commerce couldn't
exist unless they worked "well enough".  (I once dealt with a vendor support
charge of around $50K.  It was sent with no documentation - just the bald
assertion that we owed that money.  It took weeks to get the vendor to send
their phone conversation and back-room work logs.  They contained all kinds of
interesting things - like a charge for a full 8-hour day to send a 5-line
email message.  Yes, *send* it - the logs showed it had been researched and
written the day before.  We eventually agreed to pay about half the billed
amount.  In another case, someone I know audited the books - kept by a very
large reseller - on which royalties for a re-sold piece of software were
based.  The reseller claimed that no one ever bothered to check their books - it
was a waste of time.  Well... there were *tons* of "random" errors - which
just by chance all happened to favor the reseller - who ended up writing a
large check.)

| TCPA is being built specifically at the behest of Hollywood [various
| evidence].
I'm not defending TCPA.  I'm saying that many of the attacks against it
miss the point:  That the instant you have a truly trustable secure kernel,
no matter how it came about, you've opened the door for exactly the kinds of
usage scenarios that you object to in TCPA.

Let's think about what a trusted kernel must provide.  All it does is enforce
an access control matrix in a trusted fashion:  There are a set of objects
each of which has an associated set of operations (read, write, delete,
execute, whatever), a set of subjects, and a mapping from an object/subject
pair to a subset of the operations on the object.  "Subjects" are not just
people - a big lesson of viruses is that I don't necessarily want to grant
all my access rights to every program I run.  When I run a compiler, it
should be able to write object files; when I run the mail program it probably
should not.  (You can identify individual programs with individual "virtual
users" and implement things setuid-style, but that's just an implementation
technique - what you're doing is making the program the subject.)
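
A minimal sketch of that enforcement in Python, with hypothetical
subjects, objects, and rights (note that programs, not just people,
appear as subjects):

    # Map (subject, object) -> set of permitted operations.
    matrix = {
        ("alice",    "notes.txt"): {"read", "write"},
        ("compiler", "prog.o"):    {"write"},
        ("mailer",   "notes.txt"): {"read"},   # the mail program may read, not write
    }

    def allowed(subject, obj, op):
        # The trusted kernel consults the matrix on every access.
        return op in matrix.get((subject, obj), set())

    assert allowed("compiler", "prog.o", "write")
    assert not allowed("mailer", "notes.txt", "write")
    assert not allowed("mailer", "prog.o", "read")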

Subjects have to be able to treat each other with suspicion. 

Re: Super-Encryption

2003-12-18 Thread
Ben, Amir, et al.

I see that cipher1 has no transparent value. Therefore, the XML-Encrypted 
message (see http://www.w3.org/TR/xmlenc-core/) must transport

(1) symmetric_IV
(2) Sign_RSA_Receiver_PK(symmetric_Key)
(3) cipher
(4) Sign_RSA_Sender(SHA1(message))

This is clearly more concise.  Is this an accurate representation of what 
both of you believe is required for the transport?

Thx,

-Matt






-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Super-Encryption

2003-12-18 Thread Amir Herzberg
Matt, in your note below you explained finally what you really want: a 
secure combination of encryption and signature. I explain below why your 
current scheme is insecure. There are simple secure designs. With Yitchak 
Gertner, a student, we recently proved security of one such practical 
design, which is essentially Sign_sender ( Encrypt_RSA_receiver (message), 
PublicKey_receiver). I've just uploaded the draft journal version to 
http://eprint.iacr.org/ so you should be able to see it in a day or so 
(comments very welcome as I want to finish and submit it to a journal soon). 
Your scheme uses the Sign-then-Encrypt order, which may not be secure 
for some (possibly weird) encryption schemes, although it may be secure for 
standard schemes such as RSA (but I don't think this has been proven yet). But 
as I mentioned, your specific scheme is definitely insecure; details below.

Best regards,

Amir Herzberg
Computer Science Department, Bar Ilan University
Lectures: http://www.cs.biu.ac.il/~herzbea/book.html
Homepage: http://amir.herzberg.name
At 16:25 15/12/2003,  Matt wrote:
Quoting Ben Laurie <[EMAIL PROTECTED]>:

> I don't see any value added by cipher1 - what's the point?

The message is encrypted, i.e., cipher1, then cipher1 is encrypted yielding
cipher2.
Since symmetric_key1 of cipher1 is RSA_Encrypt(sender's private key), access
to sender's public key can decrypt cipher1 (must be *this* sender).
So (as I said before...) here you are trying to authenticate the sender - 
using RSA to sign, not to encrypt. I again suggest you use the right 
terminology, i.e. call it a signature (since this is what you do!). Notice 
also that for secure usage, there may be some small differences in the 
preprocessing for encryption vs. for signature with RSA.

And I believe you didn't understand (and answer) Ben's point. I'll try to 
clarify. Here's the sequence you suggested:
(1) Encrypt message with symmetric key algorithm, i.e., cipher1
That's the part that appears unnecessary.

(2) RSA_Encrypt (SHA1(message) + symmetric key) with sender's RSA private key
This is where you actually mean RSA_Sign. And why sign `symmetric key`? 
Since the attacker will know this key, why not simply sign just `message` 
(or SHA1(message) if you want... normally we consider hashing as part of 
the signature process, i.e. I prefer to write Sign_RSA_SHA1(message)).

(3) Encrypt cipher1 with symmetric key algorithm, i.e., cipher2
So here one would expect you to simply encrypt the message, i.e. 
cipher2=AES_symmkey2(message); Ben asks (and I agree), what do you gain 
by encrypting cipher1 rather than the message itself? As I mentioned, one 
could hope for a gain in confidentiality, e.g. when using different 
encryption schemes, but not when the first key (`symmetric key`) is 
revealed as in your protocol...

(4) RSA_Encrypt (symmetric key2) with receiver's RSA public key
So steps 3 and 4 are `classical` hybrid encryption of cipher1. Hybrid 
encryption is a standard technique so I prefer to write it as one step, 
i.e. Encrypt_RSA_AES_receiverPK(message), or in your 
case,  Encrypt_RSA_AES_receiverPK(cipher1).
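
In code, that one hybrid step looks roughly like the following sketch
(Python with the pyca/cryptography package assumed; a fresh AES key
encrypts the payload and only that key is wrapped with RSA):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def encrypt_rsa_aes(receiver_public_key, message):
        sym_key = AESGCM.generate_key(bit_length=128)   # fresh key per message
        nonce = os.urandom(12)
        body = AESGCM(sym_key).encrypt(nonce, message, None)
        wrapped_key = receiver_public_key.encrypt(sym_key, OAEP)
        return wrapped_key, nonce, body                 # the whole bundle is the ciphertext

    def decrypt_rsa_aes(receiver_private_key, bundle):
        wrapped_key, nonce, body = bundle
        sym_key = receiver_private_key.decrypt(wrapped_key, OAEP)
        return AESGCM(sym_key).decrypt(nonce, body, None)

    receiver = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bundle = encrypt_rsa_aes(receiver.public_key(), b"hello")
    assert decrypt_rsa_aes(receiver, bundle) == b"hello"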

(5) Send super-encrypted message
Whoops! What does the message consist of? It is quite clear from the steps 
above and below (omitted) that it contains both 
cipher2=Encrypt_RSA_AES_receiverPK(cipher1) and also RSA_Sign_sender 
(SHA1(message) + symmetric key). Now this is a good example of why it is 
good NOT to confuse signature with encryption... since it is quite obvious 
that the attacker gets to see SHA1(message). This is not desirable. First 
of all, SHA1 (or any hash) is not required to preserve complete 
confidentiality (e.g. in the standard, the goal is only stated as being a 
one-way function). So an attacker may be able to learn something about the 
message from SHA1(message). As a practical example, suppose there is a very 
small set of possible messages, say `buy XXX for YYY' where XXX, YYY are 
from relatively small sets. Then the attacker can simply compute SHA1(m') 
for all possible messages m' and identify the message sent. So this is a 
real potential exposure of confidentiality.
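
That brute-force step is easy enough to write down; a minimal sketch in
Python (standard library only, with a made-up message space):

    import hashlib
    from itertools import product

    # The attacker knows the message template and the small candidate sets.
    stocks  = ["ACME", "GLOBEX", "INITECH"]
    amounts = range(1, 1001)

    observed = hashlib.sha1(b"buy GLOBEX for 742").digest()   # hash seen on the wire

    for stock, amount in product(stocks, amounts):
        candidate = "buy {} for {}".format(stock, amount).encode()
        if hashlib.sha1(candidate).digest() == observed:
            print("recovered plaintext:", candidate.decode())
            break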

Also, you wrote...

Since symmetric_key2 of cipher2 is RSA_Encrypt(receiver's public key), only
the receiver can decrypt cipher2.
As was pointed out to me, the process of decrypting cipher2 yields an
encrypted message, i.e., cipher1, that can be forwarded on behalf of the
original sender. This is not necessarily undesirable.  However, SHA1(message)
is to ensure that cipher1 has not been altered in transport.  Therefore, the
receiver knows three items.
(1) The sender who originated the message.
(2) The receiver is the intended receiver.
This (item 2) is not correct. The message may have been sent to Eve (and 
encrypted using Eve's public key), but then Eve re-encrypted symmetric key 
2 (or cipher1 itself) with the public key of another party, say Bob, and 
sent the message to Bob; Bob has no way of knowing that the receiver 
intended by the original sender was Eve.

(3) The message was not altered during transport.

Re: Super-Encryption

2003-12-18 Thread Ben Laurie
[EMAIL PROTECTED] wrote:

Quoting Ben Laurie <[EMAIL PROTECTED]>:

 

Yes, but you could know all this from cipher2 and RSA of SHA1(message), 
so I still don't see what value is added by cipher1.


Without cipher1, implying (iv1, RSA(SHA1(message) || key1)), it is impossible 
to determine the originator of the message.
Eh? If you have RSA(SHA1(message)) then decrypting that and checking the 
hash matches confirms the originator.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-18 Thread David Wagner
Jerrold Leichter  wrote:
>We've met the enemy, and he is us.  *Any* secure computing kernel that can do
>the kinds of things we want out of secure computing kernels, can also do the
>kinds of things we *don't* want out of secure computing kernels.

I don't understand why you say that.  You can build perfectly good
secure computing kernels that don't contain any support for remote
attestation.  It's all about who has control, isn't it?

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and other forms of trust

2003-12-18 Thread John Gilmore
> | means that some entity is supposed to "trust" the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | "trust" such a kernel, something very very fishy is going on.
>
> Why?  If I'm going to use a time-shared machine, I have to trust that the
> OS will keep me protected from other users of the machine.  All the other
> users have the same demands.  The owner of the machine has similar demands.

I used to run a commercial time-sharing mainframe in the 1970's.
Jerrold's wrong.  The owner of the machine has desires (what he calls
"demands") different than those of the users.

The users, for example, want to be charged fairly; the owner may not.
We charged every user for their CPU time, but only for the fraction that
they actually used.  In a given second, we might charge eight users
for different parts of that fraction.

Suppose we charged those eight users amounts that added up to 1.3
seconds?  How would they know?  We'd increase our prices by 30%, in
effect, by charging for 1.3 seconds of CPU for every one second that
was really expended.  Each user would just assume that they'd gotten a
larger fraction of the CPU than they expected.  If we were tricky
enough, we'd do this in a way that never charged a single user for
more than one second per second.  Two users would then have to collude
to notice that they together had been charged for more than a second
per second.

(Our CPU pricing was actually hard to manage as we shifted the load
among different mainframes that ran different applications at
different multiples of the speed of the previous mainframe.  E.g. our
Amdahl 470/V6 price for a CPU second might be 1.78x the price on an
IBM 370/158.  A user's bill might go up or down from running the same
calculation on the same data, based on whether their instruction
sequences ran more efficiently or less efficiently than average on the
new CPU.  And of course if our changed "average" price was slightly
different than the actual CPU performance, this provided a way to
cheat on our prices.

Our CPU accounting also changed when we improved the OS's timer
management, so it could record finer fractions of seconds.  On average,
this made the system fairer.  But your application might suffer, if its
pattern of context switches had been undercharged by the old algorithm.)

The users had to trust us to keep our accounting and pricing fair.
System security mechanisms that kept one user's files from access by
another could not do this.  It required actual trust, since the users
didn't have access to the data required to check up on us (our entire
billing logs, and our accounting software).

TCPA is being built specifically at the behest of Hollywood.  It is
built around protecting "content" from "subscribers" for the benefit
of a "service provider".  I know this because I read, and kept, all
the early public design documents, such as the white paper

  http://www.trustedcomputing.org/docs/TCPA_first_WP.pdf

(This is no longer available from the web site, but I have a copy.)
It says, on page 7-8:

  The following usage scenarios briefly illustrate the benefits of TCPA
  compliance.

  Scenario I: Remote Attestation

  TCPA remote attestation allows an application (the "challenger") to
  trust a remote platform. This trust is built by obtaining integrity
  metrics for the remote platform, securely storing these metrics and
  then ensuring that the reporting of the metrics is secure.

  For example, before making content available to a subscriber, it is
  likely that a service provider will need to know that the remote
  platform is trustworthy. The service provider's platform (the
  "challenger") queries the remote platform. During system boot, the
  challenged platform creates a cryptographic hash of the system BIOS,
  using an algorithm to create a statistically unique identifier for the
  platform. The integrity metrics are then stored.

  When it receives the query from the challenger, the remote platform
  responds by digitally signing and then sending the integrity
  metrics. The digital signature prevents tampering and allows the
  challenger to verify the signature. If the signature is verified, the
  challenger can then determine whether the identity metrics are
  trustworthy. If so, the challenger, in this case the service provider,
  can then deliver the content. It is important to note that the TCPA
  process does not make judgments regarding the integrity metrics. It
  merely reports the metrics and lets the challenger make the final
  decision regarding the trustworthiness of the remote platform.
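
A minimal sketch of that exchange, with a made-up "BIOS image" standing
in for the real integrity metrics (Python with the pyca/cryptography
package assumed; real TCPA attestation involves the TPM's identity keys
and PCRs, which are omitted here):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    platform_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # At boot, the challenged platform hashes its BIOS and stores the metric.
    bios_image = b"hypothetical BIOS bytes"
    h = hashes.Hash(hashes.SHA256())
    h.update(bios_image)
    integrity_metric = h.finalize()

    # On a challenge, the platform signs and returns the metric.
    signature = platform_key.sign(integrity_metric, PSS, hashes.SHA256())

    # The challenger verifies the signature (verify() raises on failure),
    # then applies its own policy: TCPA itself makes no judgment.
    platform_key.public_key().verify(signature, integrity_metric, PSS, hashes.SHA256())
    known_good_metrics = {integrity_metric}      # the service provider's policy list
    assert integrity_metric in known_good_metrics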
 
They eventually censored out all the sample application scenarios like
DRM'd online music, and ramped up the level of jargon significantly,
so that nobody reading it can tell what it's for any more.  Now all
the documents available at that site go on for pages and pages saying
things like "FIA_UAU.1 Timing of authentication. Hierarchical to: No
other compone

FC'04: Call for Participation

2003-12-18 Thread Hinde ten Berge
Financial Cryptography '04
9-12 February 2004
  Key West, Florida, USA


  Call for Participation

Financial Cryptography is the premier international
forum for education, exploration, and debate at the
heart of one theme: Money and trust in the digital
world. Dedicated to the relationship between cryptography
and data security and cutting-edge financial and payment
technologies and trends, the conference brings together
top data-security specialists and scientists with
economists, bankers, implementers, and policy makers. 

Financial Cryptography includes a program of invited
talks, academic presentations, technical demonstrations,
and panel discussions. These explore a range of topics
in their full technical and interdisciplinary complexity:
Emerging financial instruments and trends, legal
regulation of financial technologies and privacy issues,
encryption and authentication technologies, digital cash,
and smartcard payment systems -- among many others. 

The conference proceedings containing all accepted
submissions will be published in the Springer-Verlag
Lecture Notes in Computer Science (LNCS) series after
the conference. A pre-proceedings containing preliminary
versions of the papers will be distributed at the
conference.

More information on the invited speakers is available
on the web site, as well as the list of accepted papers
and the preliminary schedule (see below as well).

Registration for Financial Cryptography 2004 is now open;
details and online registration can be found at
http://fc04.ifca.ai along with information about
discounted hotel accommodation and travel.

Financial Cryptography is organized by the International
Financial Cryptography Association (IFCA). More
information can be obtained from the IFCA web site at
http://www.ifca.ai or by contacting the conference
general chair, Hinde ten Berge, at [EMAIL PROTECTED]



Financial Cryptography '04
   Preliminary Schedule

   
Sunday February 8

[tba] Registration and Welcome Reception


Monday February 9

08:45-09:00 Opening Remarks

09:00-10:00 Keynote Speaker: Jack Selby 

10:00-11:00 Keynote Speaker: Ron Rivest

11:00-11:30 Coffee Break

11:30-12:30 Loyalty and Micropayment Systems

Microcredits for Verifiable Foreign Service 
Provider Metering
Craig Gentry and Zulfikar Ramzan

A Privacy-Friendly Loyalty System Based on Discrete 
Logarithms over Elliptic Curves
Matthias Enzmann, Marc Fischlin, and Markus Schneider

12:30-14:00 Lunch

14:00-15:00 User Authentication

Addressing Online Dictionary Attacks with Login 
Histories and Humans-in-the-Loop
S. Stubblebine and P.C. van Oorschot

Call Center Customer Verification by Query-Directed 
Passwords
Lawrence O’Gorman, Amit Bagga, and John Bentley


Tuesday February 10


09:00-10:00 Keynote Speaker: Jacques Stern 
(Session Chair: Moti Yung)

10:00-11:00 Keynote Speaker: Simon Pugh 
(Session Chair: Moti Yung)

11:00-11:30 Coffee Break

11:30-12:30 E-voting 
(Session Chair: Helger Lipmaa)

The Vector-Ballot E-Voting Approach
Aggelos Kiayias and Moti Yung

Efficient Maximal Privacy in Voting and Anonymous 
Broadcast
Jens Groth

12:30-14:00 Lunch

14:00-15:00 Panel: Building Usable Security Systems
Moderator: Andrew Patrick

Usability and Acceptability of Biometric Security
Systems
Andrew Patrick, National Research Council of Canada

Risk Perception Failures in Computer Security
L. Jean Camp, Harvard University

Visualization Tools for Security Administrators
Bill Yurcik, NCSA, University of Illinois

20:00-21:00 General meeting

21:00-  Rump session


Wednesday February 11

09:00-10:00 Keynote Speaker: Jon Peha

10:00-10:30 Coffee Break

10:30-12:30 Auctions and Lotteries 
(Session Chair: Roger Dingledine)

Interleaving Cryptography and Mechanism Design: The
Case of Online Auctions
Edith Elkind and Helger Lipmaa

Secure Generalized Vickrey Auction without Third-Party 
Servers
Makoto Yokoo and Koutarou Suzuki

Electronic National Lotteries
Elisavet Konstantinou, Vasiliki Liagokou, Paul
Spirakis, Yannis C. Stamatiou, and Moti Yung

Identity-based Chameleon Hash and Applications
Giuseppe Ateniese and Breno de Medeiros

12:30-14:00 Lunch


Thursday February 12

09:00-10:30 Game Theoretic and Cryptographic Tools

Selecting Correlated Random Actions
Vanessa Teague

An Efficient and Usable Multi-Show Non-Transferable 
Anonymous Credential System
Pino Persiano and Ivan Visconti

The Ephemeral Pairing Problem
Jaap-Henk Hoepman

10:30-11:00 Coffee Break

11:00-13:00 Mix Networks and Anonymous Communications 
(Session Chair: Masayuki Abe)

Mi

Re: Super-Encryption

2003-12-18 Thread
Quoting Ben Laurie <[EMAIL PROTECTED]>:

 
> Yes, but you could know all this from cipher2 and RSA of SHA1(message), 
> so I still don't see what value is added by cipher1.

Without cipher1, implying (iv1, RSA(SHA1(message) || key1)), it is impossible 
to determine the originator of the message. Remember, I'm thinking in terms of 
XML-ENC where cipher1 is represented by 

..

and these scoping elements will be encrypted to the cipher2 result which is 
represented by another  and friends.

Thx,

-Matt

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Super-Encryption

2003-12-18 Thread Ben Laurie
[EMAIL PROTECTED] wrote:
Quoting Ben Laurie <[EMAIL PROTECTED]>:


I don't see any value added by cipher1 - what's the point?


The message is encrypted, i.e., cipher1, then cipher1 is encrypted yielding 
cipher2.

Since symmetric_key1 of cipher1 is RSA_Encrypt(sender's private key), access 
to sender's public key can decrypt cipher1 (must be *this* sender).

Since symmetric_key2 of cipher2 is RSA_Encrypt(receiver's public key), only 
the receiver can decrypt cipher2.

As was pointed out to me, the process of decrypting cipher2 yields an 
encrypted message, i.e., cipher1, that can be forwarded on behalf of the original 
sender. This is not necessarily undesirable.  However, SHA1(message) is to 
ensure that cipher1 has not been altered in transport.  Therefore, the receiver 
knows three items.
(1) The sender who originated the message.
(2) The receiver is the intended receiver.
(3) The message was not altered during transport.
Yes, but you could know all this from cipher2 and RSA of SHA1(message), 
so I still don't see what value is added by cipher1.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/
"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Financial Cryptography '04 - accepted papers

2003-12-18 Thread Ian Grigg
The Financial Cryptography 2004 conference has quietly (!)
announced their accepted papers:

http://fc04.ifca.ai/program.htm

Read on for the full programme...



   Accepted Papers

 

 The Ephemeral Pairing Problem
 Jaap-Henk Hoepman

 Efficient Maximal Privacy in Voting and Anonymous Broadcast
 Jens Groth

 Practical Anonymity for the Masses with MorphMix
 Marc Rennhard and Bernhard Plattner

 Call Center Customer Verification by Query-Directed Passwords
 Lawrence O'Gorman, Amit Bagga, and John Bentley

 A Privacy-Friendly Loyalty System Based on Discrete Logarithms
 over Elliptic Curves
 Matthias Enzmann, Marc Fischlin, and Markus Schneider

 Identity-based Chameleon Hash and Applications
 Giuseppe Ateniese and Breno de Medeiros

 Selecting Correlated Random Actions
 Vanessa Teague

 Addressing Online Dictionary Attacks with Login Histories
 and Humans-in-the-Loop
 S. Stubblebine and P.C. van Oorschot

 An Efficient and Usable Multi-Show Non-Transferable Anonymous
 Credential System
 Pino Persiano and Ivan Visconti

 Electronic National Lotteries
 Elisavet Konstantinou, Vasiliki Liagokou, Paul Spirakis,
 Yannis C. Stamatiou, and Moti Yung

 Mixminion: Strong Anonymity for Financial Cryptography
 Nick Mathewson and Roger Dingledine

 Interleaving Cryptography and Mechanism Design: The Case of
 Online Auctions
 Edith Elkind and Helger Lipmaa

 The Vector-Ballot E-Voting Approach
 Aggelos Kiayias and Moti Yung

 Microcredits for Verifiable Foreign Service Provider Metering
 Craig Gentry and Zulfikar Ramzan

 Stopping Timing Attacks in Low-Latency Mix-Based Systems
 Brian N. Levine, Michael K. Reiter, and Chenxi Wang

 Secure Generalized Vickrey Auction without Third-Party Servers
 Makoto Yokoo and Koutarou Suzuki

 Provable Unlinkability Against Traffic Analysis
 Ron Berman, Amos Fiat, and Amnon Ta-Shma

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Quantum Crypto

2003-12-18 Thread Perry E . Metzger

There have been more press releases about quantum crypto products
lately.

I will summarize my opinion simply -- even if they can do what is
advertised, they aren't very useful. They only provide link security,
and at extremely high cost. You can easily just run AES+HMAC on all
the bits crossing a line and get what is for all practical purposes
similar security, at a fraction of the price.
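
A minimal sketch of that kind of link protection, in encrypt-then-MAC
form (Python with the pyca/cryptography package for AES-CTR; keys here
are simply generated on the spot rather than negotiated):

    import hmac, hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    enc_key = os.urandom(32)   # AES-256 key for the link
    mac_key = os.urandom(32)   # separate key for the HMAC

    def protect(frame):
        iv = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
        body = iv + enc.update(frame) + enc.finalize()
        return body + hmac.new(mac_key, body, hashlib.sha256).digest()  # encrypt-then-MAC

    def unprotect(blob):
        body, tag = blob[:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(mac_key, body, hashlib.sha256).digest()):
            raise ValueError("frame failed authentication")
        iv, ct = body[:16], body[16:]
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
        return dec.update(ct) + dec.finalize()

    assert unprotect(protect(b"all the bits crossing the line")) == \
           b"all the bits crossing the line"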

The problem in security is not that we don't have crypto technologies
that are good enough -- our algorithms are fine. Our real problem is
in much more practical things like getting our software to high enough
assurance levels, architectural flaws in our systems, etc.

Thus, Quantum Crypto ends up being a very high priced way to solve
problems that we don't have.


Perry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example:secure computing kernel needed)

2003-12-18 Thread Ian Grigg
Stefan Lucks wrote:
> 
> On Mon, 15 Dec 2003, Jerrold Leichter wrote:
> 
> > | This is quite an advantage of smart cards.
> > However, this advantage is there only because there are so few smart cards,
> > and so few smart card enabled applications, around.
> 
> Strangely enough, Carl Ellison assumed that you would have at most one
> smart card, anyway. I'd rather think you are right, here.


In the late nineties, the smart card world
worked out that each smart card was so expensive,
it would only work if the issuer could do multiple
apps on each card.  That is, if they could share
the cost with different uses (or users).

This resulted in a big shift to multi-application
cards, and a lot of expensive reworking and a lot
of hype. All the smart card people were rushing
to present their own architecture;  all the user
banks were rushing to port their apps back into
these environments, and scratching their heads
to come up with App #2 (access control, loyalty...)

But what they seemed to miss was the stellar mental
gap between the smart card and the PC.  On the PC,
a user can install an app if she so chooses.  On
the smart card, it was a proprietary system, and
no smart card provider (the institution was the
real owner) was going to let anybody else play.

But, they believed and behaved as if others could
play.

As many suggest, the starting point is "who owns
the card/trusted module."  From that point, you
can predict what will happen.  If it is not the
end user, then a lot of expensive spinning will
eventually be thrown out.  Using an institutional
model, and today's understanding of PCs, it is
fairly easy to predict that TCPA hardware will
fail unless you can find someone to pay for the
entire rollout, and use it for one purpose with
which they are happy.  And, it won't take
over the market unless you can solve the issue
of un-TCPA'd hardware.

The cost of the barriers created is daunting, and
generally outweighs any security gained.  There
is a reason why the PC slayed the mainframe -
transaction costs.

Dragging this back to crypto.  Sun recently set up
their Java crypto environment ("JCE") with special
"signed providers."  They then added International
Policy Files.  Even though Sun (for free and
without complaint) gives away sigs on signing
keys and allows anyone to download and install
the International Policy Files, the process has
slowed to a crawl.  The Number 1 Bug in Java
crypto is the lack of the Sun Policy Files [2].
And, the resultant effect will of course be more
and more insecure apps...

Exerting control has huge costs.  There had
better be a huge reason [3].  This is rarely
the case;  and almost all security concerns can
be addressed by being more careful in software, or
fudged by bypassing the weaknesses with external
tokens.

iang

[1] We looked at putting gold currencies alongside
national currencies on smart cards in 1999 or so,
and found that not only could we not do this, we
could not even get access to the development kits,
the people, nor the smart cards.  It was an
institutional brick wall.  Part of the brick wall
was the doorway that permitted only institutional-
sized players through.  Nowadays, the gold currencies
do more transactions in a day than many smart card
monies did in a year.

[2] I don't blame Sun for this.  I blame us
cryptoplumbers for not recognising the bait
and switch.  At some point we'll ditch the
Sun architecture and get back to free crypto.

[3] Other examples of institutional plays that
stumbled over the transaction costs issue:
cellular phone apps that can now be installed
by the users, and CA-PKI model / HTTPS servers,
where some nominal security was available if
you paid for a cert.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
On Mon, 15 Dec 2003, Jerrold Leichter wrote:

> | This is quite an advantage of smart cards.
> However, this advantage is there only because there are so few smart cards,
> and so few smart card enabled applications, around.

Strangely enough, Carl Ellison assumed that you would have at most one
smart card, anyway. I'd rather think you are right, here.

> Really secure mail *should* use its own smart card.  When I do banking, do
> I have to remove my mail smart card?  Encryption of files on my PC should
> be based on a smart card.  Do I have to pull that one out?  Does that mean
> I can't look at my own records while I'm talking to my bank?  If I can only
> have one smart card in my PC at a time, does that mean I can *never* cut and
> paste between my own records and my on-line bank statement?  To access my
> files and my employer's email system, do I have to have to trust a single
> smart card to hold both sets of secrets?

I agree with you: A good compromise between security and convenience is an
issue, when you are changing between different smart cards. E.g., I could
imagine using the smart card *once* when logging into my bank account,
and then only needing it, perhaps, to authorise a money transfer.

This is a difficult user interface issue, but something we should be able
to solve.

One problem of TCPA is the opposite user interface issue -- the user has
lost control over what is going on. (And I believe this is the origin of
much of the resistance against TCPA.)

> Ultimately, to be useful a trusted kernel has to be multi-purpose, for
> exactly the same reason we want a general-purpose PC, not a whole bunch
> of fixed- function appliances.  Whether this multi-purpose kernel will
> be inside the PC, or a separate unit I can unplug and take with me, is a
> separate issue. Give the current model for PC's, a separate "key" is
> probably a better approach.

Agreed!

> However, there are already experiments with "PC in my pocket" designs:
> A small box with the CPU, memory, and disk, which can be connect to a
> small screen to replace a palmtop, or into a unit with a big screen, a
> keyboard, etc., to become my desktop.  Since that small box would have
> all my data, it might make sense for it to have the trusted kernel.
> (Of course, I probably want *some* part to be separate to render the box
> useless if stolen.)

Agreed again!

> | There is nothing wrong with the idea of a trusted kernel, but "trusted"
> | means that some entity is supposed to "trust" the kernel (what else?). If
> | two entities, who do not completely trust each other, are supposed to both
> | "trust" such a kernel, something very very fishy is going on.
> Why?  If I'm going to use a time-shared machine, I have to trust that the
> OS will keep me protected from other users of the machine.  All the other
> users have the same demands.  The owner of the machine has similar demands.

Actually, all users have to trust the owner (or rather the sysadmin).

The key words are "have to trust"! As you wrote somewhere below:

> Part of the issue with TCPA is that the providers of the kernel that we
> are all supposed to trust blindly are also going to be among those who
> will use it heavily.  Given who those producers are, that level of trust
> is unjustifiable.

I entirely agree with you!

> | More than ten years ago, Chaum and Pedersen

[...]

> |+---------------+     +---------+     +---------------+
> || Outside World | <-> | Your PC | <-> | TCPA-Observer |
> |+---------------+     +---------+     +---------------+
> |
> | TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and is
> | thus open to abuse.

> I remember looking at this paper when it first appeared, but the details
> have long faded.  It's an alternative mechanism for creating trust:
> Instead of trusting an open, independently-produced, verified
> implementation, it uses cryptography to construct walls around a
> proprietary, non-open implementation that you have no reason to trust.

Please re-read the paper!

First, it is not a mechanism for *creating* trust.

It is rather a trust-avoidance mechanism! You are not trusting the
observer at all, and you don't need to. The outsider is not trusting you
or your PC at all, and she doesn't need to.

Second, how on earth did you get the impression that Chaum/Pedersen is
about proprietary, non-open implementations?

Nothing stops people from producing independent and verified
implementations. As a matter of fact, since people can concentrate on
writing independent and verified implementations for the sofware on "Your
PC", providing an independently produced and verified implementation woud
be much much simpler than ever providing such an implementation for the
TCPA hardware.

Independent implementations of the observer's soft- and hardware are
simpler than in the case of TCPA as well, but this is a minor issue. You
don't need to trust the observer, so you don't care about independent a

RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Stefan Lucks
On Mon, 15 Dec 2003, Carl Ellison wrote:

[I wrote]
> > The first difference is obvious. You can plug in and later
> > remove a smart
> > card at your will, at the point of your choice. Thus, for
> > home banking with
> > bank X, you may use a smart card, for home banking with bank Y you
> > disconnect the smart card for X and use another one, and before online
> > gambling you make sure that none of your banking smart cards
> > is connected
> > to your PC. With TCPA, you have much less control over the
> > kind of stuff
> > you are using.
> >
> > This is quite an advantage of smart cards.
>
> It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
> money. Therefore, I am likely to have at most 1.

Strange! Currently, I have three smart cards in my wallet, which I did not
want to own and which I did never pay for. I never used any of them. They
are packaged with some ATM cards (using conventional magnetic-stripe
technology) and implement a "Geldkarte".  (For a couple of years now, German
banks have tried to push their customers into using the "Geldkarte" for
electronic money, by packaging the smart cards together with ATM cards.
For me, there are still too few dealers accepting the "Geldkarte", so I
never use it.)

OK, the banks are paying for the smart cards they give to their customers
for free. But they would not do so if these cards were expensive.

BTW, even if you have only one, a smart card has the advantage that you
can physically remove it.

> TCPA acts like my hardware crypto module and in that one hardware
> module, I am able to create and maintain as many private keys as I want.
> (The keys are wrapped by the TPM but stored on the disk - even backed up
> - so there's no storage limit.)

A smart card can do the same.

> Granted, you have to make sure that the S/W that switches among (selects)
> private keys for particular uses does so in a way you can trust.  The
> smartcard has the advantage of being a physical object.

Exactly!

> However, if you can't trust your system S/W, then how do you know that
> the system S/W was using a private key on the smart card you so happily
> plugged in rather than one of its own (the same one all the time)?

The point is that Your system is not supposed to prevent You from doing
anything I want you not to do! TCPA is supposed to lock You out of some
parts of Your system.


[...]
> If it were my machine, I would never do remote attestation.  With that
> one choice, I get to reap the personal advantages of the TPM while
> disabling its behaviors that you find objectionable (serving the outside
> master).

I am not sure whether I fully understand you. If you mean that TCPA
comes with the option to run a secure kernel where you (as the owner and
physical holder of the running machine) have full control over what the
system is and isn't doing -- OK, that is a nice thing. On the other
hand, we would not need a monster such as TCPA for this.


> Of course, you're throwing out a significant baby with that bath water.
> What if it's your secrets you want protected on my machine?  It doesn't
> have to be the RIAA's secrets.  Do you object just as much when it's
> your secrets?

Feel free to buy an overpriced second-hand car from me. I promise, I won't
object! ;-)

But seriously, with or without remote attestation, I would not consider my
secrets safe on your machine. If you can read (or play, in the case of
multimedia stuff) my secrets on your machine, I can't prevent you from
copying. TCPA (and remote attestation) can make copying less convenient
for you (and the RIAA is probably pleased with this), but can't stop a
determined adversary.

In other words, I don't think it will be too difficult to tamper with the
TCPA hardware, and to circumvent remote attestation.

Winning against the laws of information theory is not simpler than winning
against the laws of thermodynamics -- both are impossible!

> > Chaum and Pedersen  [...]

> > TCPA mixes "Your PC" and the "observer" into one "trusted kernel" and
> > is thus open to abuse.

Let me stress:

  -- Good security design means to separate duties.

  -- TCPA does exactly the opposite: It deliberately mixes duties.

> I haven't read that paper - will have to.  Thanks for the reference.
> However, when I do read it, what I will look for is the non-network
> channel between the observer and the PC.  Somehow, the observer needs to
> know that the PC has not been tampered with and needs to know, securely
> (through physical security) the state of that PC and its S/W.

It doesn't need to know, and it can't know anyway. All it needs to know is
whether it itself has been tampered with. Well... hm... it more or less
assumes it has not been tampered with (but so does the TCPA hardware).

> Where can I get one of those observers?  I want one now.

You get them at the same place where you get TCPA hardware: In a possible
future. ;-)


-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
 

RE: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Carl Ellison
Stefan,

I have to disagree on most of these points.

See below.

 - Carl

+--+
|Carl M. Ellison [EMAIL PROTECTED]  http://theworld.com/~cme |
|PGP: 75C5 1814 C3E3 AAA7 3F31  47B9 73F1 7E3C 96E7 2B71   |
+---Officer, arrest that man. He's whistling a copyrighted song.---+ 

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Stefan Lucks
> Sent: Monday, December 15, 2003 9:34 AM
> To: Jerrold Leichter
> Cc: Ian Grigg; Paul A.S. Ward; [EMAIL PROTECTED]
> Subject: Difference between TCPA-Hardware and a smart card 
> (was: example: secure computing kernel needed)
> 
> On Sun, 14 Dec 2003, Jerrold Leichter wrote:
> 
> > Which brings up the interesting question:  Just why are the 
> reactions to
> > TCPA so strong?  Is it because MS - who no one wants to trust - is
> > involved?  Is it just the pervasiveness:  Not everyone has 
> a smart card,
> > but if TCPA wins out, everyone will have this lump inside of their
> > machine.
> 
> There are two differences between TCPA-hardware and a smart card.
> 
> The first difference is obvious. You can plug in and later 
> remove a smart
> card at your will, at the point of your choice. Thus, for 
> home banking with
> bank X, you may use a smart card, for home banking with bank Y you
> disconnect the smart card for X and use another one, and before online
> gambling you make sure that none of your banking smart cards 
> is connected
> to your PC. With TCPA, you have much less control over the 
> kind of stuff
> you are using.
> 
> This is quite an advantage of smart cards.

It is an advantage for a TCPA-equipped platform, IMHO.  Smart cards cost
money. Therefore, I am likely to have at most 1.  TCPA acts like my hardware
crypto module and in that one hardware module, I am able to create and
maintain as many private keys as I want.  (The keys are wrapped by the TPM
but stored on the disk - even backed up - so there's no storage limit.)
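
A minimal sketch of that wrapping idea: the key blobs live on disk, but
none of them is usable without a wrapping key that never leaves the
module (Python with the pyca/cryptography package; an AES-GCM key stands
in here for the TPM-internal key):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    tpm_wrapping_key = AESGCM.generate_key(bit_length=256)   # never leaves the module

    def wrap(private_key_blob):
        nonce = os.urandom(12)
        return nonce + AESGCM(tpm_wrapping_key).encrypt(nonce, private_key_blob, None)

    def unwrap(wrapped):
        nonce, ct = wrapped[:12], wrapped[12:]
        return AESGCM(tpm_wrapping_key).decrypt(nonce, ct, None)

    # Any number of wrapped keys can sit on disk or in backups.
    secrets_on_disk = [wrap(os.urandom(32)) for _ in range(5)]
    assert len(unwrap(secrets_on_disk[0])) == 32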

Granted, you have to make sure that the S/W that switches among (selects)
private keys for particular uses does so in a way you can trust.  The
smartcard has the advantage of being a physical object. However, if you
can't trust your system S/W, then how do you know that the system S/W was
using a private key on the smart card you so happily plugged in rather than
one of its own (the same one all the time)?

> 
> The second point is perhaps less obvious, but may be more important.
> Usually, *your* PC hard- and software is supposed to protect *your*
> assets and satisfy *your* security requirements. The 
> "trusted" hardware
> add-on in TCPA is supposed to protect an *outsider's* assets 
> and satisfy
> the *outsider's* security needs -- from you.
> 
> A TCPA-"enhanced" PC is thus the servant of two masters -- 
> your servant
> and the outsider's. Since your hardware connects to the 
> outsider directly,
> you can never be sure whether it works *against* you by giving the
> outsider more information about you than it should (from your point if
> view).

TCPA includes two different things: wrapping or "sealing" of secrets -
something in service to you (and the thing I invoked in the previous
disagreement) - and remote attestation.  You do not need to do remote
attestation to take advantage of the TPM.  If it were my machine, I would
never do remote attestation.  With that one choice, I get to reap the
personal advantages of the TPM while disabling its behaviors that you find
objectionable (serving the outside master).

Of course, you're throwing out a significant baby with that bath water.
What if it's your secrets you want protected on my machine?  It doesn't have
to be the RIAA's secrets.  Do you object just as much when it's your
secrets?

> 
> There is nothing wrong with the idea of a trusted kernel, but 
> "trusted"
> means that some entity is supposed to "trust" the kernel 
> (what else?). If
> two entities, who do not completely trust each other, are 
> supposed to both
> "trust" such a kernel, something very very fishy is going on.
> 
> 
> Can we do better?
> 
> More than ten years ago, Chaum and Pedersen presented a great 
> idea how to
> do such things without potentially compromising your 
> security. Bringing
> their ideas into the context of TCPA, things should look like in the
> following picture
> 
>+---------------+     +---------+     +---------------+
>| Outside World | <-> | Your PC | <-> | TCPA-Observer |
>+---------------+     +---------+     +---------------+
> 
> So you can trust "your PC" (possibly with a trusted kernel 
> ... trusted by
> you). And an outsider can trust the observer.
> 
> The point is, the outside world does not directly talk to the 
> observer!
> 
> Chaum and Pedersen (and some more recent authors) defined protocols to
> satisfy the outsider's security needs without giving the outsider any
> chance to learn more about you 

Re: Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)

2003-12-18 Thread Pat Farrell
At 07:02 PM 12/15/2003 -0500, Jerrold Leichter wrote:
However, this advantage is there only because there are so few smart cards,
and so few smart card enabled applications, around.
A software-only, networked smart card would solve the
chicken-and-egg problem. One such solution is
Tamper resistant method and apparatus, [Ellison], USPTO 6,073,237
(Do a patent number search at http://www.uspto.gov/patft/index.html)
Carl invented this as an alternative to Smartcards back in the SET
development days.
Pat

Pat Farrell [EMAIL PROTECTED]
http://www.pfarrell.com
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]