Re: Enterprise Right Management vs. Traditional Encryption Tools

2007-05-14 Thread Jason Holt

On Wed, 9 May 2007, Ali, Saqib wrote:

What about DRM/ERM that uses TPM? With TPM the content is pretty much
tied to a machine (barring screen captures etc)

Will ERM/DRM be ineffective even with the use of TPM?

ERM/DRM/TPM are such poorly defined and implemented products that people have 
started referring to a "DRM fairy" whom people assume will wave her wand and 
solve whatever problem is at hand.  I used to try to draw out the mentioner's 
claims into a concrete proposal that everyone could objectively examine, but 
the conversation rarely progressed that far.  So now I think that, as with 
other crypto proposals, the onus should be on the proposer to clearly 
delineate what they're proposing and convince us that it's complete and 
correct, rather than us nodding our heads or lashing out at what we assume it 
to be.

So I guess the answer to your question is: we'd better assume that DRM+TPM 
will be ineffective until we've subjected a specific implementation of it to 
the same level of scrutiny we apply to other cryptosystems, and since DRM+TPM 
proposals tend to be much more complicated than other cryptosystems like SSL, 
that's going to take a very long time.

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

Re: Can you keep a secret? This encrypted drive can...

2006-11-06 Thread Jason Holt

On Sat, 4 Nov 2006, Ralf Senderek wrote:

On the unencrypted filesystem:

#  time dd if=/dev/zero of=cryptogram bs=1MB count=50
50+0 records in
50+0 records out
50000000 bytes (50 MB) copied, 0.216106 seconds, 231 MB/s

sys 0m0.252s

Unless you have a disk array in your laptop, that performance is an artifact 
of buffering.  Here are unbuffered and buffered numbers for my rather new 
desktop machine:

$ hdparm -t /dev/sda

 Timing buffered disk reads:  174 MB in  3.01 seconds =  57.79 MB/sec

$ hdparm -T /dev/sda

 Timing cached reads:   5188 MB in  2.00 seconds = 2595.82 MB/sec

The 25MB/sec number for your encrypted partition looks like it's probably 
right, though:

$ openssl speed aes-256-cbc
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-256 cbc     52071.66k    55008.98k    55609.83k    55984.13k    55776.36k


Re: Interesting bit of a quote

2006-07-16 Thread Jason Holt

On Fri, 14 Jul 2006, Travis H. wrote:

Absent other protections, one could simply write a new WORM media with
falsified information.

I can see two ways of dealing with this:

1) Some kind of physical authenticity, such as signing one's name on
the media as they are produced (this assumes the signer is not
corruptible), or applying a frangible difficult-to-duplicate seal of
some kind (this assumes access controls on the seals).
2) Some kind of hash chain covering the contents, combined with
publication of the hashes somewhere where they cannot be altered (e.g.
publish hash periodically in a classified ad in a newspaper).

My MS Thesis was on this topic:

If you store a value with a TTP (say, an auditor), and follow the protocol 
honestly, it's impossible to go back later and falsify records.  The symmetric 
version uses hash chains, and was invented several times before I came along.
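The symmetric hash-chain version can be sketched in a few lines of Python (a toy illustration, not the thesis code):

```python
import hashlib

def chain_digest(records, seed=b"\x00" * 32):
    """Fold each record into a running SHA-256 hash chain."""
    h = seed
    for rec in records:
        h = hashlib.sha256(h + rec).digest()
    return h

# The final digest is published somewhere hard to alter
# (e.g. a classified ad in a newspaper, as suggested above).
records = [b"entry 1", b"entry 2", b"entry 3"]
published = chain_digest(records)

# Any later falsification of a record changes the published value.
tampered = [b"entry 1", b"entry 2 (forged)", b"entry 3"]
assert chain_digest(tampered) != published
```

Because each link depends on every earlier record, rewriting history requires recomputing the chain, and the recomputed final value no longer matches the published one.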



Re: Use of TPM chip for RNG?

2006-06-30 Thread Jason Holt

On Thu, 29 Jun 2006, Hal Finney wrote:

A few weeks ago I asked for information on using the increasingly
prevalent built-in TPM chips in computers (especially laptops) as a
random number source.  I got some good advice and want to summarize the
information for the benefit of others.

Thanks for the useful summary!  For the sake of completeness, let me also add 
that RNGs in tamper-proof hardware are potentially rather controversial, since 
there are several known ways to produce output which looks very random to 
anyone who doesn't know some secret, but allows those who do to predict what 
future outputs will be.  I believe one straightforward way to do this would be 
to simply use a symmetric encryption function outputting random data blocks

r_i=Encrypt(key, r_(i-1))

If you don't know the secret key, the output will look at least somewhat 
random, but if you do, you can use any block to predict all subsequent and 
prior ones.  (This topic has been discussed in the literature, and my 
off-the-cuff example may not be particularly strong.)
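A toy Python illustration of that construction, using HMAC-SHA256 as a stand-in for the block cipher Encrypt (any keyed PRF shows the point):

```python
import hmac, hashlib

def next_block(key, prev):
    # Stand-in for Encrypt(key, r_{i-1}); real hardware would use a block cipher.
    return hmac.new(key, prev, hashlib.sha256).digest()

def rng_outputs(key, seed, n):
    out, r = [], seed
    for _ in range(n):
        r = next_block(key, r)
        out.append(r)
    return out

key = b"manufacturer backdoor key"   # hypothetical secret baked into the chip
stream = rng_outputs(key, b"\x00" * 32, 4)

# To anyone without the key the stream looks random, but a key holder can
# predict every subsequent output from any single observed block:
assert next_block(key, stream[1]) == stream[2]
```

The outputs pass casual randomness inspection, yet the manufacturer (or anyone who extracts the key) can reconstruct the entire future stream from one block.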

I believe it's a fair summary to say that hardware RNG is a neat and useful 
feature, but may be unsuitable for the sufficiently paranoid when it comes in 
a tamper-proof package.



Voice phishing

2006-06-29 Thread Jason Holt

Hi-tech fraudsters have begun using recorded telephone messages in a bid to 
trick users into handing over confidential account information. The tactic has 
been adopted as a variant of recently detected phishing attacks targeting 
customers of the Santa Barbara Bank & Trust.



Re: Linux RNG paper

2006-05-04 Thread Jason Holt

On Thu, 04 May 2006 18:14:09 +0200, markus reichelt [EMAIL PROTECTED]

Agreed; but regarding unix systems, I know of no crypto
implementation that does integrity checking. Not just de/encrypt the
data, but verify that the encrypted data has not been tampered with.

There's also ecryptfs:



Re: Paper summarizing new directions in protecting web users

2006-03-08 Thread Jason Holt

On Mon, 6 Mar 2006, Amir Herzberg wrote:

I've summarized the current directions that our group is working on
towards improving security for web users. I'll probably soon post it as
HTML, but I'm terribly busy and so far just posted it in eCrypt as PDF,
see at


Amir will also be appearing next month in a panel I'm moderating on the 
challenges of practical web security at NIST's PKI conference.  Some of the 
discussions I've seen on this list led to the creation of that panel -- if we 
as cryptographers sometimes have to wrangle over what's considered trustworthy 
website behavior, how are users ever supposed to cope?

The standard flyer for that conference follows:

*** NO ON-SITE REGISTRATION!  Last day to register: March 17 ***

5th Annual PKI RD Workshop at NIST in Gaithersburg, MD
Making Cryptography Easy to Use
April 4-6, 2006

Come join with experts from NIST, NIH, private industry and universities
around the world for our fifth workshop!

Scheduled topics include:

HAS JOHNNY LEARNT TO ENCRYPT BY NOW? Examining the troubled relationship
between a security solution and its users
Angela Sasse, University College London

-How Trust Had a Hole Blown In It.  The Case of X.509 Name Constraints
-Navigating Revocation through Eternal Loops and Land Mines
-Simplifying Credential Management through PAM and Online Certificate
-Identity Federation and Attribute-based Authorization through the Globus
Toolkit, Shibboleth, GridShib, and MyProxy
-PKI Interoperability by an Independent, Trusted Validation Authority
-Achieving Email Security Usability
-CAUDIT PKI Federation - A Higher Education Sector Wide Approach

-NIST Cryptographic Standards Status Report, Bill Burr, NIST
-Trust Infrastructure and DNSSEC Deployment, Allison Mankin, Consultant
-Integrating PKI and Kerberos, Jeffrey Altman, Secure Endpoints Inc.
-Enabling Revocation for Billions of Consumers, Kelvin Yiu, Microsoft

- Digital Signatures (Moderator: David Chadwick, University of Kent)
- Domain Keys Identified Mail (DKIM) (Moderator:  Barry Leiba, IBM)
- Browser Security User Interfaces: Why are web security decisions hard and
what can we do about it?
  (Moderator:  Jason Holt, Brigham Young University)
- Federal PKI Update (Moderator - Peter Alterman, National Institutes of
Health)
- Bridge-to-Bridge Interoperations (Moderator - Peter Alterman, National
Institutes of Health)

WORKS IN PROGRESS (WIP)  (Contact Krishna Sankar ([EMAIL PROTECTED]) if you
have additional WIP topics)
Potential topics:
-  CNRI handle system (brief overview)
-  International Grid Trust Federation

Complete agenda is available at


Re: EDP (entropy distribution protocol), userland PRNG design

2006-02-04 Thread Jason Holt

On Sat, 4 Feb 2006, Travis H. wrote:

Suppose that /dev/random is too slow (SHA-1 was never meant to
generate a lot of output) because one of these machines wishes to
generate a large file for use as a one-time pad*.  That leaves
distributing bits.

* /dev/random's output is limited by available entropy, not the speed of sha1. 
You want /dev/urandom instead.

* You're talking about a stream cipher, not an OTP, especially since an 
attacker could capture the pad's encrypted transmission over the network and 
would only need to break the cipher to get at the pad.

* It's dangerous to offhandedly propose stream ciphers, especially when we 
have some tried and tested ones, and it doesn't really make sense to use them 
as if they were OTPs, since then you get the benefits of neither

* Hash functions are comparably fast to ciphers anyway, and are plenty fast 
for the application you propose:

[EMAIL PROTECTED] ~$ openssl speed sha1
Doing sha1 for 3s on 16 size blocks: 1718543 sha1's in 2.99s

$ echo '1718543 20 * p' | dc
34370860

So sha1 generates about 34 Mbytes of digest output in those three seconds - 
over 11 Mbyte/sec even on worst-case 16-byte blocks - which is enough to 
saturate a 100-megabit ethernet link.
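That figure is easy to sanity-check without openssl; a rough Python sketch (timings will of course vary by machine):

```python
import hashlib, time

block = b"\x00" * 16          # worst case: tiny input blocks, as above
count = 200_000

start = time.perf_counter()
for _ in range(count):
    hashlib.sha1(block).digest()
elapsed = time.perf_counter() - start

# Each sha1 call yields a 20-byte digest.
rate_mb = count * 20 / elapsed / 1e6
print(f"about {rate_mb:.1f} MB of digest output per second")
```

This is only a ballpark benchmark (Python's call overhead dominates at 16-byte blocks), but it confirms the order of magnitude: hashes are not the bottleneck for pad generation.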



Re: crypto wiki -- good idea, bad idea?

2005-12-13 Thread Jason Holt

On Mon, 12 Dec 2005, Paul Hoffman wrote:

Or should we just stick to wikipedia?  Is it doing a satisfactory job?

Also check out the Cryptography Reader:

Matt Crypto set up an "article (to clean up) of the day", replete with a bar 
graph of how done he thinks each one is.

As to accuracy, there are several authors I respect who keep many of the 
crypto articles on their watchlists, so that we notice when people make 
changes.
I'm quite happy with a number of the pages in the reader, enough that I point 
my students to them and use the figures in my lecture slides.  I like the 
intersecting planes in the secret sharing article particularly:

of work. I proposed a few weeks ago (in the meta-discussion) to do it, but 
was concerned that doing so would step on toes and seem invasive. No one has 
responded to that, not even the people who flagged the article as needing 
cleanup.

An old wikipedia saying is "be bold in updating pages":



Re: another feature RNGs could provide

2005-12-13 Thread Jason Holt

On Mon, 12 Dec 2005, Travis H. wrote:

One thing I haven't seen from a PRNG or HWRNG library or device is an
unpredictable sequence which does not repeat; in other words, a
[cryptographically strong?] permutation.  This could be useful in all

Rich Schroeppel tells me his Hasty Pudding cipher can be used to create PRPs 
(pseudorandom permutations) of arbitrary size.  It even has the ability to let 
you define external functions to help define set membership (for sets which 
aren't just composed of the natural numbers).
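I don't have Hasty Pudding handy, but the generic trick for building a PRP over an arbitrary-size set is a Feistel network plus cycle walking; a toy Python sketch (not Hasty Pudding itself, and SHA-256 stands in for the round function):

```python
import hashlib

def _round(key, rnd, half, bits):
    # Keyed round function: hash of (key, round number, right half).
    data = key + bytes([rnd]) + half.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (1 << bits)

def _feistel(x, key, hb, rounds=4):
    # Balanced Feistel network: a permutation on [0, 2**(2*hb)).
    L, R = x >> hb, x & ((1 << hb) - 1)
    for rnd in range(rounds):
        L, R = R, L ^ _round(key, rnd, R, hb)
    return (L << hb) | R

def prp(x, key, n):
    """Pseudorandom permutation on [0, n) via cycle walking:
    re-apply the Feistel permutation until the output lands back in range."""
    hb = max(1, ((n - 1).bit_length() + 1) // 2)
    y = x
    while True:
        y = _feistel(y, key, hb)
        if y < n:
            return y

key = b"demo key"
outputs = [prp(i, key, 100) for i in range(100)]
assert sorted(outputs) == list(range(100))  # it really is a permutation
```

Cycle walking preserves bijectivity because iterating a permutation from an in-range point must eventually return an in-range point, and distinct inputs stay on distinct cycles.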



Web Browser Developers Work Together on Security

2005-11-30 Thread Jason Holt

 Core KDE developer George Staikos recently hosted a meeting of the security 
developers from the leading web browsers. The aim was to come up with future 
plans to combat the security risks posed by phishing, ageing encryption 
ciphers and inconsistent SSL Certificate practise. Read on for George's report 
of the plans that will become part of KDE 4's Konqueror and future versions of 
other web browsers.

In the past few years the Internet has seen a rapid growth in phishing 
attacks. There have been many attempts to mitigate these types of attack, but 
they rarely get at the root of the problem: fundamental flaws in Internet 
architecture and browser technology. Throughout this year I had the fortunate 
opportunity to participate in discussions with members of the Internet 
Explorer, Mozilla/FireFox, and Opera development teams with the goal of 
understanding and addressing some of these issues in a co-operative manner.

Our initial and primary focus is, and continues to be, addressing issues in 
PKI as implemented in our web browsers. This involves finding a way to make 
the information presented to the user more meaningful, easier to recognise, 
easier to understand, and perhaps most importantly, finding a way to make a 
distinction for high-impact sites (banks, payment services, auction sites, 
etc) while retaining the accessibility of SSL and identity for smaller sites.

In Toronto on Thursday November 17, on behalf of KDE and sponsored by my 
company Staikos Computing Services, I hosted a meeting of some of these 
developers. We shared the work we had done in recent months and discussed our 
approaches and strengths and weaknesses. It was a great experience, and the 
response seems to be that we all left feeling confident in our direction 
moving forward. There was strong support for the ideas proposed and I think 
we'll see many of them released in production browsers in the near future. I 
think we were pleasantly surprised to see elements of our own designs in each 
other's software, and it goes to show how powerful our co-operation can be.

The first topic and the easiest to agree upon is the weakening state of 
current crypto standards. With the availability of bot nets and massively 
distributed computing, current encryption standards are showing their age. 
Prompted by Opera, we are moving towards the removal of SSLv2 from our 
browsers. IE will disable SSLv2 in version 7 and it has been completely 
removed in the KDE 4 source tree already.

KDE will furthermore look to remove 40 and 56 bit ciphers, and we will 
continually work toward preferring and enforcing stronger ciphers as testing 
shows that site compatibility is not adversely affected. In addition, we will 
encourage CAs to move toward 2048-bit or stronger keys for all new roots.

These stronger cryptography rules help to protect users from malicious 
cracking attempts. From a non-technical perspective, we will aim to promote, 
encourage, and eventually enforce much stricter procedures for certificate 
signing authorities. Presently all CAs are considered equal in the user agent 
interface, irrespective of their credentials and practices. That is to say, 
they all simply get a padlock display when their issued certificate is 
validated. We believe that with a definition of a new strongly verified 
certificate with a special OID to distinguish it, we can give users a more 
prominent indicator of authentic high-profile sites, in contrast to the 
phishing sites that are becoming so prevalent today. This would be implemented 
with a significant and prominent user-interface indicator in addition to the 
present padlock. No existing certificates would see changes in the browser.

To explain what this will look like, I need to take a step back and explain 
the history of the Konqueror security UI. It was initially modeled after 
Netscape 4, displaying a closed golden padlock in the toolbar when an SSL 
session was initiated and the certificate verification process passed.  The 
toolbar is an awful place for this, but consistency is extremely important, 
and during the original development phase of KDE 2.0, this was the only easy 
way to implement what we needed. Eventually we added a mechanism to add icons 
to the status bar and made the status bar a permanent fixture in browser 
windows, preventing malicious sites from spoofing the browser chrome and 
making the security icon more obvious to the user. In the past year a padlock 
and yellow highlight were added to the location bar as an additional 
indication. This was primarily based on FireFox and Opera.

I was initially resistant to the idea of using colour to indicate security - 
especially the colour yellow!  However, the ideas we have discussed have been 
implemented by Microsoft in their IE7 address bar, and when I saw them in 
action I was sold.  I think we should implement Konqueror the same way for 
KDE4.  It involves the following steps:

Re: gonzo cryptography; how would you improve existing cryptosystems?

2005-11-07 Thread Jason Holt

On Fri, 4 Nov 2005, Travis H. wrote:

PS:  There's a paper on cryptanalyzing CFS on my homepage below.  I
got to successfully use classical cryptanalysis on a relatively modern
system!  That is a rare joy.  CFS really needs a re-write, there's no
real good alternatives for cross-platform filesystem encryption to my

Take a look at ecryptfs before rewriting cfs:



Hooking nym to wikipedia

2005-10-03 Thread Jason Holt

Thanks to everyone who has contributed feedback, cyphrpunk in particular. Here 
are my thoughts on connecting nym to wikipedia.  I'll take feedback here 
first, then approach the WikiMedia folks.

* I believe the best solution would be for wikipedia to do the following:

  - Run an SSL server (optionally using a self-signed cert) which requires
client certificates.  This is a 4-line addition to the httpd.conf.

  - Apache (already) automatically sets an environment variable identifying
the client certificate used.  MediaWiki would map this to a random state
variable equivalent to its IP identifier.

  - When admins wish to block an IP, they follow the usual procedure, which
has a special case for the special identifiers which adds the
corresponding cert to a CRL instead of modifying the IP blacklist.  The
client will no longer be able to connect to the SSL server.

  - Optionally, wikipedia can also send its list of perma-banned IPs to the
(externally operated, but wikipedia-specific) token server, which will
then refuse to serve those IPs.
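The cert-to-identifier mapping in the second bullet could be sketched like this (all names here are hypothetical; MediaWiki's real blocking code would differ):

```python
import hmac, hashlib

# Hypothetical server-side mapping: Apache exports the client cert in an
# environment variable (e.g. SSL_CLIENT_CERT); we derive a stable random-
# looking identifier from it, which admins can block like an IP.
SERVER_SECRET = b"site-local secret"   # assumption: kept private by the wiki

def pseudonym(client_cert_pem: str) -> str:
    digest = hmac.new(SERVER_SECRET, client_cert_pem.encode(), hashlib.sha256)
    return "nym:" + digest.hexdigest()[:16]

blocked = set()

def edit_allowed(cert_pem: str) -> bool:
    # Treat the pseudonym exactly like an IP in the existing block list.
    return pseudonym(cert_pem) not in blocked

cert = "-----BEGIN CERTIFICATE-----\n...demo...\n-----END CERTIFICATE-----"
assert edit_allowed(cert)
blocked.add(pseudonym(cert))       # admin "blocks the IP"
assert not edit_allowed(cert)
```

The HMAC keeps the displayed identifier unlinkable to the cert by outsiders while remaining stable across edits, which is what the IP column needs.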

* Alternatives to this approach involve someone else setting up such an SSL 
server as a reverse proxy for and communicating a special 
identifier to wikipedia along with the proxied data in some combination of 
HTTP headers and cookies. I quite like the simplicity of these approaches, 
which could also allow avoiding certificates entirely by allowing users to 
trade a token directly for a cookie.  But now the header/cookie is subject (on 
the proxy-wikipedia link in particular) to eavesdropping, forgery and all the 
other things SSL is designed to prevent.  So ideally, wikipedia would allow an 
SSL connection from the proxy, and might as well just accept the client certs 
or tokens directly.  Also, if we eliminate certs, tokens would then have to be 
kept around and treated as secrets in case the user needs to get cookies 
issued onto other browsers or refreshed when a browser chooses to delete the 
cookie.  Certs, OTOH, have a public half that can be passed around, and come 
in a standardized file format with browser-supported passphrase en/decryption.

* Incidentally, making my apache-ssl (1.3) server reverse-proxy (impersonate, 
essentially) is ridiculously simple.  In the httpd.conf:

  # Inside the IfModule mod_proxy.c block:
ProxyRequests Off
ProxyPass /
ProxyPassReverse /

And in the modules.conf:
  LoadModule proxy_module /usr/lib/apache/1.3/


(Side note to Damian Gerow: our mail servers refuse to talk to each other; my 
admin claims is reporting its IP as (an 
unroutable IP), which makes a validity test fail.  We'll have to find a side 
channel.)


Re: Hooking nym to wikipedia

2005-10-03 Thread Jason Holt

More thoughts regarding the tokens vs. certs decision, and also multi-use:

* Client certs are a pain to turn on and off.  If you select ask me every 
time before sending a client cert, you have to click half a dozen OKs per 
page.  (This could be mitigated by having Wikipedia only use the SSL server 
for edits, since they're not blocking article viewing anyway, just editing.) 
If you tell the browser to send the certificate automatically and then forget 
about it, other SSL sites can silently request it, which is particularly bad 
if you're not using tor just then.

* Using tokens directly at site login time avoids the client cert hassles. 
However, evil web servers could then collect tokens (nyms) for use at other 
sites, suggesting that each server should run its own token server.  But now 
each server has a (potentially short) list of client IPs, whereas a 
centralized token server would provide better concealment.  Obviously, if 
wikipedia is the only site that ever bothers to use nym, this is a moot point.

* Lack of forward secrecy is indeed an issue, since our metaphorical Chinese 
dissident must keep around her cert to continue using it, which if discovered 
links her with all her past activities.  This is a problem even if Wikipedia 
maps each client cert to a particular random value for public display, since 
the attackers can simply use the stolen cert to make an edit on wikipedia and 
then check to see if the identifier comes up the same.

If Wikipedia generates a new random ID for each edit, then attackers have to 
access Wikipedia internals to map the IDs back to the cert, but then, so do 
Wikipedia admins when they want to assess a user's pattern of (bad) behavior. 
Note that SSL does not (IIRC) encrypt certificates, so a passive network 
eavesdropper can associate client certs with the random IDs.  (Do the 
ephemeral modes hide the certs?)

A related approach that thwarts the network eavesdropper would be to issue a 
series of certificates which expire one per interval (hour/day/whatever, 
trading privacy against the hassle of managing lots of certs).  Then our 
dissident uses each cert in turn, securely deleting it after it expires.  The 
CA keeps a list recording all the certs issued to the same user, and when 
Wikipedia wishes to ban a user, the CA revokes all the unexpired certs for 
that user.  The CA also securely deletes expired certs from its lists, so that 
if compromised, it has merely the same list of certs found on the client 
machine, and is likewise devoid of any reference to certs used in prior 
transactions.

Of course, there are nifty cryptographic solutions to the problem of revoking 
repeat offenders without linking activities of good users.  Private 
Credentials and Idemix are the two best known examples, but both are 
complicated and patent-ridden.
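A minimal sketch of the interval-cert CA bookkeeping described above (class and method names are my own invention, not nym's):

```python
# The CA remembers which certs belong to a user only until they expire, so a
# later compromise of the CA reveals no more than the client's own cert store.

class IntervalCA:
    def __init__(self):
        self.issued = {}            # user -> list of (serial, expiry)
        self.revoked = set()

    def issue(self, user, serials_with_expiry):
        self.issued.setdefault(user, []).extend(serials_with_expiry)

    def revoke_user(self, user, now):
        # Ban: revoke every cert of this user that hasn't expired yet.
        for serial, expiry in self.issued.get(user, []):
            if expiry > now:
                self.revoked.add(serial)

    def expire(self, now):
        # Securely forget expired certs, erasing links to past activity.
        for user, certs in self.issued.items():
            self.issued[user] = [(s, e) for s, e in certs if e > now]

ca = IntervalCA()
ca.issue("dissident", [("cert-day1", 1), ("cert-day2", 2), ("cert-day3", 3)])
ca.expire(now=1)                    # day-1 cert vanishes from the CA's records
ca.revoke_user("dissident", now=1)  # a ban hits only the remaining certs
assert ca.revoked == {"cert-day2", "cert-day3"}
```

The interval length sets the privacy/hassle tradeoff: shorter intervals mean fewer linkable past actions but more certs to manage and delete.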



Re: Hooking nym to wikipedia (fwd)

2005-10-03 Thread Jason Holt

-- Forwarded message --
Date: Mon, 3 Oct 2005 08:32:44 -0400
From: Paul Syverson [EMAIL PROTECTED]
Subject: Re: Hooking nym to wikipedia

Hi Jason et al,

On Mon, Oct 03, 2005 at 11:48:48AM +, Jason Holt wrote:

More thoughts regarding the tokens vs. certs decision, and also multi-use:


A related approach that thwarts the network eavesdropper would be to issue
a series of certificates which expire one per interval (hour/day/whatever,
trading privacy against the hassle of managing lots of certs).  Then our
dissident uses each cert in turn, securely deleting it after it expires.
The CA keeps a list recording all the certs issued to the same user, and
when Wikipedia wishes to ban a user, the CA revokes all the unexpired certs
for that user.  The CA also securely deletes expired certs from its lists,
so that if compromised, it has merely the same list of certs found on the
client machine, and is likewise devoid of any reference to certs used in
prior transactions.

Of course, there are nifty cryptographic solutions to the problem of
revoking repeat offenders without linking activities of good users.
Private Credentials and Idemix are the two best known examples, but both
are complicated and patent-ridden.

You might want to have a look at our UST (Unlinkable Serial Transactions)

It was published in ACM TISSEC
(or .ps)

There was also an earlier version published at Financial Crypto.
(It lacks the proofs and some improvements to the protocols)
(or .ps)

1. I think it is much less complicated than the other things you raised,
   but of course has other tradeoffs.

2. The papers are it. There is no current code worth looking at.

3. Thanks for the reminder. It too is patented if not patent-ridden,
   but we should be able to cope with that. Basically you shouldn't put
   huge work in assuming that there are no encumbrances to address, but if
   you are interested given 1 and 2 after you look at it, let me
   know. I can then explain the issues regarding the patent situation.



Re: nym-0.2 released (fwd)

2005-10-02 Thread Jason Holt

On Sat, 1 Oct 2005, cyphrpunk wrote:

All these degrees of indirection look good on paper but are
problematic in practice.

As the great Ulysses said,

  Pete, the personal rancor reflected in that remark I don't intend to dignify
  with comment. However, I would like to address your attitude of hopeless
  negativism.  Consider the lilies of the g*dd*mn field...or h*ll, look at
  Delmar here as your paradigm of hope!

  [Pause] Delmar: Yeah, look at me.

Okay, so maybe there's no personal rancor, but I do detect some hopeless 
negativism.  Or perhaps it's unwarranted optimism that crypto-utopia will be 
here any moment now, flowing with milk and honey, ecash, infrastructure and 
multi-show zero-knowledge proofs.  Maybe I just need a disclaimer: "Warning: 
this product favors simplicity over crypto-idealism; not for use in Utopia." 
Did I mention that my code is Free and (AFAIK) unencumbered?

The reason I have separate token and cert servers is that I want to end up 
with a client cert that can be used in unmodified browsers and servers.  The 
certs don't have to have personal information in them, but with indirection we 
cheaply get the ability to enforce some sort of structure on the certs. Plus, 
I spent as much time as it took me to write *both releases of nym* just trying 
to get ahold of the actual digest in an X.509 cert that needs to be signed by 
the CA (in order to have the token server sign that instead of a random 
token).  That would have eliminated the separate token/cert steps, but 
required a really hideous issuing process and produced signatures whose form 
the CA could have no control over.  (Clients could get signatures on IOUs, 
delegated CA certs, whatever.)

(Side note to Steve Bellovin: having once again abandoned mortal combat with 
X.509, I retract my comment about the system not being broken...)

the security properties of the system. Hence it makes sense for all of them 
to be run by a single entity. There can of course be multiple independent 
such pseudonym services, each with its own policies.

Sure, there's no reason for one entity not to run all three services; we're 
only talking about 2 CGI scripts and a web proxy anyway.  Or, run a CA which 
serves multiple token servers, and issues certs with extensions specifying 
what kinds of tokens were spent to obtain the cert.  Then web servers get 
articulated limiting from a single CA's certs.

In particular it is not clear that the use of a CA and a client
certificate buys you anything. Why not skip that step and allow the
gateway proxy simply to use tokens as user identifiers? Misbehaving
users get their tokens blacklisted.

It buys not having to strap hacked-up code onto your web browser or server. 
Run the perl scripts once to get the cert, then use it with any browser and 
any server that knows about the CA.

There are two problems with providing client identifiers to Wikipedia.
The first is as discussed elsewhere, that making persistent pseudonyms
such as client identifiers (rather than pure certifications of
complaint-freeness) available to end services like Wikipedia hurts
privacy and is vulnerable to future exposure due to the lack of
forward secrecy.

Great, you guys work up an RFC, then an IETF draft, then some Idemix code with 
all the ZK proofs.  In the meantime, I'll be setting up my 349 lines of 
perl/shell code for whoever wants to use it.  Whoops, I forgot the 
IP-rationing code; 373 lines.

Actually, if all you want is complaint-free certifications, that's easy to put 
in the proxy; just make it serve up different identifiers each time and keep a 
table of which IDs map to which client certs.  Makes it harder for the 
wikipedia admins to see patterns of abuse, though.  They'd have to report each 
incident and let the proxy admin decide when the threshold is reached.

The second is that the necessary changes to the Wikipedia software are 
probably more extensive than they might sound. Wikipedia tags each 
(anonymous) edit with the IP address from which it came. This information 
is displayed on the history page and is used widely throughout the site. 
Changing Wikipedia to use some other kind of identifier is likely to have 
far-reaching ramifications. Unless you can provide this client identifier as 
a sort of virtual IP (fits in 32 bits) which you don't mind being displayed 
everywhere on the site (see objection 1), it is going to be expensive to 
implement on the wiki side.

There's that hopeless negativism again.  Do you want a real solution or not? 
Because I can think of at least 2 ways to solve that problem in a practical 
setting, and that's assuming that your assumption about MediaWiki being 
limited to 4-byte identifiers is even correct.

The simpler solution is to have the gateway proxy not be a hidden
service but to be a public service on the net which has its own exit
IP addresses. It would be a sort of virtual ISP which helps
anonymous users to gain the rights and privileges of 

nym-0.2.1 released (live demo available)

2005-10-02 Thread Jason Holt

I now have a live server available for those of you who want to play with a 
real nym tokenserver/CA/webserver.  This process constitutes running three 
scripts and installing the client cert.  Details in the README:

(Please be nice to

If enough people email me privately after trying it out, I'll proceed to the 
next phase, which will be working with the wikipedia guys to create a proxy 
server which should enable tor users to anonymously contribute but allow 
admins to block misbehaving IPs.



Re: nym-0.2 released (fwd)

2005-10-02 Thread Jason Holt

On Sun, 2 Oct 2005, cyphrpunk wrote:

1. Limiting token requests by IP doesn't work in today's internet. Most

Hopeless negativism.  I limit by IP because that's what Wikipedia is already 
doing.  Sure, hashcash would be easy to add, and I looked into it just last 
night.  Of course, as several have observed, hashcash also leads to 
whack-a-mole problems, and the abuser doesn't even have to be savvy enough to 
change IPs.

Why aren't digital credential systems more widespread? As has been suggested 
here and elsewhere at great length, it takes too much infrastructure. It's too 
easy when writing a security paper to call swaths of CAs into existence with 
the stroke of the pen.  To assume that any moment now, people will start 
carrying around digital driver's licenses and social security cards (issued in 
the researcher's pet format), which they'll be happy to show the local library 
in exchange for a digital library card.

That's why I'm so optimistic about nym. A reasonable number of Tor users, a 
technically inclined group of people on average, want to access a single major 
site. That site isn't selling ICBMs; they mostly want people to have access 
anyway. They have an imperfect rationing system based on IPs. The resource is 
cheap, the policy is simple, and the user needs to conceal a single attribute 
about herself. There's a simple mathematical solution that yields certificates 
which are already supported by existing software. That, my friend, is a 
problem we can solve.

I suggest a proof of work system a la hashcash. You don't have to use
that directly, just require the token request to be accompanied by a
value whose sha1 hash starts with say 32 bits of zeros (and record
those to avoid reuse).

I like the idea of requiring combinations of scarce resources. It's definitely 
on the wishlist for future releases.  Captchas could be integrated as well.
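The proof-of-work check cyphrpunk describes is easy to sketch.  Here's a 
minimal illustration (function names `mint` and `verify` are mine, and the 
difficulty is set to 12 bits rather than the suggested 32 so the search 
finishes in a fraction of a second):

```python
import hashlib
import itertools

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a digest."""
    return len(digest) * 8 - int.from_bytes(digest, "big").bit_length()

def mint(challenge: bytes, difficulty: int) -> bytes:
    """Client side: search for a suffix whose SHA-1 hash has the required
    number of leading zero bits (expected ~2**difficulty attempts)."""
    for counter in itertools.count():
        token = challenge + b":" + str(counter).encode()
        if leading_zero_bits(hashlib.sha1(token).digest()) >= difficulty:
            return token

def verify(token: bytes, difficulty: int, seen: set) -> bool:
    """Server side: check the work, and record the token to block reuse."""
    if token in seen:
        return False
    if leading_zero_bits(hashlib.sha1(token).digest()) < difficulty:
        return False
    seen.add(token)
    return True
```

A real server would persist the `seen` set and bind the challenge to the 
request, so work minted for one purpose can't be replayed for another.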

2. The token reuse detection in signcert.cgi is flawed. Leading zeros
can be added to r which will cause it to miss the saved value in the
database, while still producing the same rbinary value and so allowing
a token to be reused arbitrarily many times.

Thanks for pointing that out! Shouldn't be hard to fix.
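For what it's worth, the straightforward fix is to canonicalize r before the 
database lookup, so every hex spelling of the same integer maps to one key.  A 
sketch (names hypothetical, not taken from signcert.cgi):

```python
spent = set()

def canonical(r_hex: str) -> str:
    """Map every hex encoding of the same integer ("00ab12", "0AB12",
    "ab12") to a single database key, closing the leading-zeros loophole."""
    return format(int(r_hex, 16), "x")

def redeem(r_hex: str) -> bool:
    """Accept a token only the first time its integer value is seen."""
    key = canonical(r_hex)
    if key in spent:
        return False
    spent.add(key)
    return True
```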

3. signer.cgi attempts to test that the value being signed is  2^512.
This test is ineffective because the client is blinding his values. He
can get a signature on, say, the value 2, and you can't stop him.

4. Your token construction, sign(sha1(r)), is weak. sha1(r) is only
160 bits which could allow a smooth-value attack. This involves
getting signatures on all the small primes up to some limit k, then
looking for an r such that sha1(r) factors over those small primes
(i.e. is k-smooth). For k = 2^14 this requires getting less than 2000
signatures on small primes, and then approximately one in 2^40 160-bit
values will be smooth. With a few thousand more signatures the work
value drops even lower.

Oh, I think I see. The k-smooth sha1(r) values then become bonus tokens, so 
we should use a large enough h() that the result is too hard to factor (or, I 
suppose, we could make the client present properly PKCS-padded preimages).  I'll 
do some more reading, but I think that makes sense.  Thanks!
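For readers following along, the construction under discussion is Chaum-style 
RSA blinding: the client blinds h(r), the CA signs the blinded value without 
ever seeing r, and unblinding yields an ordinary RSA signature on h(r).  A toy 
sketch with deliberately tiny, insecure parameters (a real tokenserver would 
use a full-size key and a padded hash):

```python
import hashlib
import secrets
from math import gcd

# Toy RSA parameters -- absurdly small, chosen only so the blinding
# arithmetic is visible.  A real CA would use a 2048-bit modulus.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(r: bytes) -> int:
    # Stand-in for sha1(r), reduced mod n to fit the toy modulus.
    return int.from_bytes(hashlib.sha1(r).digest(), "big") % n

def blind(m: int):
    """Client: hide m under a random factor k^e before sending it."""
    while True:
        k = secrets.randbelow(n - 2) + 2
        if gcd(k, n) == 1:
            return (m * pow(k, e, n)) % n, k

def sign_blinded(c: int) -> int:
    """CA: sign without ever seeing m (or r)."""
    return pow(c, d, n)

def unblind(s_blind: int, k: int) -> int:
    """Client: divide out k, leaving an ordinary signature on m."""
    return (s_blind * pow(k, -1, n)) % n

m = h(b"my-nym-token")
c, k = blind(m)
sig = unblind(sign_blinded(c), k)
assert pow(sig, e, n) == m  # verifies like any RSA signature
```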



nym-0.2 released (fwd)

2005-09-30 Thread Jason Holt

-- Forwarded message --
Date: Sat, 1 Oct 2005 02:18:43 + (UTC)
From: Jason Holt [EMAIL PROTECTED]
Subject: nym-0.2 released

nym-0.2 is now available at:

My tor server is currently down, so I can't set up a public trial of this, but 
perhaps someone else will.  This release makes the following improvements:

* Tokens are now issued one-per-IP to clients via a token CGI script. Tokens 
are still blindly issued, so nobody (including the token issuer) can associate 
tokens with IP addresses.  The list of already-served IPs could be periodically 
removed, allowing users to obtain new pseudonyms on a regular basis.  (Abusers 
will then need to be re-blocked if they misbehave again.)

* A token can be used to obtain a signature on a client certificate from a 
separate CA CGI script (potentially on a different machine).  Tokens can only 
be spent to obtain one cert.  Code to make a CA, client certs and have the 
certs signed is included.

* The CA public key can be installed on a third web server (or proxy) to 
require that users have a valid client certificate.  Servers can maintain a 
blacklist of misbehaving client certs.  Misbehavers will then be unable to 
access the server until they obtain a new token and client cert (via a new IP).

My proposal for using this to enable tor users to play at Wikipedia is as 
follows:

1. Install a token server on a public IP.  The token server can optionally be 
provided Wikipedia's blocked-IP list and refuse to issue tokens to offending 
IPs.  Tor users use their real IP to obtain a blinded token.

2. Install a CA as a hidden service.  Tor users use their unblinded tokens to 
obtain a client certificate, which they install in their browser.

3. Install a wikipedia-gateway SSL web proxy (optionally also a hidden service) 
which checks client certs and communicates a client identifier to MediaWiki, 
which MediaWiki will use in place of the REMOTE_ADDR (client IP address) for 
connections from the proxy.  When a user misbehaves, Wikipedia admins block the 
client identifier just as they would have blocked an offending IP address.



Re: Pseudonymity for tor: nym-0.1 (fwd)

2005-09-29 Thread Jason Holt

On Thu, 29 Sep 2005, Ian G wrote:

Couple of points of clarification - you mean here
CA as certificate authority?  Normally I've seen
Mint as the term of art for the center in a
blinded token issuing system, and I'm wondering
what the relationship here is ... is this something
in the 1990 paper?

Actually, it was just the closest paper at hand for what I was trying to do, 
which is nymous accounts, just as you say.  So I probably shouldn't have 
referred to spending at all.

My thinking is that if all Wikipedia is trying to do is enforce a low barrier 
of pseudonymity (where we can shut off access to persons, based on a rough 
assumption of scarce IPs or email addresses), a trivial blind signature system 
should be easy to implement.  No certs, no roles, no CRLs, just a simple 
blindly issued token.  And in fact it took me about 4 hours (while the 
conversation on or-talk had been going on for several days...).

There are two problems with what I wrote. First, the original system is 
intended for cash rather than pseudonymity, and thus gives the spender a 
disincentive to duplicate other serial numbers (since you'd just be accused of 
double spending); this is a problem here, since if an attacker sees you use your 
token, he can get the same token signed for himself and besmirch your nym. And 
second, it would be a pain to glue my scripts into an existing authentication 
system.
Both problems are overcome if, instead of a random token, the client blinds 
the hash of an X.509 client cert.  Then the returned signature gives you a 
complete client cert you can plug into your web browser (and which web servers 
can easily demand).  Of course, you can put anything you want in the cert, 
since the servers know that my CA only certifies 1 bit of data about users 
(namely, that they only get one cert per scarce resource).  But the public key 
(and verification mechanisms built in to TLS) keeps abusers from being able to 
pretend they're other users, since they won't have the users' private keys.

The frustrating part about this is also the reason I'm getting out 
of the credential research business.  People have solved this problem 
before (although I didn't know of any Free solutions; ADDS and SOX are hard to 
google -- are they Free?).  I even came up with at least a proof of 
concept in an afternoon.  And yet the argument on the list went on and on, 
/without even an acknowledgement of my solution/.  Everybody just kept 
debating the definitions of anonymity and identity, and accusing each other of 
anarchy and tyranny.  We go round and round when we talk about authentication 
systems, but never get off the merry-go-round.

Contrast that with Debevec's work at Berkeley: a Ph.D. in 1996 on virtual 
cinematography, then The Matrix comes out in 1999 using his techniques and 
revolutionizes action movies.  Sure, graphics is easier because it doesn't 
require everyone to agree on an /infrastructure/, but then, neither does the 
tor/wikipedia problem.  I'm grateful for guys like Roger Dingledine and Phil 
Zimmermann who actually make a difference with a privacy system, but they seem 
to be the exception, rather than the rule.


So thanks for at least taking notice.



Pseudonymity for tor: nym-0.1 (fwd)

2005-09-28 Thread Jason Holt

-- Forwarded message --
Date: Thu, 29 Sep 2005 01:49:26 + (UTC)
From: Jason Holt [EMAIL PROTECTED]
Subject: Pseudonymity for tor: nym-0.1

Per the recent discussion regarding tor and wikipedia, I've hacked together an 
implementation of the basic system from Chaum, Fiat and Naor's 1990 
"Untraceable Electronic Cash" paper.  This system allows CAs to blindly issue 
tokens (or coins) which can then be spent elsewhere.  It runs in perl, and 
comprises a CA, nym-maker, client application and auth checker (for the 

The tarball is here:

Of course, it's useless at the moment since it gives out tokens 
indiscriminately (and probably has massive bugs), but if anyone actually cares 
about this idea, it will be (more or less) easy to do the following:

* Put up a sample CA and server that people can use (potentially as hidden 
services)

* Make the CA issue only one token per email address, or one token per IP 
address, one per computational puzzle, one for every $20 mailed in...

* Automatically expire CA keys and generate new ones on a regular basis (rather 
than bothering with CRLs)

* Instead of randomly generated tokens, have the CA sign an actual X.509 cert 
request, which will then become a perfectly valid X.509 cert useful as a 
client-side cert in unmodified browsers and web servers

* Create some sort of aid for maintaining server-side (or CA) blacklists of 
improperly behaving users

* Check to see if the protocol is actually still secure and properly 
implemented

Comments welcome.



Re: Clearing sensitive in-memory data in perl

2005-09-12 Thread Jason Holt

On Mon, 12 Sep 2005, Sidney Markowitz wrote:

Does anyone know of an open source crypto package written in perl that is 
careful to try to clear sensitive data structures before they are released to 
the garbage collector?


Securely deleting secrets is hard enough in C, much less high level languages. 
I've often considered trying to write a C-based module for secret storage, but 
it's problematic (although the Taint stuff looks promising) and to my 
knowledge has never been done.



Re: Query about hash function capability

2005-08-05 Thread Jason Holt

On Thu, 4 Aug 2005, Arash Partow wrote:

ie: input1 : abcdefg - h(abcdefg) = 123
   input2 : gabcdef - h(gabcdef) = 123
   input3 : fgabcde - h(fgabcde) = 123

I don't have a formal reference for you, but this seems intuitively correct to 
me: put the strings in a canonical form so that all equivalent strings reduce 
to the same string, then hash conventionally.  Eg., for rotation, the 
canonical form of a string is the rotation which gives the smallest value when 
the string is considered a binary number.  In other words, alphabetize all the 
rotations and then take the first one.
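A sketch of that canonicalization (assuming SHA-256 as the conventional hash; 
any hash works):

```python
import hashlib

def rotation_invariant_hash(s: str) -> str:
    """Hash the lexicographically least rotation, so every rotation of a
    string produces the same digest."""
    canonical = min(s[i:] + s[:i] for i in range(len(s)))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

For "abcdefg", "gabcdef" and "fgabcde" the minimum rotation is "abcdefg" in 
every case, so all three reduce to hashing the same string.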



Re: New Credit Card Scam (fwd)

2005-07-12 Thread Jason Holt

On Mon, 11 Jul 2005, Lance James wrote:
place to fend off these attacks. Soon phishers will just use the site itself 
to phish users, pushing away the dependency on tricking the user with a 
spoofed or mirrored site.


You dismiss too much with your "just".  They already do attack plenty of 
sites, but they also phish because it has a larger return on investment. 
Security is the process of iteratively strengthening the weakest links in the 
chain.


Re: /dev/random is probably not

2005-07-01 Thread Jason Holt

On Fri, 1 Jul 2005, Charles M. Hannum wrote:

Most implementations of /dev/random (or so-called entropy gathering daemons)
rely on disk I/O timings as a primary source of randomness.  This is based on
a CRYPTO '94 paper[1] that analyzed randomness from air turbulence inside the
drive case.

I was recently introduced to Don Davis and, being the sort of person who
rethinks everything, I began to question the correctness of this methodology.
While I have found no fault with the original analysis (and have not actually
considered it much), I have found three major problems with the way it is
implemented in current systems.  I have not written exploits for these 
problems.


You may be correct, but readers should also know that, at least in Linux:

  * All of these routines try to estimate how many bits of randomness a
  * particular randomness source.  They do this by keeping track of the
  * first and second order deltas of the event timings.

And then the inputs are run through a SHA hash before being released through 
/dev/random.
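The quoted heuristic is easy to mimic.  A toy rendition of the delta-based 
estimate (not the kernel's actual code; the 11-bit cap mirrors what Linux 
uses, but treat the details as illustrative):

```python
def entropy_credit(event_times):
    """Toy version of the quoted heuristic: track first- and second-order
    deltas of event timings and credit each event with the bit length of
    the smaller delta, capped at 11 bits as in Linux.  Regular, predictable
    timings earn almost nothing; jittery ones earn more."""
    total, last_t, last_d1 = 0, 0, 0
    for t in event_times:
        d1 = t - last_t        # first-order delta
        d2 = d1 - last_d1      # second-order delta
        last_t, last_d1 = t, d1
        total += min(min(abs(d1), abs(d2)).bit_length(), 11)
    return total
```

A perfectly periodic event stream earns almost no credit (its second-order 
deltas are zero), which is exactly the conservatism the comment describes.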


Re: encrypted tapes (was Re: Papers about Algorithm hiding ?)

2005-06-09 Thread Jason Holt

On Wed, 8 Jun 2005, David Wagner wrote:

That said, I don't see how adding an extra login page to click on helps.
If the front page is unencrypted, then a spoofed version of that page
can send you to the wrong place.  Sure, if users were to check SSL
certificates extremely carefully, they might be able to detect the funny
business -- but we know that users don't do this in practice.

Dan Bernstein has been warning of this risk for many years.

As far as I can tell, if the front page is unencrypted, and if the
attacker can mount DNS cache poisoning, pharming, or other web spoofing
attacks -- then you're hosed.  Did I get something wrong?

Well, yes.  TLS guarantees that you're talking to the website listed in the 
location bar.  Knowing what domain you *wanted* is up to you, and Dan handles 
that by suggesting that perhaps you have a paper brochure from the bank which 
lists their domain.

So, it's fine for an unencrypted front page to link to an HTTPS login page (or 
to post forms requesting anything sensitive there), as long as the domain in 
that link is what's printed in the brochure.  As Dan points out, 
examination of the certificate is generally pointless as long as it's signed 
by a trusted CA, since the attacker can get a perfectly valid cert for his own 
domain anyway.  The big question is just whether the domain asking 
for your account info corresponds with the organization you trust with it.

Of course, brochures aren't exactly hard to spoof (cf. Verisign's fraudulent 
domain renewal postcards).  And then there are the dozens of CAs your browser 
accepts, the CA staff who issue certs to random passersby, and 
international domain names that look identical to, er, national ones.  All 
those gotchas apply even in the correct implementation outlined by Dan.



Re: encrypted tapes

2005-06-09 Thread Jason Holt

On Wed, 8 Jun 2005, Perry E. Metzger wrote:

Dan Kaminsky [EMAIL PROTECTED] writes:

2) The cost in question is so small as to be unmeasurable.

Yes, because key management is easy or free.

In this case it is. As I've said, even having all your tapes for six
months at a time use the same key is better than putting the tapes in
the clear.

If you have no other choice, pick keys for the next five years,
changing every six months, print them on a piece of paper, and put it
in several safe deposit boxes. Hardcode the keys in the backup
scripts. When your building burns to the ground, you can get the tapes
back from Iron Mountain and the keys from the safe deposit box.


If in-transit attacks are the real problem, just email/fax/phone the key when 
you ship the tapes, and have them stick it in the box when it arrives.



Re: comments wanted on gbde

2005-03-13 Thread Jason Holt

On Sun, 6 Mar 2005, David Wagner wrote:

 However, I also believe it is possible -- and, perhaps, all too easy --
 to use GBDE in a way that will not provide adequate security.  My biggest
 fear is that safe usage is just hard enough that many users will end up
 being insecure.  GBDE uses a passphrase to encrypt the disk.  If you can
 guess the passphrase, you can decrypt the disk.  Now in theory, all we
 have to do is tell users to pick a passphrase with at least 80 bits of
 entropy.  However, in practice, this is a pipe dream.  As we know, users
 often pick passphrases with very little entropy.  Practices vary widely,
 but for many users, an estimate in the range of 10-40 bits is probably
 reasonable.  Consequently, dictionary attacks are a very serious threat.
 GBDE does not take any steps to defend against dictionary attacks.
 In GBDE, a passphrase with b bits of entropy can be broken with 2^b AES
 trial decryptions.  This means that people who are using passphrases
 with only 10-40 bits of entropy may be screwed -- their data will not
 be secure against any serious attack.  40-bit security is a very weak
 level of protection.

What would you consider an ideal key management solution for disk encryption,
then?  It seems like any passphrase-based system will be
dictionary-attackable, even with strengthening techniques like iteration,
which provide a linear increase in difficulty to both normal use and attack.  
Is that all you were asking for, or are you thinking of token (or network)  
based solutions which can handle better keys than the average human?  (Or, are
there other more exotic techniques which make passphrases harder to break?)
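To make the iteration point concrete: with an iterated KDF such as PBKDF2, 
raising the iteration count by a factor of N costs the legitimate user N times 
more work per unlock, but also multiplies each dictionary guess by N -- linear 
for both, exactly as described above.  A minimal sketch:

```python
import hashlib

def derive_key(passphrase: str, salt: bytes, iterations: int) -> bytes:
    """Iterated key derivation (PBKDF2-HMAC-SHA256): the user pays
    `iterations` hashes once per unlock; a dictionary attacker pays the
    same price for every candidate passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)
```

Going from 1,000 to 1,000,000 iterations buys roughly 10 bits of effective 
passphrase strength (log2 of 1000), which helps at the margin but can't rescue 
a 10-bit passphrase.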



MD2 is not one way (!?)

2004-09-08 Thread Jason Holt

The list of accepted papers for AsiaCrypt:

Includes one titled "The MD2 Hash Function is Not One-Way".  That's the first 
break I've heard of for MD2; the other breaks were for MD4 and MD5.  Anyone 
know more?



Re: How thorough are the hash breaks, anyway?

2004-08-26 Thread Jason Holt

On Thu, 26 Aug 2004, Trei, Peter wrote:
 While any weakness is a concern, and I'm not
 going to use any of the compromised algorithms
 in new systems, this type of break seems to be
 of limited utility. 
 It allows you (if you're fortunate) to modify a signed
 message and have the signature still check out. 
 However, if you don't know the original plaintext
 it does not seem to allow you construct a second
 message with the same hash.

The Wikipedia article on hashes is pretty good on this topic:

So far, we know that the affected hashes are not collision resistant.  They
may still be at least somewhat one way and second preimage resistant, in which
case systems which only require those properties might still be safe.  But any
system which specifies a secure hash in the general sense would have to come
under very close scrutiny to see if it makes any assumptions at all about
collision resistance.



Hiawatha's research

2004-06-16 Thread Jason Holt

Hiawatha's Research
June, 2004, released into the public domain.
Dedicated to Eric Rescorla, with apologies to Longfellow.
(E. Rescorla may be substituted for Hiawatha throughout.)

Hiawatha, academic,
he could start ten research papers,
start them with such mighty study,
that the last had left his printer,
ere the first deadline extended.

Then, to serve the greater purpose,
he would post these master papers,
post them with such speed and swiftness,
to gain feedback from his cohorts,
for their mighty learned comments.

from his printer, Hiawatha
took his publication paper,
sent it to the preprint archive,
sent it out to all the newsgroups

Then he waited, watching, listening,
for the erudite discussion,
for the kudos and the errors,
that the others soon would send him.

But in this my Hiawatha
was most cruelly mistaken,
for not one did read his papers,
not one got past the simple abstract.

Still did they all grab their keyboards,
writing with great flaming fury
of the folly of his venture,
of his paper's great misgiving.
Of his obvious omissions,
of his great misunderstandings,
of his utter lack of vision,
of his blatant plagiarism.

(This last point he found most galling,
found it really quite dumbfounding,
since for prior art, he'd listed
ninety-three related papers.)

Now the mighty Hiawatha,
in his office still is sitting,
contemplating on his research,
thinking on his chosen topic.
Wondering, in idle moments,
if he had not chosen wrongly,
the position he had taken
as a research paper author

And he thinks, my Hiawatha,
if he might not have been better
served by a more lowly station,
as a cashier at McDonalds,
as a washer at the car wash,
as a cleaner of the bathrooms.
Thus departs my Hiawatha.


Re: chaum's patent expiry? (Re: Brands' private credentials)

2004-05-25 Thread Jason Holt

On Sun, 9 May 2004, Adam Back wrote:

 Anyone have to hand the expiry date on Chaum's patent?  (Think it is
 in patent section of AC for example; perhaps HAC also).

I think it's June 2005.  Actually, now that you mention Chaum, I'll have to
look into blind signatures with the BF IBE (issuing is just a scalar*point
multiply on a curve).  That could be a way to get CA anonymity for hidden
credentials - just do vanilla cut and choose on blinded pseudonymous
credential strings, then use a client/server protocol with perfect forward
secrecy so he can't listen in.  Hm, I'll have to think it out.



Re: who goes 1st problem

2004-05-25 Thread Jason Holt

[Adam and I are taking this discussion off-list to spare your inboxes, but
this message seemed particularly relevant.  Perhaps we'll come back later if
we come up with anything we think will be of general interest.]


On Tue, 11 May 2004, Adam Back wrote:

 Anyway the who goes 1st problem definition has my interest piqued: I
 am thinking this would be a very practically useful network protocol
 for privacy, if one could find a patent-free end-2-end secure (no
 server trust), efficient solution.  Another desirable feature I think
 is to not use too much funky crypto, people are justifiably nervous
 about putting experimental crypto into standards, even if it has
 security proofs until some peer review has happened.

Agreed.  Ninghui Li's RSA OSBEs might be the answer; they're not quite as
elegant as the IBE version, but they work with blinded RSA signatures, and so
should be patent-free by next year, assuming Ninghui doesn't seek any patents.  
Section 4 of his PODC paper describes the RSA implementation.  He also has a
new paper which does neat things with commitments that I haven't wrapped my
mind around yet.

Actually, we might also consider contacting Dan Boneh at some point; he seems
to be interested in the proliferation of IBE, and might be sympathetic to the
needs of the IETF to have free standards, especially considering the exposure
it'd get for his system.

However, we need to define just what we need to accomplish.  Since my lab
works in trust negotiation, we think in terms of policies a lot, whereas SSL
just assumes you know what certs you want to send to whom.  But let's assume
the SSL model for simplicity.

The second issue, now that I think of it in this context, would be how you
actually get your certs to the other guy.  Hidden credentials, as Ninghui
pointed out, assume you have some means for creating the other guy's cert,
eg., a template (nym):Senior_Agent:(current year) producing

The OSBE paper, OTOH, assumes we're going to exchange our certificates, just
without the CA signatures.  Then I can send you messages you can only read if
you really do have a signature on that cert.  But I've always thought that was
problematic, since why would honest people bother to connect then use fake
certs?  The attacker doesn't need to see the signature - he believes you.  So
honest users would need to regularly give out fake certs so they can hide
their legit behavior among the fake connects.  Will Winsborough also suggests
this with the notion of ACK policies - you *always* give people something they
ask for, so they can't tell what you have and what you don't.

So maybe what we really want is some sort of fair exchange or something, where
I can show you my valid certs as you show me the valid certs of your own.  

If one side is guessable, we've discussed this sort of thing with hidden 
credentials:

E(Hi Bob, since you're a senior agent, you can see my agent credential:
'Alice:Denver field office agent (apprentice):2004,

E(Hi Bob, since you're a BYU alumnus, you can see my BYU credential:
'Alice:Senior:computer science:3.96 gpa:2004,


So that's an open problem.  But let's assume guessable-certs, since that's the
only way I know how to really keep certs and policies safe for now. The
OSBE-RSA math still works.  So we're good so far, except that the RSA approach
is interactive.  Section 4 says that in the RSA scheme, Alice sends her cert
/and blinded signature/ to Bob (which may or may not be bogus), and then Bob
can send back an encrypted message.  (In HC and IBE-OSBEs, Bob doesn't need
the blinded signature to use as a public key).

But maybe Robert's improved secret sharing scheme from the new HC paper can 
give us some ideas:

1. Alice sends blinded signatures for each of her relevant certs, not
revealing which signature goes with each cert, and not revealing the cert 
contents.

2. Bob generates the contents of each of Alice's certs relevant to his policy,
and simply generates each possible combination of hash-of-cert-contents and
blinded-signature.  One from each row will be a match-up between contents and
signature, and Alice will have to figure out which.  Unfortunately, this
requires n^2 multiplies and exponentiations.



Re: more hiddencredentials comments (Re: Brands' private credentials)

2004-05-25 Thread Jason Holt

On Mon, 10 May 2004, Adam Back wrote:
 OK that sounds like it should work.  Another approach that occurs is
 you could just take the plaintext, and encrypt it for the other
 attributes (which you don't have)?  It's usually not too challenging
 to make stuff deterministic and retain security.  Eg. any nonces,
 randomizing values can be taken from PRMG seeded with seed also sent
 in the msg.  Particularly that is much less constraining on the crypto
 system than what Bert-Jaap Koops had to do to get binding crypto to
 work with elgamal variant.
  In either case, though, you can't just trust that the server
  encrypted against patient OR doctor unless you have both creds and
  can verify that they each recover the secret.
 The above approach should fix that also right?

I don't quite get what you're suggesting.  Could you give a more concrete 
example?
  Hugo Krawczyk gave a great talk at Crypto about the going-first problem in
  IPSec, which is where I got the phrase.  He has a nice compromise in letting
  the user pick who goes first, but for some situations I think hidden
  credentials really would hit the spot.
 Unless it's significantly less efficient, I'd say use it all the time.

Well, I wouldn't complain. :)  (Although pairings are quite slow, on the order
of hundreds of milliseconds.)  Hilarie Orman presented it at an IETF meeting
to what was reportedly a lukewarm response, and they also raised the patent
issue.  Dan Boneh is sensitive to the issue of patented crypto, and was quite
considerate when I asked about it, but still has the same
vague statement in their FAQ about how they're not going to be evil with the
patent, so it's still up in the air whether IBE will be useful in IETF 
standards.


Brands' private credentials

2004-05-08 Thread Jason Holt

Here's what I remember from about a year ago about the current state of
private credentials.  That recollection comes with no warranties, express or 
implied.

Last I heard, Brands started a company called Credentica, which seems to only
have a placeholder page (although it does have an info@ address).

I also heard that his credential system was never implemented, but that might
be wrong now.  Anna Lysyanskaya and Jan Camenisch came up with a credential
system that I hear is based on Brands'. Anna's dissertation is online and
might give you some clues.  They might also have been working on an 
implementation.

I came up with a much simpler system that has many similar properties to
Brands', and even does some things that his doesn't.  It's much less developed
than the other systems, but we did write a Java implementation and published a
paper at WPES last year about it.  I feel a little presumptuous mentioning it
in the context of the other systems, which have a much more esteemed set of
authors and are much more developed, but I'm also pretty confident in its

Note that most anonymous credential systems are encumbered by patents.  The
implementation for my system is based on the Franklin/Boneh IBE which they
recently patented, although there's another IBE system which may not be
encumbered and which should also work as a basis for Hidden Credentials.

