Re: Client Certificate UI for Chrome?

2009-08-26 Thread Ben Laurie
On Mon, Aug 10, 2009 at 6:35 PM, Peter wrote:
 More generally, I can't see that implementing client-side certs gives you much
 of anything in return for the massive amount of effort required because the
 problem is a lack of server auth, not of client auth.  If I'm a phisher then I
 set up my bogus web site, get the user's certificate-based client auth
 message, throw it away, and report successful auth to the client.  The browser
 then displays some sort of indicator that the high-security certificate auth
 was successful, and the user can feel more confident than usual in entering
 their credit card details.  All you're doing is building even more substrate
 for phishing attacks.

 Without simultaneous mutual auth, which TLS-SRP/TLS-PSK provide but PKI doesn't,
 you're not getting any improvement, and potentially just making things worse
 by giving users a false sense of security.

I certainly agree that if the problem you are trying to solve is
server authentication, then client certs don't get you very far. I
find it hard to feel very surprised by this conclusion.

If the problem you are trying to solve is client authentication then
client certs have some obvious value.

That said, I do tend to agree that mutual auth is also a good avenue
to pursue, and the UI you describe fits right in with Chrome's UI in
other areas. Perhaps I'll give it a try.

The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to

a crypto puzzle about digital signatures and future compatibility

2009-08-26 Thread Zooko Wilcox-O'Hearn


My brother Nathan Wilcox asked me in private mail about protocol
versioning issues.  (He was inspired by this thread.)  After rambling
for a while about my theories and experiences with such things, I
remembered this vexing future-compatibility issue that I still don't
know how to solve.

Here is a puzzle for you (I don't know the answer).

Would it be a reasonable safety measure to deploy a Tahoe-LAFS v1.6,  
which used SHA-2 but didn't know how to use SHA-3 (because it hasn't  
been defined yet), and then later deploy a Tahoe-LAFS v1.7, which did  
know how to use SHA-3, and have v1.7 writers produce new files which  
v1.6 readers can integrity-check (using SHA-2) and also v1.7 readers  
can integrity-check (using SHA-3)?

So far this seems like an obvious win, but then you have to say what  
if, after we've deployed v1.7, someone posts a perl script to  
sci.crypt which produces second-pre-images for SHA-2 (but not  
SHA-3)?  Then writers who are using Tahoe-LAFS v1.7 really want to be  
able to *stop* producing files which v1.6 readers will trust based on  
SHA-2, right?  And also, even if that doesn't happen and SHA-2 is  
still believed to be reliable, then what if some sneaky v1.7 user  
hacks his v1.7 software to make two different files, sign one of them  
with SHA-2 and the other with SHA-3, and then put both hashes into a
single immutable file cap and give it to a v1.6 reader, asking him to
inspect the file and then pass it on to his trusted, v1.7-using, friend?


This at least suggests that the v1.7 readers need to check *all*  
hashes that are offered and raise an alarm if some verify and others  
don't.  Is that good enough?
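A minimal sketch of that "check all offered hashes" policy, in Python.
The cap structure and function names here are illustrative assumptions,
not Tahoe-LAFS's actual cap format; hashlib's sha3_256 stands in for
whatever SHA-3 ends up being:

```python
import hashlib

def make_cap(data: bytes) -> dict:
    """Writer (v1.7-style): embed one hash per algorithm it supports."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "sha3_256": hashlib.sha3_256(data).hexdigest(),
    }

def verify(cap: dict, data: bytes) -> bool:
    """Reader: verify every hash it recognizes; alarm on disagreement.

    A v1.6-style reader would only recognize "sha256"; a v1.7-style
    reader recognizes both and must not quietly ignore a mismatch.
    """
    known = {"sha256": hashlib.sha256, "sha3_256": hashlib.sha3_256}
    results = {
        name: fn(data).hexdigest() == cap[name]
        for name, fn in known.items()
        if name in cap
    }
    if not results:
        raise ValueError("no recognized hash in cap")
    if all(results.values()):
        return True
    if any(results.values()):
        # Some hashes verify and others don't: this is the sneaky-user
        # attack above, so raise an alarm rather than fail quietly.
        raise ValueError("hash disagreement: %r" % results)
    return False
```

Under this policy the two-file attack is caught by any v1.7 reader: the
SHA-2 hash verifies, the SHA-3 hash doesn't, and the mixed result is
treated as an alarm instead of a plain integrity failure.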
