Re: Client Certificate UI for Chrome?

2009-08-11 Thread Peter Gutmann
James A. Donald jam...@echeque.com writes:

For password-authenticated key agreement such as TLS-SRP or TLS-PSK to work, 
login has to be in the chrome.

Sure, but that's a relatively tractable UI problem (and see the comment below 
on Camino).  Certificates on the other hand are an apparently intractable 
business, commercial, user education, programming, social, and technical 
problem.  I'd much rather try and solve the former than the latter.

The problem with password auth is that no browser (with the exception of 
Camino) has made even the most basic attempt to do the UI for this properly.  
In all cases the browser pops up a dialog box, unconnected to the underlying 
operation or web page, that says "Gimme your password" in one way or another. 
This could be coming from anywhere: the browser, Javascript on the web page, 
another web page, who knows where.  But since everyone knows that passwords 
are insecure, there's no point in expending any effort to try and make them 
secure, and that's been the status quo for fifteen years.

What Camino does (and it's been a while since I played with it, so I'll qualify 
that with what I hope it still does) is roll the password-entry box down out 
of the browser menu bar in a circular motion that's both hard to spoof and 
that unmistakably ties the credential-entry request both to the web page that 
it's associated with and to the browser rather than being some floating popup 
coming from who knows where or what.  This can no doubt be nitpicked, but it's 
better than any other browser (that I've seen) does.

More generally, I can't see that implementing client-side certs gives you much 
of anything in return for the massive amount of effort required because the 
problem is a lack of server auth, not of client auth.  If I'm a phisher then I 
set up my bogus web site, get the user's certificate-based client auth 
message, throw it away, and report successful auth to the client.  The browser 
then displays some sort of indicator that the high-security certificate auth 
was successful, and the user can feel more confident than usual in entering 
their credit card details.  All you're doing is building even more substrate 
for phishing attacks.

Without simultaneous mutual auth, which TLS-SRP/TLS-PSK provide but PKI 
doesn't, you're not getting any improvement, and you're potentially just 
making things worse by giving users a false sense of security.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


RE: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Jason Resch
Zooko Wilcox-O'Hearn wrote:

 [cross-posted to tahoe-...@allmydata.org and cryptogra...@metzdowd.com]

 Folks:

 It doesn't look like I'm going to get time to write a long post about 
 this bundle of issues, comparing Cleversafe with Tahoe-LAFS (both use 
 erasure coding and encryption, and the encryption and key-management 
 part differs), and arguing against the ill-advised Fear, Uncertainty, 
 and Doubt that the Cleversafe folks have posted.  So, I'm going to 
 try to throw out a few short pieces which hopefully each make sense.

 First, the most important issue in all of this is the one that my 
 programming partner Brian Warner already thoroughly addressed in [1] 
 (see also the reply by Jason Resch [2]).  That is the issue of access 
 control, which is intertwined with the issues of key management.  The 
 other issues are cryptographic details which are important to get 
 right, but the access control and key management issues are the ones 
 that directly impact every user and that make or break the security 
 and usefulness of the system.

 Second, the Cleversafe documents seem to indicate that the security 
 of their system does not rely on encryption, but it does.  The data 
 in Cleversafe is encrypted with AES-256 before being erasure-coded 
 and each share stored on a different server (exactly the same as in 
 Tahoe-LAFS).  If AES-256 is crackable, then a storage server can 
 learn information about the file (exactly as in Tahoe-LAFS).  The 
 difference is that Cleversafe also stores the decryption key on the 
 storage servers, encoded in such a way that  any K of the storage 
 servers must cooperate to recover it.  In contrast, Tahoe-LAFS 
 manages the decryption key separately. 

You have stated how Cleversafe manages the key, but you have not provided any 
details regarding how Tahoe-LAFS manages the decryption key.  In your 
documentation it is stated that many of your users choose to store the 
capability (containing the key) for their root file on your data storage 
servers.  I would think that this results in less security than Cleversafe's 
approach, because our servers enforce authentication and access controls.

 This added step of including 
 a secret-shared copy of the decryption key on the storage servers 
 does not make the data less vulnerable to weaknesses in AES-256, as 
 their documents claim.  (If anything, it makes it more vulnerable, 
 but probably it has no effect and it is just as vulnerable to 
 weaknesses in AES-256 as Tahoe-LAFS is.)

I agree.  I should also note that the use of AES-256 or any other cipher is a 
configuration parameter for our generalized transformation algorithm, which 
can also support stream ciphers.



 Third, I don't understand why Cleversafe documents claim that public 
 key cryptosystems whose security is based on math are more likely 
 to fall to future advances in cryptanalysis.  I think most 
 cryptographers have the opposite belief -- that encryption based on 
 bit-twiddling such as block ciphers or stream ciphers is much more 
 likely to fall to future cryptanalysis.  Certainly the history of 
 modern cryptography seems to fit with this -- of the original crop of 
 public key cryptosystems founded on a math problem, some are still 
 regarded as secure today (RSA, DH, McEliece), but there has been a 
 long succession of symmetric crypto primitives based on bit twiddling 
 which have then turned out to be insecure.  (Including, ominously 
 enough, AES-256, which was regarded as a gold standard until a few 
 months ago.)

Symmetric ciphers frequently break a small piece at a time, reducing the 
number of bits of protection below what would be expected for the given key 
length.  If an asymmetric algorithm were to break (due to an efficient 
solution to factoring or discrete logarithms), those algorithms would fail 
utterly: no key length could be considered secure.  This of course has not 
happened yet, but it remains a possibility unless it is someday proven that no 
efficient solution exists.  Even if math does not provide a path to breaking 
asymmetric ciphers, physics does, by way of quantum computing.

Hundreds of symmetric ciphers have been devised, and as weaknesses are found 
in currently used symmetric ciphers it is easy to migrate to other well-vetted 
algorithms.  Asymmetric ciphers are in short supply, and depend on discovering 
trapdoor functions in math, so a break in them would offer fewer exit 
strategies.


 Fourth, it seems like the same access control/key management model 
 that Cleversafe currently offers could be achieved by encrypting the 
 data with a random AES key and then using secret sharing to split the 
 key and store one share of the key with each server.  I *think* that 
 this would have the same cryptographic properties as the current 
 Cleversafe approach of using an All-Or-Nothing-Transform followed by 
 erasure coding.  Both would qualify as computational secret-sharing 
 schemes as opposed to information-theoretic secret-sharing schemes.
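The construction Zooko describes (encrypt with a random key, then split the key with threshold secret sharing, one share per server) can be sketched as follows.  This is a toy Shamir split over a prime field chosen by me for illustration; a real system should use a vetted library.

```python
# Sketch: encrypt with a random key, split the key k-of-n with Shamir
# secret sharing, one share per storage server. Illustrative only.
import secrets

P = 2**127 - 1  # a prime large enough to hold a 16-byte key

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares such that any k recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(P)          # stand-in for a random AES key
shares = split(key, k=3, n=5)
assert recover(shares[:3]) == key   # any 3 of the 5 shares suffice
assert recover(shares[2:5]) == key
```

Fewer than k shares reveal nothing about the key in the information-theoretic sense, which is exactly the property under discussion in this thread.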

RE: cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Jason Resch
Zooko Wilcox-O'Hearn wrote:

 [dropping tahoe-dev from Cc:]

 On Thursday, 2009-08-06, at 2:52, Ben Laurie wrote:

  Zooko Wilcox-O'Hearn wrote:
  I don't think there is any basis to the claims that Cleversafe 
  makes that their erasure-coding (Information Dispersal)-based 
  system is fundamentally safer
 ...
  Surely this is fundamental to threshold secret sharing - until you 
  reach the threshold, you have not reduced the cost of an attack?

 I'm sorry, I don't understand your sentence.  Cleversafe isn't using 
 threshold secret sharing -- it is using All-Or-Nothing-Transform 
 (built out of AES-256) followed by Reed-Solomon erasure-coding.

I would define that combination as a threshold secret-sharing scheme, noting 
of course, as you said below, that it is computationally secure as opposed to 
Shamir's information-theoretically secure scheme.

 The 
 resulting combination is a computationally-secure (not information-
 theoretically-secure) secret-sharing scheme.  The Cleversafe 
 documentation doesn't use these terms and is not precise about this, 
 but it seems to claim that their scheme has security that is somehow 
 better than the mere computational security that encryption typically 
 offers.

 Oh wait, now I understand your sentence.  "You" in your sentence is 
 the attacker.  Yes, an information-theoretically-secure secret- 
 sharing scheme does have that property.  Cleversafe's scheme hasn't.


Recalling what the original poster said:
Surely this is fundamental to threshold secret sharing - until you 
reach the threshold, you have not reduced the cost of an attack?

Cleversafe's method does have this property: the difficulty of breaking the 
random transformation key does not decrease with the number of slices an 
attacker gets.  Though the difficulty is not infinite (as is the case with an 
information-theoretically secure scheme), it does remain fixed until the 
threshold is reached.

Jason



brute force physics Was: cleversafe...

2009-08-11 Thread Alexander Klimov
On Sun, 9 Aug 2009, Jerry Leichter wrote:
 Since people do keep bringing up Moore's Law in an attempt to justify
 larger keys or systems stronger than the cryptography, it's worth
 keeping in mind that we are approaching fairly deep physical limits.
 I wrote about this on this list quite a while back.  If current
 physical theories are even approximately correct, there are limits to
 how many bit flips (which would encompass all possible binary
 operations) can occur in a fixed volume of space-time.

A problem with this reasoning is that the physical world and the usual
digital computers have an exponential simulation gap (it is known at
least in one direction: to simulate N entangled particles on a digital
computer one needs computation exponential in N). This can be
considered a reason to suspect that the physical world is
non-polynomially faster than the usual computers (probably even to an
extent that all NP computations can be simulated).

While it is possible to use the physical world to simulate usual computers
in the straightforward way (namely by using voltage levels of a
circuit to represent separate bits), it is not clear that this is the
best way to do computations. For example, if the purpose of our
computation is to simulate the physical world, it can be better to use
a direct physical-to-physical mapping instead of physical-to-usual
followed by usual-to-physical: analog computers, such as wind tunnels,
are still in use.

I am not aware of any plausible argument why a brute-force search in
general (a quintessence of the NP class, by the way) or a key search
against any particular algorithm cannot be implemented in a direct way
significantly faster than in the indirect way, that is, NP-to-physical
instead of NP-to-usual followed by usual-to-physical. All the fuss
about quantum computing is exactly because people believe that a
different mapping (not through usual computers) can be more efficient
(if I understand correctly, right now neither the class of algorithms
that can be sped up this way is understood, nor do quantum computers
of practical capacity exist).

 All the protocols and standards out there calling for AES-256 - it's
 obviously better than AES-128 because after all 256 is *twice as
 large* as 128! - were just a bunch of nonsense.  And, perhaps,
 dangerous nonsense.

I see the situation in a positive way: the recent AES attacks
stress the fact that key management should be done
correctly; in particular, keys should be derived through a KDF (not
a simple xor) and must be authenticated. With this attack in
hand it is much easier for us now to say why one should not use
K to encrypt messages of one type and K+1 for another type, or
why it is a bad idea to encrypt a key in CTR mode and store the
result without a MAC. I doubt it is possible to find any
professionally designed protocol or standard that becomes weak
due to the recent discovery.

Of course, if AES-256 is used (say, for hashing) with the input as
the key, then nobody should be surprised that the version that
takes 256 bits at a time is weaker than the version that spends
comparable time processing only 128 bits.
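The K/K+1 point above can be made concrete: instead of deriving a second key by incrementing the first, derive independent subkeys with a KDF such as HKDF (RFC 5869), which can be expressed with the standard library alone.  A sketch, not a vetted implementation:

```python
# Deriving per-purpose keys with HKDF (RFC 5869) rather than K+1.
# Related keys like K and K+1 are what related-key attacks (such as
# the 2009 AES-256 results) exploit; HKDF outputs look independent.
import hashlib
import hmac

def hkdf(master: bytes, info: bytes, length: int = 32,
         salt: bytes = b"") -> bytes:
    # Extract: concentrate the master key's entropy into a PRK.
    prk = hmac.new(salt or b"\x00" * 32, master, hashlib.sha256).digest()
    # Expand: stretch the PRK into `length` bytes bound to `info`.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"\x01" * 32
k_enc = hkdf(master, b"encrypt")
k_mac = hkdf(master, b"authenticate")
assert k_enc != k_mac                     # distinct, unrelated subkeys
assert k_enc == hkdf(master, b"encrypt")  # deterministic per context
```

The `info` parameter binds each subkey to its purpose, which is precisely the discipline the recent attacks argue for.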

-- 
Regards,
ASK



RE: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Jason Resch
james hughes wrote:

 On Aug 6, 2009, at 1:52 AM, Ben Laurie wrote:

  Zooko Wilcox-O'Hearn wrote:
  I don't think there is any basis to the claims that Cleversafe makes
  that their erasure-coding (Information Dispersal)-based system is
  fundamentally safer, e.g. these claims from [3]: "a malicious party
  cannot recreate data from a slice, or two, or three, no matter what
  the advances in processing power. ... Maybe encryption alone is 'good
  enough' in some cases now - but Dispersal is 'good always' and
  represents the future."
 
  Surely this is fundamental to threshold secret sharing - until you
  reach the threshold, you have not reduced the cost of an attack?

 Until you reach the threshold, you do not have the information to 
 attack. It becomes information-theoretically secure.

With a secret-sharing scheme such as Shamir's you have information-theoretic 
security.  With the All-or-Nothing Transform and dispersal, the distinction is 
that there is only computational security.  The practical difference is that 
though 2^-256 is very close to 0, it is not 0, so the possibility remains that 
with sufficient computational power useful data could be obtained with less 
than a threshold number of slices.  Doing so is as hard as breaking the 
symmetric cipher used in the transformation.
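The AONT-plus-dispersal structure under discussion can be sketched in the spirit of Rivest's package transform.  Note the substitutions: SHA-256 in counter mode stands in for the AES-256 cipher Cleversafe actually uses, and the function names are mine; this shows the shape of the construction, not their implementation.

```python
# Sketch of an All-or-Nothing Transform (Rivest's package transform),
# using SHA-256 in counter mode as a stand-in cipher. Without *all*
# output blocks, the random key -- and hence the data -- is
# computationally out of reach, which is the "fixed difficulty below
# the threshold" property discussed above.
import hashlib
import secrets

BLOCK = 32  # SHA-256 output size

def prf(key: bytes, counter: int) -> bytes:
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

def aont_encode(data: bytes) -> list[bytes]:
    key = secrets.token_bytes(BLOCK)  # fresh random key per message
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    out = [bytes(a ^ b for a, b in zip(m, prf(key, i)))
           for i, m in enumerate(blocks)]
    # Final block hides the key under a hash of every ciphertext block.
    digest = hashlib.sha256(b"".join(out)).digest()
    out.append(bytes(a ^ b for a, b in zip(key, digest)))
    return out  # these blocks would then be erasure-coded into slices

def aont_decode(blocks: list[bytes]) -> bytes:
    digest = hashlib.sha256(b"".join(blocks[:-1])).digest()
    key = bytes(a ^ b for a, b in zip(blocks[-1], digest))
    return b"".join(bytes(a ^ b for a, b in zip(c, prf(key, i)))
                    for i, c in enumerate(blocks[:-1]))

msg = b"dispersal test data, padded to taste"
assert aont_decode(aont_encode(msg)) == msg
```

Because the key is a fresh random value per encoding, re-encoding after a suspected compromise invalidates previously captured slices, which is the rekeying protocol Jason describes later in this thread.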



 They are correct, if you lose a slice, or two, or three that's fine, 
 but once you have the threshold number, then you have it all. This 
 means that you must still defend the site from attackers, protect your 
 media from loss, ensure your admins are trusted. As such, you have 
 accomplished nothing to make the management of the data easier.

Is there any data storage system which does not require some protection 
against attackers, resilience to media failure, and trusted administrators?  
Even in a system where one encrypts the data and focuses all energy on keeping 
the key safe, the encrypted copies must still be protected for availability 
and reliability reasons.

The security provided by this approach is icing on the cake on top of the 
other benefits of dispersal.  Dispersal provides extremely high fault 
tolerance and reliability without the large storage requirements of making 
copies.  See the paper "Erasure Coding vs. Replication: A Quantitative 
Comparison" by the creators of OceanStore for a primer on some of the 
advantages: 
http://www.cs.rice.edu/Conferences/IPTPS02/170.pdf
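The storage argument in that paper reduces to simple arithmetic; the k and n values below are illustrative, not Cleversafe's actual parameters:

```python
# Back-of-envelope: storage overhead of k-of-n erasure coding versus
# plain replication, for the same tolerance of (n - k) lost disks.
def erasure_overhead(k: int, n: int) -> float:
    # Each of the n slices is 1/k of the file size, so total = n/k.
    return n / k

def replication_overhead(failures_tolerated: int) -> int:
    # Surviving f losses with full copies needs f + 1 copies.
    return failures_tolerated + 1

k, n = 10, 16  # a 10-of-16 dispersal, tolerating 6 lost slices
print(erasure_overhead(k, n))        # 1.6x storage
print(replication_overhead(n - k))   # 7x storage for the same tolerance
```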


 Assume your threshold is 5. You lost 5 disks... Whose information was 
 lost? Anyone? Do you know?

If a particular vault (our term for a logical grouping of data to which 
access controls may be applied) had data stored on a threshold number of 
compromised drives, then data in that vault would be considered compromised.  
Our system tracks which vaults have data on which machines through a global 
set of configuration information we call the Registry.

 What if the 5 drives were lost over 5 
 years, what then?

When drives or machines are known to be lost or compromised, one may perform a 
read and overwrite of the peer-slices.  This makes obsolete any slices 
attackers may have accumulated up to that point: because the AONT uses a fresh 
random transformation key, newly generated slices cannot be combined with old 
ones to re-create the data.  This protocol therefore protects against slow 
accumulation of a threshold number of slices over time.

 CleverSafe can not provide any security guarantees 
 unless these questions can be answered. Without answers, CleverSafe is 
 neither Clever nor Safe.

 Jim



Please let me know if you have any additional questions regarding our 
technology.

Best Regards,

Jason Resch



Re: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Zooko Wilcox-O'Hearn
This conversation has bifurcated, since I replied and removed tahoe-dev 
from the Cc: line, sending just to the cryptography list, and 
David-Sarah Hopwood has replied and removed cryptography, leaving 
just the tahoe-dev list.


Here is the root of the thread on the cryptography mailing list archive:

http://www.mail-archive.com/cryptography@metzdowd.com/msg10680.html

Here it is on the tahoe-dev mailing list archive.  Note that  
threading is screwed up in our mailing list archive.  :-(


http://allmydata.org/pipermail/tahoe-dev/2009-August/subject.html#start

Regards,

Zooko



Re: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-11 Thread Zooko Wilcox-O'Hearn

On Monday, 2009-08-10, at 13:47, Zooko Wilcox-O'Hearn wrote:


This conversation has bifurcated,


Oh, and while I don't mind if people want to talk about this on the 
tahoe-dev list, it doesn't have that much to do with Tahoe-LAFS 
anymore, now that we're done comparing Tahoe-LAFS to Cleversafe and 
are just arguing about the cryptographic design of Cleversafe.  ;-) 
So, it seems quite topical for the cryptography list and only 
tangentially topical for the tahoe-dev list.  I've also been enjoying 
the subthread about the physical limits of computation that has 
spawned off on the cryptography mailing list.  Ooh, were you guys 
considering only classical computers and not quantum computers when 
you estimated that either 2^128, 2^200 or 2^400 was the physical 
limit of possible computation?  :-)
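For context on where such limits come from, here is a back-of-envelope calculation assuming Landauer's principle; the specific 2^128/2^200/2^400 figures quoted above may derive from different assumptions, and the temperatures and astronomical constants below are my own rough choices:

```python
# Landauer's principle: erasing one bit at temperature T costs at
# least k*T*ln(2) joules. A rough lower bound on the energy to count
# through a keyspace, assuming one bit erasure per candidate key.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 2.7             # background temperature of space, K
bit = k_B * T * math.log(2)  # minimum energy per erasure, ~2.6e-23 J

e128 = bit * 2**128  # ~9e15 J: enormous, but not physically absurd
e256 = bit * 2**256  # ~3e54 J
sun_lifetime = 3.8e26 * 3e17  # solar luminosity (W) * ~10 Gyr (s)

print(f"2^128 erasures: {e128:.1e} J")
print(f"2^256 erasures: {e256:.1e} J, "
      f"{e256 / sun_lifetime:.1e} solar lifetimes of output")
```

The qualitative point survives the rough inputs: 2^128 operations sit near the edge of the physically conceivable, while 2^256 is far beyond it on any classical account, which is why the quantum-computing caveat matters.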


Regards,

Zooko




Re: Client Certificate UI for Chrome?

2009-08-11 Thread James A. Donald

--
 James A. Donald jam...@echeque.com writes:
 For password-authenticated key agreement such as
 TLS-SRP or TLS-PSK to work, login has to be in the
 chrome.

Peter Gutmann wrote:
 Sure, but that's a relatively tractable UI problem

Indeed.  You know how to solve it, and I know how to
solve it, yet the solution is not out there.

As you say, shared secrets should be entered via a form
that implements password-authenticated key agreement such
as TLS-SRP or TLS-PSK, that cannot easily be spoofed, and
that is clearly associated with the browser and with a
particular url and web page (you suggest that the form
should roll out of the browser bar with an eye-catching
motion and land on top of the web page), and an encrypted
connection should be established from that shared
knowledge, a connection which cannot be established
without that shared knowledge.

This, however, requires both client UI software, and an
API for server-side scripts such as PHP, Perl, or Python
(the P in LAMP).  On the server side, we need a request
object in the script language that tells the script that
this request comes from an entity that established a
secure connection using shared secrets associated with
such and such a database record, entered in response to
such and such a web page; an object to which the script
generating a page can attach data that persists for
the duration of the session - an object that has session
scope rather than page scope, scope longer and broader
than that of the thread of execution that generates the
page, but shorter and narrower than that of the database
record containing the shared secrets; a script-accessible
object that can only be associated with one server, one
server-side process, and one server-side thread at a
time.  This is non-trivial to implement in an environment
where servers are massively multithreaded, and often
massively multiprocess.
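A hypothetical sketch of the session-scope object described above; every name here is invented for illustration, and it deliberately ignores the multiprocess case, which is exactly where the real difficulty lies:

```python
# Hypothetical session-scope object: per-session state keyed by the
# identity authenticated at the TLS layer (e.g. an SRP username),
# safe under heavy multithreading. In a multiprocess server this
# in-memory dict would need to be shared storage instead -- which is
# the non-trivial part the text points out.
import threading

class SessionStore:
    def __init__(self):
        self._lock = threading.Lock()
        self._sessions: dict[str, dict] = {}

    def for_identity(self, srp_identity: str) -> dict:
        """Return the session dict for an authenticated identity,
        creating it on first use. Outlives any single page request."""
        with self._lock:
            return self._sessions.setdefault(srp_identity, {})

store = SessionStore()
sess = store.for_identity("alice")  # identity set by the TLS handshake
sess["cart"] = ["book"]
assert store.for_identity("alice")["cart"] == ["book"]
assert store.for_identity("bob") == {}
```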

 Certificates on the other hand are an apparently
 intractable business, commercial, user education,
 programming, social, and technical problem.  I'd much
 rather try and solve the former than the latter.

What makes certificates such a problem is that there is
someone in the middle issuing the certificate - usually
someone who does not know or trust either of the
entities trying to establish a trust relationship.

While certificates frequently make cryptography
unnecessarily painful and complicated, certificate
issuance offers the opportunity to make money out of
providing encryption by being that someone in the
middle - hence the remarkable enthusiasm for this
technology, and the stubborn efforts to apply it to
cases where its value is limited and it is far from
being the most convenient, practical, and
straightforward solution.



Re: Client Certificate UI for Chrome?

2009-08-11 Thread Frank Siebenlist

[Moderator's note: top posting considered harmful:
 http://www.mail-archive.com/cryptography@metzdowd.com/msg09287.html
   --Perry]

Just to complicate things a little... we're working with a number of 
groups now who are using online CAs that issue short-lived X.509 certs 
derived from a primary authN mechanism like passwords or OTP.

It would be great to bake that functionality into Chrome: use TLS-SRP/ 
PSK to authN to an online CA to obtain your short-lived cert in real 
time.


-Frank.


On Aug 6, 2009, at 5:49 AM, Peter Gutmann wrote:


Ben Laurie b...@google.com writes:

So, I've heard many complaints over the years about how the UI for
client certificates sucks. Now's your chance to fix that problem -
we're in the process of thinking about new client cert UI for Chrome,
and welcome any input you might have. Obviously fully-baked proposals
are more likely to get attention than vague suggestions.

This is predicated on the assumption that it's possible to make
certificates usable for general users.  All the empirical evidence we
have to date seems to point to this not being the case.  Wouldn't it be
better to say "What can we do to replace certificates with something
that works?", for example TLS-SRP or TLS-PSK?

Peter.



---
Frank Siebenlist - fra...@mcs.anl.gov
The Globus Alliance | Argonne National Laboratory | University of Chicago




Entropy USB key

2009-08-11 Thread Alex Pankratov
Just spotted this on one of the tech news aggregators - 

http://www.entropykey.co.uk
 
"The Entropy Key, or eKey, is a small, unobtrusive and easily 
installed USB stick that generates high-quality random numbers, 
or entropy, which can improve the performance, security and 
reliability of servers."

Alex




Re: Client Certificate UI for Chrome?

2009-08-11 Thread Peter Gutmann
James A. Donald jam...@echeque.com writes:

This, however, requires both client UI software, and an API for server-side
scripts such as PHP, Perl, or Python (the P in LAMP).  On the server side, we
need a request object in the script language that tells the script that this
request comes from an entity that established a secure connection using
shared secrets associated with such and such a database record, entered in
response to such and such a web page; an object to which the script
generating a page can attach data that persists for the duration of the
session - an object that has session scope rather than page scope, scope
longer and broader than that of the thread of execution that generates the
page, but shorter and narrower than that of the database record containing
the shared secrets; a script-accessible object that can only be associated
with one server, one server-side process, and one server-side thread at a
time.  This is non-trivial to implement in an environment where servers are
massively multithreaded, and often massively multiprocess.

Ah, that is a good point: you now need the credential information present at
the TLS level rather than the tunneled-protocol level (a variation of this,
although I think one that way overcomplicates things because it starts
diverting into protocol redesign, is the channel-binding problem; see RFC 5056
and, specific to TLS, draft-altman-tls-channel-bindings-05.txt).  On the
other hand, is this really such an insurmountable obstacle?  For client-cert
usage you already need to perform a lookup based on a given cert (well, unless
you blindly trust anyone displaying a cert coming from a particular CA or set
of CAs, which I know some sites do), so now all you'd be doing is looking up a
shared-secret value instead of a cert based on a client ID.  I don't really
see why you'd need complex scripting interfaces though, just return the
shared-secret value associated with this ID in response to a request from the
TLS layer.  The only problem I can see is if you have an auth system built
around "is this authenticator valid?" rather than "return the authenticator
for this ID".
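The lookup described here can be made concrete for TLS-SRP: per RFC 2945, the server stores a salted verifier rather than the password, and the "request from the TLS layer" is a table lookup of that verifier.  A sketch with toy group parameters of my own choosing (real deployments must use the standardized RFC 5054 groups), and with `make_verifier`/`srp_lookup` as invented names:

```python
# Server-side SRP credentials, sketched per RFC 2945: the server
# stores only (salt, verifier), never the password. N and g are TOY
# parameters for illustration; use the RFC 5054 groups in practice.
import hashlib
import os

N, g = 23, 5  # TOY group -- far too small for real use

def make_verifier(username: str, password: str):
    salt = os.urandom(16)
    # x = H(salt | H(username ":" password)), per RFC 2945
    inner = hashlib.sha1(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha1(salt + inner).digest(), "big")
    return salt, pow(g, x, N)  # store (salt, v) in the database

# The "request from the TLS layer" is then just a lookup by client ID:
db = {"alice": make_verifier("alice", "correct horse")}

def srp_lookup(username: str):
    return db.get(username)  # (salt, verifier) or None

assert srp_lookup("alice") is not None
assert srp_lookup("mallory") is None
```

This is the "return the authenticator for this ID" shape: the handshake itself consumes the verifier, so no "is this authenticator valid?" API is needed.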

Peter.
