RE: Client Certificate UI for Chrome?

2009-08-12 Thread Thomas Hardjono



 
 From: James A. Donald [jam...@echeque.com]
 Sent: Sunday, August 09, 2009 1:21 AM
 To: Thomas Hardjono
 Cc: Ben Laurie; Cryptography
 Subject: Re: Client Certificate UI for Chrome?

 Thomas Hardjono wrote:
   In this UI discussion, I think it's less relevant
   whether trust is hierarchical or flat/p2p.

 The hard part is always certificate management, which
 has to be launched off existing trust and connections.

 So the question is, do we have certificate management
 that assumes that everyone has boundless trust in
 Verisign, and no trust in existing connections and
 existing shared knowledge, or do we have certificate
 management designed to make use of any existing
 connections, trust, and shared knowledge, wherever it is
 to be found?

[bottom-posted]

Agree. Unfortunately (or fortunately) some browsers have
shipped with root CA certs for a number of years
now, which does force the end-user to trust the CA.
This has been great for sales of SSL certs for VeriSign
and other CAs, but there is still that fundamental problem
of educating the average user (Mom/Dad) about equating
trust with certs (or root CA certs).

I'm not sure if the Chrome folks would be prepared
to ship their browser without any CA certs loaded,
but that would be an interesting and perhaps even revolutionary idea.
Assuming the Chrome UI had a nice and easy way for users
to download and install certs (trust anchors), this
approach could level the playing field and allow
other modes of trust to be played out.
Both LinkedIn and Facebook could in fact be CAs
that issue certs based on the number of verified
connections that a person has.

/thomas/

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-12 Thread James A. Donald

Thomas Hardjono wrote:
 I'm not sure if the Chrome folks would be prepared to
 ship their browser without any CA certs loaded,

Excessive distrust is inconvenient, excessive trust is
vulnerable.  It is better to remedy flaws by expanding
functionality rather than restricting it.

On the one hand, something like Verisign is very useful
to signify that an entity that calls itself a bank is in
fact regarded as a bank by governments and other major
banks, on the other hand, it is pretty useless for
designating membership of a group to other members of
the group, which is the major function of client side
certificates.

The number of globally important entities is necessarily
small, therefore a global namespace of globally unique
human memorable names, (such as Bank Of America) works
well for them.   The number of entities that have or
need keys is quite large, so Zooko's triangle
applies: globally unique human memorable names work
very badly for the vast majority of keyholders.
Therefore a business whose job is enforcing global
uniqueness of human memorable names (such as Verisign)
is going to be a pain to deal with, for it is trying to
do something that really cannot be done; in practice it
will merely make things sufficiently difficult for
clients that scammers do not bother.

Even for banks, globally unique names are problematic.
A remarkably large number of banks are called something
National Bank, or First National Bank of something.



Ultimate limits to computation

2009-08-12 Thread Hal Finney
[Note subject line change]
Jerry Leichter writes:

 Since people do keep bringing up Moore's Law in an attempt to justify  
 larger keys to make our systems cryptographically stronger, it's worth
 keeping in mind that we are approaching fairly deep physical limits.   
 I wrote about this on this list quite a while back.  If current  
 physical theories are even approximately correct, there are limits to  
 how many bit flips (which would encompass all possible binary  
 operations) can occur in a fixed volume of space-time.  You can turn  
 this into a limit based solely on time through the finite speed of  
 light:  A computation that starts at some point and runs for n years  
 can't involve a volume of space more than n light years in radius.   
 (This is grossly optimistic - if you want the results to come back to  
 the point where you entered the problem, the limit is n/2 light years,  
 which has 1/8 the spatial volume).  I made a very approximate guess at
 how many bit-flips you could get in a time-space volume of a 100 light- 
 year sphere; the answer came out somewhere between 2^128 and 2^256,  
 though much closer to the former.  So physical limits prevent you from  
 doing a brute force scan - in fact, you can't even enumerate all  
 possible keys - in 100 years for key lengths somewhere not much more  
 than 128 bits.

Things may not be quite as favorable as this. Here is a posting I made
to cypherpunks in 2004:

To: cypherpu...@al-qaeda.net
Date: Wed,  4 Aug 2004 11:04:15 -0700 (PDT)
From: h...@finney.org (Hal Finney)
Subject: Re: On what the NSA does with its tech

MV writes:
 Yes.  They can't break a 128 bit key.  That's obvious.  (if all the
 atoms in the universe were computers... goes the argument).

Not necessarily, if nanotechnology works.  128 bits is big but not
that big.

Eric Drexler, in Nanosystems, section 12.9, predicts that a nanotech
based CPU fitting in a 400 nm cube could run at 1000 MIPS and consume
60 nanowatts, performing 10^16 instructions per second per watt.

Let's design a system to break a 128 bit cipher.  Let's suppose it has
to do 2^10 instructions per test, so this is 2^138 instructions total,
or about 10^41.  Let's let it run for four months, which is 10^7 seconds,
so our necessary processing rate is 10^34 instructions per second.

This means we need 10^34 IPS / 1000 MIPS or 10^25 of Drexler's gigahertz
cubes, call it 10^25 cubic microns or 10^7 cubic meters, a cube about
220 meters on a side.

The system will consume 10^25 * 60 nanowatts or about 6 * 10^17 watts.
Now, that's a lot.  It's four times what the earth receives from the sun.
So we have to build a disk four times the area (not volume) of the earth,
collect that power and funnel it to our computers.  Probably we would
scatter the computers throughout the disk, which would be mostly composed
of solar collectors.  (Keeping the disk gravitationally stable is left
as an exercise for the student, as is the tradeoff involved in making
it smaller but moving it closer to the sun.)
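A quick Python check of the arithmetic above, using the post's rounded figures (the Drexler CPU numbers and the 2^10-instructions-per-test figure are Hal's; the ~1.7e17 W solar-intercept constant for Earth is my own rough figure, not from the post):

```python
# Reproducing the back-of-the-envelope numbers above.  Figures from the
# post: Drexler's 400 nm, 1000 MIPS, 60 nW CPU; 2^10 instructions per
# key test; a four-month (~10^7 s) run; 2^138 instructions rounded to ~1e41.
instructions = 1e41                      # ~2^138 total, as rounded in the post
seconds = 1e7                            # about four months
rate = instructions / seconds            # required instructions per second
cubes = rate / 1e9                       # each cube delivers 10^9 instr/s
volume_m3 = cubes * 1e-18                # ~1 cubic micron of packing per cube
side_m = volume_m3 ** (1 / 3)            # edge of an equivalent cube
power_w = cubes * 60e-9                  # 60 nW per processor
sun_on_earth_w = 1.7e17                  # rough solar power intercepted by Earth
print(f"{cubes:.0e} processors, a cube ~{side_m:.0f} m on a side,")
print(f"{power_w:.0e} W, about {power_w / sun_on_earth_w:.1f}x Earth's sunlight")
```

This reproduces the ~10^25 processors, the roughly 220 m cube, and the several-times-Earth's-sunlight power budget.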

Fortunately, exhaustive key search is perfectly parallelizable so there
is no need for complex communications or synchronizations between the
processors.

As you can see, breaking 128 bit keys is certainly not a task which is
so impossible that it would fail even if every atom were a computer.
If we really needed to do it, it's not outside the realm of possibility
that it could be accomplished within 50 years, using nanotech and robotics
to move and reassemble asteroids into the necessary disk.

Now, 256 bit keys really are impossible, unless the whole contraption
above can be made to operate as an enormous, unified quantum computer,
in which case it could theoretically break even 256 bit keys.

512 bit keys... now those really are impossible.

Hal Finney



Re: brute force physics Was: cleversafe...

2009-08-12 Thread Jerry Leichter

On Aug 10, 2009, at 4:42 AM, Alexander Klimov wrote:


On Sun, 9 Aug 2009, Jerry Leichter wrote:

Since people do keep bringing up Moore's Law in an attempt to justify
larger keys to make our systems cryptographically stronger, it's worth
keeping in mind that we are approaching fairly deep physical limits.
I wrote about this on this list quite a while back.  If current
physical theories are even approximately correct, there are limits to
how many bit flips (which would encompass all possible binary
operations) can occur in a fixed volume of space-time.


A problem with this reasoning is that the physical world and the usual
digital computers have an exponential simulation gap (it is known at
least in one direction: to simulate N entangled particles on a digital
computer one needs computations exponential in N). This can be
considered as a reason to suspect that the physical world is
non-polynomially faster than the usual computers (probably even to an
extent that all NP computations can be simulated).
When the first results about exponential speedup of factoring came
out, people assumed that this was true in general.  But it isn't.  In
particular, simple search - where you have only an equality test, so you
can't build a hash table or some kind of ordered structure - is O(N)
on a traditional computer, and O(sqrt(N)) on a quantum computer.  I'm
not sure what the current knowledge is about what a quantum machine can
do for NP computations, but there's no "probably" here.
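A sketch of the arithmetic behind that quadratic gap (assuming Grover's ~(pi/4)*sqrt(N) query count, which is known to be optimal for black-box search):

```python
import math

# Work factors for brute-force key search over N = 2^128 keys:
# classical exhaustive search averages N/2 trials, while Grover's
# algorithm needs about (pi/4) * sqrt(N) quantum oracle queries.
bits = 128
N = 2 ** bits
classical_trials = N // 2
grover_queries = (math.pi / 4) * math.sqrt(N)

print(f"classical: ~2^{math.log2(classical_trials):.0f} trials")
print(f"Grover:    ~2^{math.log2(grover_queries):.1f} queries")
```

So a 128-bit keyspace still costs on the order of 2^64 quantum operations - a polynomial (here quadratic) speedup, not an exponential one.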



While it is possible to use the physical world to simulate usual
computers in the straightforward way (namely by using voltage levels
of a circuit to represent separate bits), it is not clear that doing
computations in this way is the best way to do computations; for
example, if the meaning of our computations is simulation of the
physical world, then it can be better to use a direct
physical-to-physical mapping instead of physical-to-usual followed by
usual-to-physical: analog computers, such as wind tunnels, are still
in use.

I am not aware of any plausible argument why a brute force search in
general (a quintessence of NP class, by the way) or a key search
against any particular algorithm cannot be implemented in a direct way
significantly faster than in the indirect way, that is NP-to-physical
instead of NP-to-usual followed by usual-to-physical. All the fuss
about quantum computing is exactly because people believe that a
different mapping (not thru usual computers) can be more efficient (if
I understand correctly, right now neither the class of algorithms that
can be sped up this way is understood, nor do quantum computers of
practical capacity exist).
The physical arguments to which I was referring say *nothing* about  
how the computation is done.  It can be a QM computation as well.


In any case, the simple search result above applies directly to brute  
force:  For that problem, you only get a polynomial speedup anyway.



All the protocols and standards out there calling for AES-256 - "it's
obviously better than AES-128 because after all 256 is *twice as
large* as 128!" - were just a bunch of nonsense.  And, perhaps,
dangerous nonsense.


I see the situation in a positive way: the recent AES attacks
stress the fact that key management should be done
correctly; in particular, keys should be derived thru a KDF (not
a simple xor) and must be authenticated. With this attack in
hand it is much easier for us now to say why one should not use
K to encrypt messages of one type and K+1 for another type, or
why it is a bad idea to encrypt a key in CTR mode and store the
result without a MAC. I doubt it is possible to find any
professionally designed protocol or standard that becomes weak
due to the recent discovery.
That's a ... bizarre point of view.  :-)  Should freedom from related- 
key attacks be part of the definition of a secure encryption  
algorithm?  We should decide that on some rational basis, not on  
whether, with care, we can avoid such attacks.  Clearly, a system that  
*is* secure against such attacks is more robust.  Do we know how to  
build such a thing?  What's the cost of doing so?  But to say it's an  
*advantage* to have a weakness seems like some kind of odd moral  
argument:  If you're hurt by this it's because you *deserve* to be.

-- Jerry



Re: Ultimate limits to computation

2009-08-12 Thread Jerry Leichter

On Aug 11, 2009, at 2:47 PM, Hal Finney wrote:


[Note subject line change]
Jerry Leichter writes:


Since people do keep bringing up Moore's Law in an attempt to justify
larger keys to make our systems cryptographically stronger, it's worth
keeping in mind that we are approaching fairly deep physical limits.
I wrote about this on this list quite a while back.  If current
physical theories are even approximately correct, there are limits to
how many bit flips (which would encompass all possible binary
operations) can occur in a fixed volume of space-time


Things may not be quite as favorable as this. Here is a posting I made
to cypherpunks in 2004:

To: cypherpu...@al-qaeda.net
Date: Wed,  4 Aug 2004 11:04:15 -0700 (PDT)
From: h...@finney.org (Hal Finney)
Subject: Re: On what the NSA does with its tech

MV writes:

Yes.  They can't break a 128 bit key.  That's obvious.  (if all the
atoms in the universe were computers... goes the argument).


Not necessarily, if nanotechnology works.  128 bits is big but not
that big.

Eric Drexler, in Nanosystems, section 12.9, predicts that a nanotech
based CPU fitting in a 400 nm cube could run at 1000 MIPS and consume
60 nanowatts, performing 10^16 instructions per second per watt.

Let's design a system to break a 128 bit cipher.  Let's suppose it has
to do 2^10 instructions per test, so this is 2^138 instructions total,
or about 10^41.  Let's let it run for four months, which is 10^7 seconds,
so our necessary processing rate is 10^34 instructions per second.

It must be the summer weather or something.  I've received a whole  
bunch of messages - mainly privately - that say either "Here's another
result that has a higher upper bound on computation" or "Here's a
design for a machine that exceeds your bound."  Both ignore (a) how
bounds work:  The fact that you have a weaker bound doesn't mean I
don't have a stronger one; (b) that impossibility results can exist in  
physics, not just in mathematics.  True, the nature of such results  
are a bit different, since all our current physical theories might  
turn out to be wrong.  But, hey, maybe our understanding of  
computation or even mathematics has some fundamental flaw, too.


The estimate on the limits to brute-force search is mine, based on a
*very* rough estimate that draws on the results in the following  
paper:  http://www.sciencedirect.com/science?_ob=ArticleURL_udi=B6TVM-46X8Y6W-1_user=10_rdoc=1_fmt=_orig=search_sort=d_docanchor=view=c_searchStrId=976695769_rerunOrigin=google_acct=C50221_version=1_urlVersion=0_userid=10md5=d98e72ef5fe3301fead2dbc93c2885e4


(I haven't actually read the paper; my analysis was based on an  
article I can't find that discussed the implications of this one.)


The basic summary of the author's result is:  "[T]he total number of
bits and number of operations for the universe does not exceed
O(10^123)."  I guessed about how this value scales (as the cube of the
time - one factor for time, two for the size of the light sphere you  
can reach in that time; not 3 because the information content of space  
goes up as the area, *not* the volume - a very weird but by now  
standard result).


Now, my scaling technique may be completely flawed, or my computations  
may be wrong.  Or the paper could be wrong.  (I wouldn't bet on it.)   
Or the physics may be wrong.  (I *really* wouldn't bet on that.)  But  
the fact that there are other bounds around that are not as tight, or  
that one can describe a machine that would do better if there were a  
way to realize it, aren't evidence for any of these.  Bounds can be  
improved, and a description isn't a working machine.


In fact, the whole point of the article that I *did* read is that this
result should make us re-examine the whole notion of a possible
computation.  It's easy to describe a computation that would take more
than 10^123 steps.  Ackermann's function exceeds that for pretty small
input values.  We've traditionally said that a computation is
"possible" if we can describe it fully.  But if it couldn't be
realized by the entire universe - is that really a *useful* notion of
"possible"?

-- Jerry




Re: Client Certificate UI for Chrome?

2009-08-12 Thread James A. Donald

James A. Donald jam...@echeque.com writes:
  [In order to implement strong password based
  encryption and authentication] on the server side,
  we need a request object in the script language that
  tells the script that this request comes from an
  entity that established a secure connection using
  shared secrets associated with such and such a
  database record entered in response to such and such
  a web page

Peter Gutmann wrote:
 Ah, that is a good point, you now need the credential
 information present at the TLS level rather than the
 tunneled-protocol level (and a variation of this,
 although I think one that way overcomplicates things
 because it starts diverting into protocol redesign, is
 the channel binding problem (see RFC 5056 and,
 specific to TLS,
 draft-altman-tls-channel-bindings-05.txt)).  On the
 other hand is this really such an insurmountable
 obstacle?

Consider what would be involved in building the UI into
the Google browser, and also building the necessary
scripting support into Web2Py on Google App Engine.  It
is not a small job.

 I don't really see why you'd need complex scripting
 interfaces though, just return the shared-secret
 value associated with this ID in response to a
 request from the TLS layer.

This request is issued when the connection is being
established, before the URL is specified.  So it is
impossible to service that request from the script that
generates the web page.   So where are we servicing that
request?  Presumably, we need to service it somewhere
within the Web Application Framework, for example within
Mod PHP or Web2Py.

Further, some applications, for example banks and share
registries, typically have several different ID tables
at a single domain, and several different kinds of
shared human memorable secret information associated
with each ID.

And, having established that association, then when the
URL is specified, and the script associated with that URL is
finally invoked by the Web Application Framework, then
that script needs to be invoked with the relevant ID, or
better, the script then needs to be provided with a
database cursor pointing at the relevant ID.

Further, if we do the SRP dance every single page, it is
a huge performance hit, with many additional round
trips. One loses about 20 percent of one's market share
for each additional round trip.

So we only want to do the SRP dance on session
establishment, only want to do it once per human
session, once per logon, not once per TLS session.

Which means that the TLS layer has to cache the
transient strong shared secret constructed from the weak
durable human memorable secret for the duration of the
Web Application Framework's logon and logoff and provide
the cached database cursor to the web page script at
every page request during a single logon session, which
requires a higher level of integration between TLS and
the Web Application Framework than either one was
originally designed for.
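The once-per-logon caching described above can be sketched as follows (all names here - SessionCache, the tuple layout - are illustrative, not any framework's real API; a real integration would live inside the framework's session layer):

```python
import secrets
import time

# Sketch: do the expensive SRP exchange once per human logon, cache the
# resulting strong session secret, and hand every subsequent page
# request the cached record instead of redoing the dance per TLS
# connection.

class SessionCache:
    def __init__(self, ttl_seconds=3600):
        self._store = {}
        self._ttl = ttl_seconds

    def logon(self, user_id, srp_session_key):
        """Called once, after a successful SRP handshake."""
        token = secrets.token_hex(16)
        self._store[token] = (user_id, srp_session_key,
                              time.time() + self._ttl)
        return token                    # goes into the session cookie

    def lookup(self, token):
        """Called on every page request; no SRP round trips."""
        record = self._store.get(token)
        if record is None or record[2] < time.time():
            self._store.pop(token, None)
            return None
        return record[0], record[1]     # (user_id, session_key)

    def logoff(self, token):
        self._store.pop(token, None)

cache = SessionCache()
t = cache.logon("alice", b"derived-strong-secret")
assert cache.lookup(t) == ("alice", b"derived-strong-secret")
cache.logoff(t)
assert cache.lookup(t) is None
```

Per-page enforcement of logon *type* (the 403 problem below) would hang off the same cached record, keyed by which ID table the handshake ran against.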

Which means a significant amount of work integrating
this stuff with any given web application framework.

Further, suppose, as is typical with banks, a given
domain name hosts multiple ID tables each with different
kinds of shared secret information.  In that case, we
can obtain multiple different kinds of SRP logons, each
relevant to certain web pages but not others, each with
different privilege levels, and the framework has to
enforce that, has to provide to each web page
information about the logon type, and ensure that
inappropriate web pages are never invoked at all, but
are 403ed when the user attempts to access a url through
a logon of inappropriate type.

We cannot rely on the server side web page script to 403
itself in response to inappropriate logon type, since
when a new kind of logon was introduced, no one would
ever go back and make sure that all the old web pages
correctly checked the logon type.  If the web page
script contains a line of code that says "If such and
such, then do a 403," then sooner or later someone will
delete that code and say "Hey, it still works just fine."

This is starting to sound depressingly like a great deal
of work rewriting lots of complex, bug-ridden stuff in
web application frameworks that are already designed to
handle logons in a quite different way.




Re: brute force physics Was: cleversafe...

2009-08-12 Thread David Wagner
Alexander Klimov  wrote:
 A problem with this reasoning is that the physical world and the usual
 digital computers have exponential simulation gap (it is known at
 least in one direction: to simulate N entangled particles on a digital
 computer one needs computations exponential in N). This can be
 considered as a reason to suspect that physical world is
 non-polynomially faster than the usual computers (probably even to an
^^^
 extent that all NP computations can be simulated).
  

It is a commonly-held myth that quantum computers could solve
NP-complete problems, but most experts who have studied the issue
believe this is probably not the case.  There is no reason to think
that quantum computation could solve all NP problems in polynomial
time, and in fact, there are good reasons to believe this is
likely not the case.

(Do note that factoring is not NP-complete.)

See, e.g.

http://en.wikipedia.org/wiki/Quantum_computing#Relation_to_computational_complexity_theory

or for a more nuanced and deep treatment of these issues,

http://www.scottaaronson.com/papers/npcomplete.pdf



strong claims about encryption safety Re: [tahoe-dev] cleversafe says: 3 Reasons Why Encryption is Overrated

2009-08-12 Thread Zooko Wilcox-O'Hearn
[removing Cc: tahoe-dev as this subthread is not about Tahoe-LAFS.   
Of course, the subscribers to tahoe-dev would probably be interested  
in this subthread, but that just goes to show that they ought to  
subscribe to cryptogra...@metzdowd.com.]


On Monday,2009-08-10, at 11:56 , Jason Resch wrote:

I don't think there is any basis to the claims that Cleversafe  
makes that their erasure-coding (Information Dispersal)-based  
system is fundamentally safer, e.g. these claims from [3]: "a
malicious party cannot recreate data from a slice, or two, or
three, no matter what the advances in processing power. ...
Maybe encryption alone is 'good enough' in some cases now - but
Dispersal is 'good always' and represents the future."


It is fundamentally safer in that even if the transformation key  
were brute forced, the attacker only gains data from the slice,  
which in general will have 1/threshold the data.


Okay, so the Cleversafe method of erasure-coding ciphertext and  
storing the slices on different servers is safer in exactly the  
same way that encrypting and then giving an attacker only a part of  
the ciphertext is safer.  That is: having less ciphertext might  
hinder cryptanalysis a little, and also even if the attacker totally  
wins and is able to decrypt the ciphertext, at least he'll only get  
part of the plaintext that way.  On the other hand I might consider  
it scant comfort if I were told that "the good news is that the
attacker was able to read only the first 1/3 of each of your
files."  :-)


But the Cleversafe method of appending the masked key to the last  
slice makes it less safe, because having the masked key might help a  
cryptanalyst quite a lot.


In any case, the claims that are made on the Cleversafe web site are  
wrong and misleading: "a malicious party cannot recreate data from a
slice, or two, or three, no matter what the advances in processing
power" [1].  It is easy for customers to believe this claim, because
an honest party who is following the normal protocol is limited in  
this way and because information-theoretically-secure secret-sharing  
schemes have this property.  I kind of suspect that the Cleversafe  
folks got confused at some point by the similarities between their  
AONT+erasure-coding scheme and a secret-sharing scheme.
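The distinction matters because an AONT's guarantee is computational, where secret sharing's is information-theoretic.  A toy sketch of Rivest's package transform - the classic AONT construction, with SHA-256 as a stand-in PRF; this illustrates the general shape, not Cleversafe's actual algorithm - makes that concrete:

```python
import hashlib
import os

BLOCK = 32  # bytes

def prf(key, i):
    # Toy PRF: SHA-256 of key||counter.  A real AONT would use a block
    # cipher here; this is for illustration only.
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def package(blocks):
    """Rivest package transform (sketch): c_i = m_i XOR PRF(K, i), with
    a final block masking the random inner key K by a hash of all the
    c_i.  Missing *any* output block leaves K - and hence every m_i -
    unknown, but only computationally, not information-theoretically."""
    k = os.urandom(BLOCK)
    cs = [xor(m, prf(k, i)) for i, m in enumerate(blocks)]
    mask = hashlib.sha256(b"".join(cs)).digest()
    return cs + [xor(k, mask)]

def unpackage(cs):
    *body, last = cs
    k = xor(last, hashlib.sha256(b"".join(body)).digest())
    return [xor(c, prf(k, i)) for i, c in enumerate(body)]

msg = [os.urandom(BLOCK) for _ in range(4)]
assert unpackage(package(msg)) == msg
```

An attacker who can brute-force the inner key needs no "slices" at all, which is why the "no matter what the advances in processing power" claim fails for this kind of scheme.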


In any case, the statement quoted above is not true, and not only  
that isolated statement, but also the entire thrust of the
"encryption isn't safe but Cleversafe's algorithm is safer" argument
[2].  Just to pick out another of the numerous examples of misleading
and unjustified claims along these lines, here is another: "Given
that the level of security provided by the AONT can be set
arbitrarily high (there is no limit to the length of key it uses for
the transformation), information theoretic security is not necessary
as one can simply use a key so long that it could not be cracked
before the stars burn out." [3].


On the other hand Cleversafe's arguments about key management being  
hard and about there being a trade-off between confidentiality and  
availability are spot on [3].  Although I don't think that their
strategy for addressing the key management issues is the best
strategy, at least their description of the problem is correct.
Also, if you ignore the ill-justified claims about security on that  
page, their explanation of the benefits of their approach is  
correct.  (Sorry if this comes off as smug -- I'm trying to be fair.)


(I'm not even going to address their third point [4] -- at least not  
until we take this conversation to the law mailing list! :-))


Okay, I think I've made my opinion about these issues fairly clear  
now, so I'll try to refrain from following-up to this subthread --  
the "strong claims about encryption safety" subthread -- unless there
are some interesting new technical details that I haven't thought  
of.  By the way, when googling in an attempt to learn more
information about the Cleversafe algorithm, I happened to see that
Cleversafe is mentioned in this paper by Bellare and Rogaway: "Robust
Computational Secret Sharing and a Unified Account of Classical
Secret-Sharing Goals" [5].  I haven't read that paper yet, but given
the authors I would assume it is an excellent starting point for a  
modern study of the cryptographic issues.  :-)


I still do intend to follow up on the subthread which I call "So how
do *you* do key management, then?", which I consider to be the most
important issue for practical security of systems like these.


Regards,

Zooko, writing e-mail on his lunch break

[1] http://dev.cleversafe.org/weblog/?p=63
[2] http://dev.cleversafe.org/weblog/?p=95
[3] http://dev.cleversafe.org/weblog/?p=111
[4] http://dev.cleversafe.org/weblog/?p=178
[5] http://www.cs.ucdavis.edu/~rogaway/papers/rcss.html
