Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread James A. Donald

On 2013-10-15 10:35, d...@deadhat.com wrote:

http://eprint.iacr.org/2013/338.pdf


No kidding.



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-12 Thread James A. Donald

On 2013-10-11 15:48, ianG wrote:
Right now we've got a TCP startup, and a TLS startup.  It's pretty 
messy.  Adding another startup inside isn't likely to gain popularity.


The problem is that layering creates round trips, and as cpus get ever 
faster, and pipes ever fatter, round trips become a bigger and bigger 
problem.  Legend has it that each additional round trip decreases usage 
of your web site by twenty percent, though I am unaware of any evidence 
on this.





(Which was one thing that suggests a redesign of TLS -- to integrate 
back into IP layer and replace/augment TCP directly. Back in those 
days we -- they -- didn't know enough to do an integrated security 
protocol.  But these days we do, I'd suggest, or we know enough to 
give it a try.)


TCP provides eight bits of protocol negotiation, which results in 
multiple layers of protocol negotiation on top.


Ideally, we should extend the protocol negotiation and do crypto 
negotiation at the same time.


But, I would like to see some research on how evil round trips really are.

I notice that bank web pages take an unholy long time to come up, 
probably because one secure web page loads another, and that then loads a 
script, etc.




Re: [Cryptography] Elliptic curve question

2013-10-09 Thread James A. Donald

On 2013-10-08 03:14, Phillip Hallam-Baker wrote:


Are you planning to publish your signing key or your decryption key?

Use of a key for one makes the other incompatible.


Incorrect.  One's public key is always an elliptic point, one's private 
key is always a number.


Thus there is no reason in principle why one cannot use the same key (a 
number) for signing the messages you send, and decrypting the messages 
you receive.
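
A minimal sketch of that dual use in Python, assuming the third-party 
pyca/cryptography package (not in the standard library): one secret 
number serves as the private key both for ECDSA signing and for ECDH 
key agreement.  Whether such reuse is wise is exactly the question 
under debate; this only shows it is possible in principle.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    curve = ec.SECP256R1()
    # One private number; the corresponding public key is a curve point.
    scalar = ec.generate_private_key(curve).private_numbers().private_value

    key = ec.derive_private_key(scalar, curve)
    # The same number signs outgoing messages...
    signature = key.sign(b"a signed message", ec.ECDSA(hashes.SHA256()))
    # ...and runs ECDH to decrypt incoming ones.
    peer = ec.generate_private_key(curve)
    shared_secret = key.exchange(ec.ECDH(), peer.public_key())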




Re: [Cryptography] Iran and murder

2013-10-09 Thread James A. Donald

On 2013-10-08 02:03, John Kelsey wrote:

Alongside Phillip's comments, I'll just point out that assassination of key 
people is a tactic that the US and Israel probably don't have any particular 
advantages in.  It isn't in our interests to encourage a worldwide tacit 
acceptance of that stuff.


Israel is famous for its competence in that area.


And if the US is famously incompetent, that is probably lack of will,
rather than lack of ability.  Drones give the US technological supremacy in
the selective removal of key people.




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread James A. Donald

On 2013-10-07 01:18, Phillip Hallam-Baker wrote:

We are not at war with Iran.


We are not exactly at peace with Iran either, but that is irrelevant, 
for presumably it was a Jew that did it, and Iran is at war with Jews.

(And they are none too keen on Christians, Bahais, or Zoroastrians either.)


I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.


You may not be interested in war, but war is interested in you.   You 
can reasonably argue that we should not get involved in Israel's 
problems, but you should not complain about Israel getting involved in 
Israel's problems.



Iran used to have a democracy


Had a democracy where if you opposed Mohammad Mosaddegh you got murdered 
by Islamists.


Which, of course, differs only in degree from our democracy, where (to 
get back to some slight relevance to cryptography) Ladar Levison gets 
put out of business for defending the Fourth Amendment, and Pax gets put 
on a government blacklist that requires him to be fired and prohibits 
his business from being funded for tweeting disapproval of affirmative 
action for women in tech.


And similarly, if Hitler's Germany was supposedly not a democracy, why 
then was Roosevelt's America supposedly a democracy?


I oppose democracy because it typically results from, and leads to, 
government efforts to control the thoughts of the people.  There is not 
a large difference between our government requiring Pax to be fired, and 
Mohammad Mosaddegh murdering Haj-Ali Razmara.  Democracy also frequently 
results in large scale population replacement and ethnic cleansing, as 
for example Detroit and the Ivory Coast, as more expensive voters get 
laid off and cheaper voters get imported.


Mohammad Mosaddegh loved democracy because he was successful and 
effective in murdering his opponents, and the Shah was unwilling or 
unable to murder the Shah's opponents.


And our government loves democracy because it can blacklist Pax and 
destroy Levison.


If you want murder and blacklists, population replacement and ethnic 
cleansing, support democracy.  If you don't want murder and blacklists, 
you should have supported the Shah.



Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-06 Thread James A. Donald

On 2013-10-04 23:57, Phillip Hallam-Baker wrote:

Oh and it seems that someone has murdered the head of the IRG cyber
effort. I condemn it without qualification.


I endorse it without qualification.  The IRG are bad guys and need 
killing - all of them, every single one.


War is an honorable profession, and is in our nature.  The lion does no 
wrong to kill the deer, and the warrior does no wrong to fight in a just 
war, for we are still killer apes.


The problem with the NSA and NIST is not that they are doing warlike 
things, but that they are doing warlike things against their own people.



Re: [Cryptography] Sha3

2013-10-05 Thread James A. Donald

On 2013-10-05 16:40, james hughes wrote:

Instead of pontificating at length based on conjecture and conspiracy 
theories and smearing reputations based on nothing other than hot air

But there really is a conspiracy, which requires us to consider 
conjectures as serious risks, and people deserve to have their 
reputations smeared for the appearance of being in bed with that conspiracy.




Re: [Cryptography] encoding formats should not be committee'ised

2013-10-04 Thread James A. Donald

On 2013-10-04 09:33, Phillip Hallam-Baker wrote:
The design of WSDL and SOAP is entirely due to the need to impedance 
match COM to HTTP.


That is fairly horrifying, as COM was designed for a single-threaded 
environment, and becomes an incomprehensible and extraordinarily 
inefficient security hole in a multithreaded environment.




Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-03 Thread James A. Donald

On 2013-10-03 00:46, John Kelsey wrote:

a.  Most attacks come from protocol or mode failures, not so much crypto 
primitive failures.  That is, there's a reaction attack on the way CBC 
encryption and message padding play with your application, and it doesn't 
matter whether you're using AES or FEAL-8 for your block cipher.


The repeated failures of wifi were more a matter of crypto primitive 
failure, though the underlying crypto primitives were abused in ways 
that exposed subtle weaknesses.





Re: [Cryptography] encoding formats should not be committee'ized

2013-10-03 Thread James A. Donald

On 2013-10-02 23:09, Phillip Hallam-Baker wrote:


No, the reason for barring multiple inheritance is not that it is too 
clever, it is that studies have shown that code using multiple 
inheritance is much harder for other people to understand than code 
using single inheritance.


That is because of the class of problems for which it is appropriate to 
use multiple inheritance.




The original reason multiple inheritance was added to C++ was to support 
collections.


Was it?   And regardless of whether that was the reason, that is not 
what it is used for today.


So I can't see where C++ is helping. It is reducing, not improving my 
productivity.


C++ greatly improves my productivity, in particular the memory 
management classes std::unique_ptr and std::shared_ptr, though if using 
them means you have to use std::weak_ptr, then one has to pause and think.





Re: [Cryptography] TLS2

2013-10-02 Thread James A. Donald

On 2013-10-02 13:18, Tony Arcieri wrote:

LANGSEC calls this: full recognition before processing

http://www.cs.dartmouth.edu/~sergey/langsec/occupy/


I disagree slightly with langsec.

At compile time you want an extremely powerful language for describing 
data, one that can describe any possible data structure.


At run time, you want the least possible power, such that your 
recognizer can only recognize the specified and expected data structure.


Thus BER and DER are bad for the reasons given by langsec; indeed, they 
illustrate the evils that langsec condemns.  But these criticisms do not 
normally apply to PER, since for PER the dangerously great power exists 
only at compile time, and you would have to work pretty hard to retain 
any substantial part of that dangerously great power at run time.
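
A minimal sketch of that split in Python, with a hypothetical fixed 
message layout standing in for a compiled data description: the format 
is fixed before the program runs, and the run-time recognizer has only 
enough power to accept exactly that layout.

    import struct

    # Hypothetical layout, fixed at compile time: 4-byte magic,
    # 2-byte version, 32-byte key fingerprint.  Nothing else.
    MSG_FORMAT = ">4sH32s"
    MSG_LEN = struct.calcsize(MSG_FORMAT)
    MAGIC = b"EXMP"

    def recognize(buf: bytes):
        # Accept exactly the expected structure; fail on anything else.
        if len(buf) != MSG_LEN:
            raise ValueError("wrong length")
        magic, version, fingerprint = struct.unpack(MSG_FORMAT, buf)
        if magic != MAGIC or version != 1:
            raise ValueError("not the expected structure")
        return version, fingerprint

The recognizer cannot build arbitrary nested objects; that power lived 
in the description language and was spent before run time.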

Re: [Cryptography] NIST about to weaken SHA3?

2013-10-01 Thread James A. Donald

On 2013-10-01 08:51, Watson Ladd wrote:
On Mon, Sep 30, 2013 at 2:21 PM, James A. Donald jam...@echeque.com wrote:



Weaker in ways that the NSA has examined, and the people that
chose the winning design have not.

This isn't true: Keccak's designers proposed a wide range of capacity 
parameters for different environments.


This is not Keccak's design.

This is a new unexamined design somewhat resembling Keccak's design.

Or perhaps Keccak's design somewhat resembled what the NSA had already 
decided to do.



Re: [Cryptography] NIST about to weaken SHA3?

2013-10-01 Thread James A. Donald

On 2013-10-01 10:17, John Kelsey wrote:

Yeah, that plot to weaken sha3 is so secretive, we've been discussing it in 
public slide presentations and on public mailing lists for six months.


All big conspiracies get exposed - I would make a list, but that would 
derail the conversation.


It does not follow that there are no big powerful conspiracies.  On the 
contrary, we have compelling evidence of more big powerful conspiracies 
than one can shake a stick at.



Re: [Cryptography] Sha3

2013-10-01 Thread James A. Donald

On 2013-10-01 10:24, John Kelsey wrote:

If you want to understand what's going on wrt SHA3, you might want to look at 
the nist website


If you want to understand what is going on with SHA3, and you believe 
that NIST is frank, open, honest, and has no ulterior motives, you might 
want to look at the NIST website.



Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread James A. Donald

On 2013-10-01 18:06, Ben Laurie wrote:




On 1 October 2013 01:10, James A. Donald jam...@echeque.com wrote:


Further, google is unhappy that too-clever-code gives too-clever
programmers too much power, and has prohibited its employees from
ever doing something like protobufs again.


Wat? Sez who?


Protobufs is code generating code.  Not allowed by the google style guide.

Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread James A. Donald

On 2013-10-01 22:08, Salz, Rich wrote:

Further, google is unhappy that too-clever-code gives too-clever programmers 
too much power, and has prohibited its employees from ever doing something like 
protobufs again.

Got any documentation for this assertion?

The google style guide prohibits too-clever code.  Protobufs and gmock 
are too-clever code.




Re: [Cryptography] encoding formats should not be committee'ized

2013-10-01 Thread James A. Donald

On 2013-10-02 05:18, Jerry Leichter wrote:
To be blunt, you have no idea what you're talking about. I worked at 
Google until a short time ago; Ben Laurie still does. Both of us have 
written, submitted, and reviewed substantial amounts of code in the 
Google code base. Do you really want to continue to argue with us 
about how the Google Style Guide is actually understood within Google?


The google style guide, among other things, prohibits multiple direct 
inheritance and operator overloading, except where stl makes you do 
operator overloading.


Thus it certainly prohibits too-clever code.  The only debatable 
question is whether protobufs, and much of the rest of the old codebase, 
is too-clever code - and it is certainly a lot more clever than operator 
overloading.


Such prohibitions also would prohibit the standard template library, 
except that that is also grandfathered in, and they prohibit atl and wtl.


The style guide is designed for an average and typical programmer who is 
not as smart as the early google programmers.   If you prohibit anything 
like wtl, you prohibit the best.


Prohibiting programmers from using multiple inheritance is like the BBC 
prohibiting the word literally instead of mandating that it be used 
correctly.  It implies that the BBC does not trust its speakers to 
understand the correct use of literally, and google does not trust its 
programmers to understand the correct use of multiple direct inheritance.





Re: [Cryptography] TLS2

2013-10-01 Thread James A. Donald

On 2013-10-01 14:36, Bill Stewart wrote:
It's the data representations that map them into binary strings that are 
a wretched hive of scum and villainy, particularly because you can't 
depend on a bit string being able to map back into any well-defined 
ASN.1 object, or even any limited size of ASN.1 object that won't smash 
your stack or heap.  The industry's been bitten before by a widely 
available open source library that turned out to be vulnerable to 
maliciously crafted binary strings that could be passed around as SNMP 
traps or other ASN.1-using messages.

Similarly, PGP's most serious security bugs were related to 
variable-length binary representations that were trying to steal bits 
to maximize data compression at the risk of ambiguity.  Scrounging a 
few bits here and there just isn't worth it.



This is an inherent problem, not with ASN.1, but with any data 
representation that can represent arbitrary data.


The decoder should only be able to decode the data structure it expects, 
that its caller knows how to interpret, and intends to interpret.  
Anything else should fail immediately.  Thus our decoder should have 
been compiled from a data description, rather than being a general 
purpose decoder.


Thus sender and receiver should have to agree on the data structure for 
any communication to take place, which almost automatically gives us a 
highly compressed format.


Conversely, any highly compressed format will tend to require and assume 
a known data structure.


The problem is that we do not want, and should not have, the capacity to 
send a program an arbitrary data structure, for no one can write a 
program that can respond appropriately to an arbitrary data structure.



Re: [Cryptography] TLS2

2013-10-01 Thread James A. Donald

On 2013-10-01 14:36, Bill Stewart wrote:
It's the data representations that map them into binary strings that 
are a
wretched hive of scum and villainy, particularly because you can't 
depend on a

bit string being able to map back into any well-defined ASN.1 object
or even any limited size of ASN.1 object that won't smash your stack 
or heap.
The industry's been bitten before by a widely available open source 
library

that turned out to be vulnerable to maliciously crafted binary strings
that could be passed around as SNMP traps or other ASN.1-using messages.

Similarly, PGP's most serious security bugs were related to
variable-length binary representations that were trying to steal bits
to maximize data compression at the risk of ambiguity.
Scrounging a few bits here and there just isn't worth it.




BER and DER can express an arbitrary data structure - and thus can crash 
the program receiving the data, probably causing it to execute 
transmitted data as code.


The same, however, is true of every overly general line format.  Incoming 
data should be parsed as the expected, bounded-size data structure; thus 
we need something that can generate parsing code from a description of 
the data at compile time.  We need compile time descriptions of the 
data, not run time descriptions, because the program that uses the 
incoming data will unavoidably rely on a compile time description of the data.


PER, however, cannot receive unexpected data structures.

Thus all data should be transmitted as PER, or by a format with the 
properties of PER.





Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread James A. Donald

On 2013-09-30 14:34, Viktor Dukhovni wrote:

On Mon, Sep 30, 2013 at 05:12:06AM +0200, Christoph Anton Mitterer wrote:


Not sure whether this has been pointed out / discussed here already (but
I guess Perry will reject my mail in case it has):

https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3

I call FUD.  If progress is to be made, fight the right fights.

The SHA-3 specification was not weakened, the blog confuses the
effective security of the algorithm with the *capacity* of the
sponge construction.


SHA3 has been drastically weakened from the proposal that was submitted 
and cryptanalyzed:  See for example slides 43 and 44 of

https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE/edit





Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread James A. Donald

On 2013-10-01 00:44, Viktor Dukhovni wrote:

Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
might one admit the possibility that winning designs in contests
are at times quite conservative and that one can reasonably
standardize less conservative parameters that are more competitive
in software?


less conservative means weaker.

Weaker in ways that the NSA has examined, and the people that chose the 
winning design have not.


Why then hold a contest and invite outside scrutiny in the first place?

This is simply a brand new unexplained secret design emerging from the 
bowels of the NSA, which already gave us a variety of backdoored crypto.


The design process, the contest, the public examination, was a lie.

Therefore, the design is a lie.




Re: [Cryptography] TLS2

2013-09-30 Thread James A. Donald

On 2013-09-30 18:02, Adam Back wrote:

If we're going to do that I vote no ASN.1, and no X.509.  Just BNF format
like the base SSL protocol; 


Granted that ASN.1 is incomprehensible and horrid, but, since there is 
an ASN.1 compiler that generates C code we should not need to comprehend it.



base on PGP so you get web of trust,


PGP web of trust does not scale.




Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread James A. Donald

On 2013-10-01 08:24, John Kelsey wrote:

Maybe you should check your code first?  A couple nist people verified that the 
curves were generated by the described process when the questions about the 
curves first came out.


And a non-NIST person verified that the curves were /not/ generated by 
the described process after the scandal broke.


The process that actually generates the curves looks like the end result 
of trying a trillion curves, until you hit one that has desirable 
properties, which desirable properties you are disinclined to tell 
anyone else.




Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread James A. Donald

On 2013-10-01 08:35, John Kelsey wrote:

Having read the mail you linked to, it doesn't say the curves weren't generated 
according to the claimed procedure.  Instead, it repeats Dan Bernstein's 
comment that the seed looks random, and that this would have allowed NSA to 
generate lots of curves till they found a bad one.


The claimed procedure would have prevented the NSA from generating lots 
of curves till they found a bad one - one with weaknesses that the NSA 
knows how to detect, but which other people do not yet know how to detect.


That was the whole point of the claimed procedure.

As with SHA3, the NSA/NIST is deviating from its supposed procedures in 
ways that remove the security properties of those procedures.





Re: [Cryptography] encoding formats should not be committee'ized

2013-09-30 Thread James A. Donald

On 2013-10-01 04:22, Salz, Rich wrote:

designate some big player to do it, and follow suit?
Okay that data encoding scheme from Google protobufs or Facebook thrift.  Done.


We have a compiler to generate C code from ASN.1 code

Google has a compiler to generate C code from protobufs source

The ASN.1 compiler is open source.  Google's compiler is not.

Further, google is unhappy that too-clever-code gives too-clever 
programmers too much power, and has prohibited its employees from ever 
doing something like protobufs again.





Re: [Cryptography] RSA recommends against use of its own products.

2013-09-29 Thread James A. Donald

On 2013-09-27 09:54, Phillip Hallam-Baker wrote:


Quite, who on earth thought DER encoding was necessary or anything 
other than incredible stupidity?


I have yet to see an example of code in the wild that takes a binary 
data structure, strips it apart and then attempts to reassemble it to 
pass to another program to perform a signature check. Yet every time 
we go through a signature format development exercise the folk who 
demand canonicalization always seem to win.


DER is particularly evil as it requires either the data structures to 
be assembled in the reverse order or a very complex tracking of the 
sizes of the data objects or horribly inefficient code. But XML 
signature just ended up broken.


We have a compiler that generates C code from ASN.1 code.  Does it not 
generate code behind the scenes that does all this ugly stuff for us 
without us having to look at the code?


I have not actually used the compiler, and I have discovered that hand 
generating code to handle ASN.1 data structures is a very bad idea, but 
I am told that if I use the compiler, all will be rainbows and unicorns.


You go first.

Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread James A. Donald

On 2013-09-30 03:14, Lodewijk andré de la porte wrote:
2013/9/29 James A. Donald jam...@echeque.com


(..) fact, they are not provably random, selected (...)

fixed that for you

It seems obvious that blatant lying about qualities of procedures must 
have some malignant intention, yet ignorance is as good an 
explanation. I don't think lying the other way would solve anything. 
It's obviously not especially secure.



The NIST EC curves are provably non random, and one can prove that NIST 
is lying about them, which is circumstantial but compelling evidence 
that they are backdoored:


   From: Gregory Maxwell gmaxw...@gmail.com
   To: This mailing list is for all discussion about theory, design, and 
   development of Onion Routing. tor-t...@lists.torproject.org
   Subject: Re: [tor-talk] NIST approved crypto in Tor?
   Reply-To: tor-t...@lists.torproject.org

   On Sat, Sep 7, 2013 at 4:08 PM, anonymous coward
   anonymous.cow...@posteo.de wrote:

   Bruce Schneier recommends **not** to use ECC. It is safe to
   assume he knows what he says.

   I believe Schneier was being careless there. The ECC parameter
   sets commonly used on the internet (the NIST P-xxxr ones) were
   chosen using a published deterministically randomized procedure.
   I think the notion that these parameters could have been
   maliciously selected is a remarkable claim which demands
   remarkable evidence.

   On Sat, Sep 7, 2013 at 8:09 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

   Okay, I need to eat my words here.

   I went to review the deterministic procedure because I wanted to see
   if I could repoduce the SECP256k1 curve we use in Bitcoin. They
   don’t give a procedure for the Koblitz curves, but they have far
   less design freedom than the non-koblitz so I thought perhaps I’d
   stumble into it with the “most obvious” procedure.

   The deterministic procedure basically computes SHA1 on some seed and
   uses it to assign the parameters then checks the curve order, etc..
   wash rinse repeat.

   Then I looked at the random seed values for the P-xxxr curves. For
   example, P-256r’s seed is c49d360886e704936a6678e1139d26b7819f7e90.

   _No_ justification is given for that value. The stated purpose of
   the “veritably random” procedure “ensures that the parameters cannot
   be predetermined. The parameters are therefore extremely unlikely to
   be susceptible to future special-purpose attacks, and no trapdoors
   can have been placed in the parameters during their generation”.

   Considering the stated purpose I would have expected the seed to be
   some small value like … “6F” and for all smaller values to fail the
   test. Anything else would have suggested that they tested a large
   number of values, and thus the parameters could embody any
   undisclosed mathematical characteristic whose rareness is only
   bounded by how many times they could run sha1 and test.

   I now personally consider this to be smoking evidence that the
   parameters are cooked. Maybe they were only cooked in ways that make
   them stronger? Maybe

   SECG also makes a somewhat curious remark:

   “The elliptic curve domain parameters over (primes) supplied at each
   security level typically consist of examples of two different types
   of parameters — one type being parameters associated with a Koblitz
   curve and the other type being parameters chosen verifiably at
   random — although only verifiably random parameters are supplied at
   export strength and at extremely high strength.”

   The fact that only “verifiably random” are given for export strength
   would seem to make more sense if you cynically read “verifiably
   random” as backdoored to all heck. (though it could be more
   innocently explained that the performance improvements of Koblitz
   wasn’t so important there, and/or they considered those curves weak
   enough to not bother with the extra effort required to produce the
   Koblitz curves).
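
To make the point concrete, a minimal Python sketch of the search 
Maxwell describes, with the weakness test left as a hypothetical stub 
(is_weak stands in for whatever undisclosed property a malicious 
generator might test for):

    import hashlib

    def candidate_from_seed(seed: bytes) -> int:
        # X9.62-style: derive a curve parameter by hashing the seed.
        return int.from_bytes(hashlib.sha1(seed).digest(), "big")

    def is_weak(c: int) -> bool:
        # Hypothetical stub for an undisclosed special-purpose property.
        return False

    def search(limit: int = 10 ** 6):
        # Wash, rinse, repeat: try seeds until one yields a curve you
        # like, then publish only the winning seed as "verifiable
        # randomness".
        for counter in range(limit):
            seed = counter.to_bytes(20, "big")
            c = candidate_from_seed(seed)
            if is_weak(c):
                return seed, c
        return None

Nothing in the published procedure bounds how many seeds were tried 
before the one we were shown.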



Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread James A. Donald
Gregory Maxwell on the Tor-talk list has found that NIST approved 
curves, which is to say NSA approved curves, were not generated by the 
claimed procedure, which is a very strong indication that if you use 
NIST curves in your cryptography, NSA can read your encrypted data.


As computing power increases, NSA resistant RSA keys have become 
inconveniently large, so we have to move to EC keys.


NIST approved curves are unlikely to be NSA resistant.

Therefore, everyone should use Curve25519, which we have every reason to 
believe is unbreakable.
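
For illustration, a minimal X25519 key agreement in Python, assuming 
the third-party pyca/cryptography package:

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # No curve parameters to take on trust: Curve25519 is fully
    # specified, with every constant justified, in its design paper.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Both sides derive the same shared secret.
    assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())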



Re: [Cryptography] NIST about to weaken SHA3?

2013-09-29 Thread James A. Donald

On 2013-09-30 13:12, Christoph Anton Mitterer wrote:

https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3


This makes NIST seem somehow like liars

If one lie, all lies.




Re: [Cryptography] RSA recommends against use of its own products.

2013-09-29 Thread James A. Donald

On 2013-09-29 23:13, Jerry Leichter wrote:

BTW, the *idea* behind DER isn't inherently bad - but the way it ended up is 
another story.  For a comparison, look at the encodings Knuth came up with in 
the TeX world.  Both dvi and pk files are extremely compact binary 
representations - but correct encoders and decoders for them are plentiful.



DER is unintelligible and incomprehensible.  There is, however, an open 
source compiler for ASN.1


Does it not produce correct encoders and decoders for DER?  (I have 
never used it)



Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread James A. Donald

On 2013-09-28 01:23, Phillip Hallam-Baker wrote:


Most cryptolibraries have a hard coded limit at 4096 bits and there 
are diminishing returns to going above 2048. Going from 4096 to 8192 
bits only increases the work factor by a very small amount and they 
are really slow which means we end up with DoS considerations.


We really need to move to EC above RSA. Only it is going to be a 
little while before we work out which parts have been contaminated by 
NSA interference and which parts are safe from patent litigation. RIM 
looks set to collapse with or without the private equity move. The 
company will be bought with borrowed money and the buyers will use the 
remaining cash to pay themselves a dividend. Mitt Romney showed us how 
that works.


We might possibly get lucky and the patents get bought out by a white 
knight. But all the mobile platform providers are in patent disputes 
right now and I can't see it likely someone will plonk down $200 
million for a bunch of patents and then make the crown jewels open.



Problem with the NSA is that its Jekyll and Hyde. There is the good 
side trying to improve security and the dark side trying to break it. 
Which side did the push for EC come from?


In fact we do know this.

NSA/NIST claimed that their EC curves are provably random (therefore not 
backdoored)


In fact, they are provably non random, selected on an unrevealed basis, 
which contradiction is, under the circumstances, compelling evidence 
that the NIST curves are in fact backdoored.




Re: [Cryptography] Random number generation influenced, HW RNG

2013-09-11 Thread James A. Donald

On 2013-09-10 4:30 PM, ianG wrote:
The question of whether one could simulate a raw physical source is 
tantalising.  I see diverse opinions as to whether it is plausible, 
and thinking about it, I'm on the fence.


Let us consider that source of colored noise with which we are most 
familiar:  The human voice.  Efforts to realistically simulate a human 
voice have not been very successful.  The most successful approach has 
been the ransom note approach, merging together a lot of small clips of 
an actual human voice.


A software simulated raw physical noise source would have to run 
hundreds of thousands of times faster.



Re: [Cryptography] Usage models (was Re: In the face of cooperative end-points, PFS doesn't help)

2013-09-11 Thread James A. Donald

On 08/09/2013 21:51, Perry E. Metzger wrote:
 I wrote about this a couple of weeks ago, see:

 http://www.metzdowd.com/pipermail/cryptography/2013-August/016872.html

In short, https to a server that you /do/ trust.

Problem is, Joe Average is not going to set up his own server. Making 
setting up your own server user friendly is the same problem as making 
OTR user friendly, with knobs on.



[Cryptography] Squaring Zooko's triangle

2013-09-10 Thread James A. Donald

On 2013-09-10 3:12 AM, Peter Fairbrother wrote:
I like to look at it the other way round, retrieving the correct name 
for a key.


You don't give someone your name, you give them an 80-bit key 
fingerprint. It looks something like m-NN4H-JS7Y-OTRH-GIRN. The m- is 
common to all, it just says this is one of that sort of hash.


1.  And they run away screaming.

2.  It only takes 2^50 trials to come up with a valid fingerprint that 
agrees with your fingerprint except at four non-chosen places (rough 
estimate below).
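
A rough check of that figure, assuming the fingerprint is 16 base32 
characters of 5 bits each (80 bits total), so a forgery must match the 
60 bits outside the four mismatched positions:

    from math import comb, log2

    positions, bits_per_pos, mismatches = 16, 5, 4

    # Probability a random fingerprint agrees everywhere except at
    # exactly four positions the attacker does not get to choose:
    p = comb(positions, mismatches) * 2 ** (-bits_per_pos * (positions - mismatches))
    print(log2(1 / p))    # about 49.2, i.e. roughly 2^50 trials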



Re: [Cryptography] [cryptography] Random number generation influenced, HW RNG

2013-09-09 Thread James A. Donald

 would you care to explain the very strange design decision
 to whiten the numbers on chip, and not provide direct
 access to the raw unwhitened output.

On 2013-09-09 2:40 PM, David Johnston wrote:
 #1 So that that state remains secret from things trying to
 discern that state for purposes of predicting past or
 future outputs of the DRBG.

This assumes the DRBG is on chip, which it should not be.  It
should be in software.  Your argument is circular.  You are
arguing that the DRBG should be on chip because it is on
chip; that it has some of its menacing characteristics
because it has other menacing characteristics.

 #2 So that one thread cannot undermine a second thread by
 putting the DRNG into a broken mode. There is only one
 DRNG, not one per core or one per thread. Having one DRNG
 per thread would be one of the many preconditions necessary
 before this could be contemplated.

You repeat yourself.  Same circular argument repeated.

 #3 Any method of access is going have to be documented and
 supported and maintained as a constant interface across
 many generations of chip.

Why then throw in RDSEED?

You are already adding RDSEED to RDRAND, which fails to
address any of the complaints.  Why provide a DRNG in the
first place?

Answer:  It is a NIST design, not an Intel design.  Your design
documents reference NIST specifications. And we
already know that NIST designs are done with hostile intent.




Re: [Cryptography] Bruce Schneier has gotten seriously spooked

2013-09-08 Thread James A. Donald

On 2013-09-08 4:36 AM, Ray Dillinger wrote:


But are the standard ECC curves really secure?  Schneier sounds like 
he's got some innovative math in his next paper if he thinks he can 
show that they aren't.


Schneier cannot show that they are trapdoored, because he does not know 
where the magic numbers come from.


To know if they are trapdoored, one has to know where those magic numbers come from.



Re: [Cryptography] Techniques for malevolent crypto hardware

2013-09-08 Thread James A. Donald

On 2013-09-09 11:15 AM, Perry E. Metzger wrote:

Lenstra, Heninger and others have both shown mass breaks of keys based
on random number generator flaws in the field. Random number
generators have been the source of a huge number of breaks over time.

Perhaps you don't see the big worry, but real world experience says
it is something everyone else should worry about anyway.


Real world experience is that there is nothing to worry about /if you do 
it right/.  And that it is frequently not done right.


When you screw up AES or such, your test vectors fail, your unit test 
fails, so you fix it, whereas if you screw up entropy, everything 
appears to work fine.


It is hard, perhaps impossible, to have a test suite that makes sure that 
your entropy collection works.


One can, however, have a test suite that ascertains that on any two runs 
of the program, most items collected for entropy are different except 
for those that are expected to be the same, and that on any run, any 
item collected for entropy does make a difference.


Does your unit test check your entropy collection?
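
A minimal sketch of such a test in Python's unittest, assuming a 
hypothetical collect_entropy_items() that returns the labeled items the 
collector feeds into the pool:

    import unittest

    from mypool import collect_entropy_items   # hypothetical collector

    class EntropyCollectionTest(unittest.TestCase):
        def test_most_items_differ_between_runs(self):
            run1 = dict(collect_entropy_items())
            run2 = dict(collect_entropy_items())
            differing = [k for k in run1 if run1[k] != run2.get(k)]
            # A collector whose output repeats between runs is
            # collecting labels, not entropy.
            self.assertGreater(len(differing), len(run1) // 2)

    if __name__ == "__main__":
        unittest.main()

Checking the second property, that every collected item actually 
perturbs the pool output, requires hooking the pool itself, which is 
exactly the part that usually goes untested.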


Re: [Cryptography] Impossible trapdoor systems (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread James A. Donald

On 2013-09-09 4:49 AM, Perry E. Metzger wrote:

Your magic key must then take any block of N bits and magically
produce the corresponding plaintext when any given ciphertext
might correspond to many, many different plaintexts depending
on the key. That's clearly not something you can do.


Suppose that the mappings from 2^N plaintexts to 2^N ciphertexts are not 
random, but rather orderly, so that given one element of the map, one 
can predict all the other elements of the map.


Suppose, for example the effect of encryption was to map a 128 bit block 
to a group, map the key to the group, add the key to the block, and map 
back.  To someone who knows the group and the mapping, merely a heavily 
obfuscated 128 bit Caesar cipher.


No magic key.
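
A toy sketch of that construction in Python, with the group mapping 
played by a secret affine bijection on 128 bit blocks (toy values, 
nothing like a real cipher).  To anyone who knows the mapping, one 
known plaintext/ciphertext pair breaks every message:

    M = 2 ** 128

    # The secret mapping into the group; a must be odd to be invertible.
    a, b = 0x2545F4914F6CDD1D, 0x0123456789ABCDEF
    a_inv = pow(a, -1, M)

    def phi(x): return (a * x + b) % M
    def phi_inv(y): return (a_inv * (y - b)) % M

    def encrypt(m, k):
        # Map block and key into the group, add, map back:
        # a heavily obfuscated 128 bit Caesar cipher.
        return phi_inv((phi(m) + phi(k)) % M)

    k, m = 0xDEADBEEF, 42
    shift = (phi(encrypt(m, k)) - phi(m)) % M   # recovered from one pair

    def crack(c):
        return phi_inv((phi(c) - shift) % M)

    assert crack(encrypt(123456, k)) == 123456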




Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on BULLRUN)

2013-09-08 Thread James A. Donald

On 2013-09-09 6:08 AM, John Kelsey wrote:

a.  Things that just barely work, like standards groups, must in general be 
easier to sabotage in subtle ways than things that click along with great 
efficiency.  But they are also things that often fail with no help at all from 
anyone, so it's hard to tell.

b.  There really are tradeoffs between security and almost everything else.  If 
you start suspecting conspiracy every time someone is reluctant to make that 
tradeoff in the direction you prefer, you are going to spend your career 
suspecting everyone everywhere of being anti-security.  This is likely to be 
about as productive as going around suspecting everyone of being a secret 
communist or racist or something.

Poor analogy.

Everyone is a racist, and most people lie about it.

Everyone is a communist in the sense of being unduly influenced by 
Marxist ideas, and those few of us that know it have to make a conscious 
effort to see the world straight, to recollect that some of our supposed 
knowledge of the world has been contaminated by widespread falsehood.


The Climategate files revealed that official science /is/ in large part 
a big conspiracy against the truth.


And Snowden's files seem to indicate that all relevant groups are 
infiltrated by people hostile to security.




Re: [Cryptography] Opening Discussion: Speculation on BULLRUN

2013-09-06 Thread James A. Donald

On 2013-09-06 12:31 PM, Jerry Leichter wrote:

Another interesting goal:  Shape worldwide commercial cryptography marketplace to make it more tractable to advanced 
cryptanalytic capabilities being developed by NSA/CSS.  Elsewhere, enabling access and exploiting systems 
of interest and inserting vulnerabilities.  These are all side-channel attacks.  I see no other reference to 
cryptanalysis, so I would take this statement at face value:  NSA has techniques for doing cryptanalysis on certain 
algorithms/protocols out there, but not all, and they would like to steer public cryptography into whatever areas they have 
attacks against.  This makes any NSA recommendation *extremely* suspect.  As far as I can see, the big push NSA is making these 
days is toward ECC with some particular curves.


The mathematics of ECC is such that one would expect that curves with 
backdoors that are difficult to find, or impossible to find except 
through construction, exist.


Therefore, one should never employ a particular curve recommended by 
NSA, but rather a random or arbitrary curve.



Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread James A. Donald

On 2013-09-01 9:11 PM, Jerry Leichter wrote:

Meanwhile, on the authentication side, Stuxnet provided evidence that the 
secret community *does* have capabilities (to conduct a collision attacks) 
beyond those known to the public - capabilities sufficient to produce fake 
Windows updates.


Do we know they produced fake Windows updates without assistance from 
Microsoft?






Re: [Cryptography] NSA and cryptanalysis

2013-08-31 Thread James A. Donald

On 2013-09-01 4:02 AM, Ray Dillinger wrote:

On 08/30/2013 08:10 PM, Aaron Zauner wrote:

I read that WP report too. IMHO this can only be related to RSA 
(factorization, side-channel attacks).


I have been hearing rumors lately that factoring may not in fact be as 
hard as we have heretofore supposed.  Algorithmic advances keep eating 
into RSA keys, as fast as hardware advances do.


So far, not much effect on elliptic keys.

Except that all elliptic keys of the extremely useful gap-diffie-hellman 
group are potentially subject to techniques analogous to those that are 
attacking RSA.





Re: [Cryptography] Thoughts about keys

2013-08-31 Thread James A. Donald

On 2013-09-01 11:16 AM, Jeremy Stanley wrote:
At free software conferences, where there is heavy community 
penetration for OpenPGP already, it is common for many of us to bring 
business cards (or even just slips of paper) with our name, E-mail 
address and 160-bit key fingerprint. Useful not only for key signing 
(when accompanied by photo identification), but also simply allows 
someone to retrieve your key from a public keyserver and confirm the 
fingerprint matches the one you handed them. 

The average user is disturbed by the sight of a 160 bit hash.

When posting graphic images on my blog, I have to name the image twice, 
once when I store it on my website, and once when I reference it in a 
post.   Despite the fact that the names are meaningful and human 
readable, and the total number of images is not unreasonably large, I 
find it quite difficult to enter exactly the same name the same way 
twice.  Much of the time the image mysteriously fails to appear, even 
though I cannot see any typo, the two spellings right in front of me 
look exactly alike.


The end user's instinctive fear of 160 bit hashes is well founded.




Re: [Cryptography] Petnames Zooko's triangle -- theory v. practice (was Email and IM are...)

2013-08-29 Thread James A. Donald

On 2013-08-28 7:33 PM, ianG wrote:

On 28/08/13 02:44 AM, radi...@gmail.com wrote:
Zooko's triangle, pet names...we have cracked the THEORY of secure 
naming, just not the big obstacle of key exchange.



Perhaps in a sense of that, I can confirm that we may have an elegant 
theory but practice still eludes us.  I'm working with a design that 
was based on pure petnames and ZT, and it does not deliver as yet.


One part of the problem is that there are too many things demanding 
names, which leads to addressbook explosion.  I have many payment 
vehicles, many instruments, and in a fuller system, many identities. 
Each demanding at least one petname.


And so do my many counterparties.  A second part of the problem is 
that petnames are those I give myself to some thing, but in some 
definitional sense, I never export my petnames (which is from which 
they derive their security).  Meanwhile, the owner of another thing 
also has a name for it which she prefers to communicate about, so it 
transpires that there is a clash between her petname and my petname.  
To resolve this I am exploring the use of nicknames, which are 
owner-distributed names, in contrast to petnames which are private names.


Which of course challenges the user even more as she now has two 
namespaces of subtle distinction to manage.  Kerckhoffs rolls six 
times in his grave.


Then rises the notion of secured nicknames, as, if Alice can label her 
favourite payment receptacle Alice's shop then so can Mallory.  Doh! 
Introduction can resolve that in theory, but in practice we're right 
back to the world of identity trickery and phishing.  So we need a way 
to securely accept nicknames, deal with clashes, and then preserve 
that security context for the time when someone wishes to pay the real 
Alice.  Otherwise we're back to that pre-security world known as 
secure browsing.


Then, map the privacy question over the above mesh, and we're in a 
traffic analyst's wetdream.  One minor advantage here is that, 
presswise, we only need to do a little better than Bitcoin, which is 
no high barrier ;)


In sum, I think ZT has inspired us.  It asks wonderfully elegant 
questions, and provides a model to think about the issues. Petnames 
and related things like capabilities answer a portion of those 
questions, but many remain.  Implementation challenges!




Because email addresses and urls are already for the most part non human 
memorable, we already have implementations of Zooko's triangle which 
seem to work fine for the ordinary end user.


The old petname tool (which has now probably succumbed to bitrot) used 
the browser's bookmark list to store public key data, thus was an 
implementation of Zooko's triangle, that piggy backed on the browser's 
implementation of Zooko's triangle for non human memorable urls.  It 
worked fine for me.


My petnames are still on the browser bar, providing easy access to my 
bank and stuff, though no longer providing security.



[Cryptography] Communicating public keys: A functional specification

2013-08-29 Thread James A. Donald

Communicating public keys:  A functional specification

A functional specification tells us how the user uses it, what he sees, 
and what it does for him.  It does not tell us how we manage to do it 
for him.


The problem is that you want to tell someone over the phone, or on a 
napkin, or face to face, information that will enable his client to 
securely obtain your public key and network location so that end to end 
secured communication can take place.


Also a chatroom's public key and network location.

We do not necessarily protect against security agencies figuring out 
which public key is talking to which public key.  That issue is out of 
the scope of a functional specification, but we somewhat reduce the 
usefulness of this information by allowing people to have lots of public 
keys.  So you probably have one key for activities that show your 
unusual sexual preference, another key for job related activities, 
another key for tax evasion related activities, another key for gun 
running, and yet another for attempts to overthrow the regime.


Face to face:

   Identifying information is nym, face, and location.

   Recipient looks up the nym, sees a bunch of faces grouped by
   geographic area.  Geographic area usually, but not necessarily, has
   some relation to the user's actual location, and may be very specific, or
   very broad.  It is a tree.  One guy may locate his face at the node
   North America, another at the node New York, North America.  You
   may, of course, employ a well known cartoon character or movie star
   as your avatar instead of your actual face.   Fictional places are
   permitted, but to avoid filling the namespace, not on the tree that
   represents the real planet earth.



Over the phone.

   Recipient looks up phone number.  Finds a bunch of named keys
   associated with the phone number - usually one key or a quite small
   number of named keys.


Web or email.

   Send a link that contains a 256 bit identifier, but the UI should
   not show anyone the identifier.

The ordinary user by default finds himself using at least one key for 
face to face key introductions, a different key or keys for phone 
introductions, and yet more for web or email introductions. If he is 
clever and reads the manual, which no one will ever do, he can use the 
same key for multiple purposes.


All of these named keys have the same behavior when you click on them, 
they are intended to be perceived by the user as being the same sort of 
thing.


He can use the link, the named key, to attempt to contact, or buddy it, 
or bookmark it.


The identifying link information looks like a web link, and is the 
nickname of the public key.  By default the nickname is the petname.  
The user is free to edit this, but usually does not.


When he attempts to contact, this automatically buddies it and/or 
bookmarks it.


When he finds a named key, he may bookmark it, together with one of 
his own private keys - it goes into a datastructure that looks like, and 
works like, browser bookmarks.  He can also put it in his buddy list.


When you look at an item in your buddy list or bookmarks list, you see a 
pair: the other guy's key identifying information, and your own key 
identifying information.  You don't see the keys themselves, since they 
look like line noise and will terrify the average user.


When you click on one of these bookmarks, this creates a connection if 
your key is on the other guy's buddy list and he is online.  You can 
chat, video, whatever, end to end secured.  Otherwise, if you are not on 
his buddy list, or he is not presently online, you can send him 
something that is very like an email, but end to end secure.


When you send a bunch of people a text communication, chat like, 
chatroom like, or email like, they are cc or bcc.  If cc, all recipients 
of the communication get links, which they can, if they feel so 
inclined, message, bookmark or buddy.


Text communication software vacuums up and stores all links, so if you 
get an incoming communication from someone whose public key you have not 
buddied or bookmarked, the software will tell you any past contacts you 
may have had with this public key.


Buddied public keys are white listed for immediate online communication, 
Bookmarked and buddied public keys are white listed for offline text 
communication, public keys with past information about contacts are grey 
listed, public keys with no previous contact information are blacklisted.


Because of automatic blacklisting, to contact, you have to /exchange/ keys.
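
The listing rules above amount to a small decision function; a sketch 
in Python (names hypothetical):

    def classify(key, buddies, bookmarks, past_contacts):
        # How an incoming public key gets treated.
        if key in buddies:
            return "white: online and offline text communication"
        if key in bookmarks:
            return "white: offline text communication only"
        if key in past_contacts:
            return "grey: show past contacts, ask the user"
        return "black: no previous contact, drop"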


Re: [Cryptography] Implementations, attacks on DHTs, Mix Nets?

2013-08-25 Thread James A. Donald

On 2013-08-26 11:04 AM, Christian Huitema wrote:

Of course the data can be signed, encrypted, etc. But the rule of the game
is that the adversary can manufacture as many peers as they want --
something known as the Sybil attack. They can then perform various forms of
denial.


We need, and have not designed, a good distributed reputation system, 
resting on Zooko's triangle and a large global hash tree that provides 
an unfalsifiable past history of the past conduct of key holders.


Such a global hash tree requires, like bitcoin, a solution to the 
Byzantine Generals Problem - a known hard problem that is nonetheless 
soluble.


A distributed reputation system can also provide things like debt based 
money that provides an incentive for seeding - for providing storage of 
interesting content as well as an incentive for upload bandwidth of 
interesting content.  Bittorrent provides an upload bandwidth incentive, 
but no storage incentive.



Re: [Cryptography] What is the state of patents on elliptic curve cryptography?

2013-08-23 Thread James A. Donald

On 2013-08-21 3:38 AM, Perry E. Metzger wrote:

What is the current state of patents on elliptic curve cryptosystems?
(It would also be useful to know when such patents as exist end.)

Perry


Such a question will be answered not with light but with darkness.


Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-02 Thread James A. Donald

On 2010-10-01 3:23 PM, Chris Palmer wrote:

In my quantitative, non-hand-waving, repeated experience with many clients in
many business sectors using a wide array of web application technology
stacks, almost all web apps suffer a network and disk I/O bloat factor of 5,
10, 20, ...


Which does not, however, make bloated RSA keys any the less evil.

All the evils you describe get worse under https.

A badly designed https page is likely to require the client to perform 
lots and lots and lots of RSA operations in order to respond to the user 
click.


A 2048 bit operation takes around 0.01 seconds, which is insignificant. 
 But an https connection takes several such operations.  Lots of https 
connections 
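
Rough arithmetic under those assumptions (the per-operation figure is 
from above; the other two numbers are assumed for illustration):

    sec_per_op = 0.01      # 2048 bit RSA private key operation
    ops_per_conn = 4       # assumed handshake cost, several such operations
    conns_per_page = 20    # assumed for a badly designed https page

    print(sec_per_op * ops_per_conn * conns_per_page)   # 0.8 s per click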




Re: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-09-29 Thread James A. Donald

On 2010-09-28 1:58 PM, Thai Duong wrote:

On Sat, Sep 18, 2010 at 8:43 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz  wrote:

I'm one of the authors of the attack. Actually if you look closer, you'll see
that they do it wrong in many ways.


The FormsAuth as well, not just the view state?  Interesting, I thought they
had that one right, at least.


We promised Microsoft not to release anything before they have a
working patch. Now they have it, so we release the slide we presented
at EKOPARTY. Check it out.

http://netifera.com/research/poet//PaddingOraclesEverywhereEkoparty2010.pdf


Now I understand why one should, counterintuitively, encrypt then MAC.
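
A minimal encrypt-then-MAC sketch in Python, assuming the third-party 
pyca/cryptography package for AES-CTR: the HMAC-SHA256 tag covers nonce 
plus ciphertext and is checked before any decryption, so a decryption 
oracle never sees attacker-controlled input.

    import hashlib, hmac, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(enc_key, mac_key, plaintext):
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag

    def open_sealed(enc_key, mac_key, blob):
        nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
        expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("bad MAC")   # reject before touching the cipher
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
        return dec.update(ct) + dec.finalize()

With MAC-then-encrypt, the decryptor must process forged ciphertexts 
before it can reject them, and its error behavior becomes the padding 
oracle.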

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part III

2010-09-16 Thread James A. Donald

On 2010-09-16 6:12 AM, Andy Steingruebl wrote:

The malware could just as easily fake the whole UI.  Is it really
PKI's fault that it doesn't defend against malware?  Did even the
grandest supporters ever claim it could/did?


That is rather like having a fortress with one wall rather than four 
walls, and when attackers go around the back, you quite correctly point 
out that the wall is only designed to stop attackers from coming in front.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Hashing algorithm needed

2010-09-09 Thread James A. Donald

On 2010-09-09 6:35 AM, Ben Laurie wrote:

What I do in Nigori for this is use DSA. Your private key, x, is the
hash of the login info. The server has g^x, from which it cannot
recover x,


Except, of course, by dictionary attack, hence g^x, being low
entropy, is treated as a shared secret.

and the client does DSA using x.
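
A toy sketch of why a low-entropy g^x must be treated as a shared
secret: anyone who obtains g^x can grind a password dictionary offline
(the group parameters here are toy values for illustration, not a real
DSA group):

import hashlib

p = 2**127 - 1        # toy prime modulus, for illustration only
g = 3

def private_key(password: str) -> int:
    # x is the hash of the login info, as in the scheme described above
    return int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")

server_value = pow(g, private_key("hunter2"), p)   # the stored g^x

# Offline dictionary attack: test guesses against the stolen g^x.
for guess in ["letmein", "password", "hunter2", "123456"]:
    if pow(g, private_key(guess), p) == server_value:
        print("recovered password:", guess)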

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: towards https everywhere and strict transport security

2010-08-25 Thread James A. Donald

On 2010-08-25 11:04 PM, Richard Salz wrote:

Also, note that HSTS is presently specific to HTTP. One could imagine
expressing a more generic STS policy for an entire site


A really knowledgeable net-head told me the other day that the problem
with SSL/TLS is that it has too many round-trips.  In fact, the RTT costs
are now more prohibitive than the crypto costs.  I was quite surprised to
hear this; he was stunned to find it out.



This is inherent in the layering approach - inherent in our current 
crypto architecture.


To avoid inordinate round trips, crypto has to be compiled into the 
application, has to be a source code library and application level 
protocol, rather than layers.


Every time you layer one communication protocol on top of another, you 
get another round trip.


When you layer application protocol on ssl on tcp on ip, you get round 
trips to set up tcp, and *then* round trips to set up ssl, *then* round 
trips to set up the application protocol.
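
The extra round trips are easy to see for oneself.  A small sketch
using only the Python standard library, comparing bare TCP connection
setup with TCP plus a full TLS handshake (the host name is illustrative):

import socket, ssl, time

host, n = "www.example.com", 5

def tcp_only():
    socket.create_connection((host, 443)).close()

def tcp_plus_tls():
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass    # the full TLS handshake completes inside wrap_socket

for name, fn in (("TCP", tcp_only), ("TCP+TLS", tcp_plus_tls)):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{name}: {(time.perf_counter() - start) / n * 1000:.1f} ms average")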


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Has there been a change in US banking regulations recently?

2010-08-17 Thread James A. Donald

On 2010-08-15 7:59 AM, Thor Lancelot Simon wrote:

Indeed.  The way forward would seem to be ECC, but show me a load balancer
or even a dedicated SSL offload device which supports ECC.


For sufficiently strong security, ECC beats factoring, but how strong is 
sufficiently strong?  Do you have any data?  At what point is ECC faster 
for the same security?


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: 2048-bit RSA keys

2010-08-17 Thread James A. Donald

On 2010-08-17 3:46 PM, Jonathan Katz wrote:

Many on the list may already know this, but I haven't seen it mentioned
on this thread. The following paper (that will be presented at Crypto
tomorrow!) is most relevant to this discussion:
Factorization of a 768-bit RSA modulus,
http://eprint.iacr.org/2010/006


Which tells us that ordinary hobbyist and academic efforts will not be 
able to factor a 1024 bit RSA modulus by brute force until around 2015 
or so - but dedicated hardware and so forth might be able to do it now.


What is the latest news on cracking ECC by brute force?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-08-06 Thread James A. Donald

On 2010-08-05 11:30 AM, David-Sarah Hopwood wrote:
 Signatures are largely a distraction from the real problem: that software
 is (unnecessarily) run with the full privileges of the invoking user.
 By all means authenticate software, but that's not going to prevent 
malware.


A lot of devices are locked down so that you cannot install bad 
software.  This is somewhat successful in preventing bad software from 
being installed, and highly successful in irritating users.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: A mighty fortress is our PKI, Part II

2010-07-29 Thread James A. Donald

On 2010-07-29 12:18 AM, Peter Gutmann wrote:

This does away with the need for a CA,
because the link itself authenticates the cert that's used.

Then there are other variations, cryptographically generated addresses, ...
all sorts of things have been proposed.

The killer, again, is the refusal of any browser vendor to adopt any of it.


Bittorrent links have this property.  A typical bittorrent link looks 
like 
magnet:?xt=urn:btih:2ac7956f6d81bf4bf48b642058d31912479d8d8e&dn=South+Park+S14E06+201+HDTV+XviD-FQM+%5Beztv%5D&tr=http%3A%2F%2Fdenis.stalker.h3q.com%3A6969%2Fannounce


It is the equivalent of an immutable file in Tahoe.



In the case of FF someone actually wrote the code for them, and it was
rejected.  Without support from browser vendors, it doesn't matter what cool
ideas people come up with, it's never going to get any better.


The browser vendors are married to the CAs.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: 1280-Bit RSA

2010-07-12 Thread James A. Donald

On 2010-07-11 10:11 AM, Brandon Enright wrote:
 On Fri, 9 Jul 2010 21:16:30 -0400 (EDT) Jonathan
 Thornburgjth...@astro.indiana.edu  wrote:

 The following usenet posting from 1993 provides an
 interesting bit (no pun itended) of history on RSA key
 sizes.  The key passage is the last paragraph, asserting
 that 1024-bit keys should be ok (safe from key-factoring
 attacks) for a few decades.  We're currently just under
 1.75 decades on from that message.  I think the take-home
 lesson is that forecasting progress in factoring is hard,
 so it's useful to add a safety margin...

 This is quite interesting.  The post doesn't say but I
 suspect at the factoring effort was based on using
 Quadratic Sieve rather than GNFS. The difference in speed
 for QS versus GNFS starts to really diverge with larger
 composites.  Here's another table:

 RSA     GNFS     QS
 ====================
 256     43.68    43.73
 384     52.58    55.62
 512     59.84    65.86
 664     67.17    76.64
 768     71.62    83.40
 1024    81.22    98.48
 1280    89.46    111.96
 1536    96.76    124.28
 2048    109.41   146.44
 3072    129.86   184.29
 4096    146.49   216.76
 8192    195.14   319.63
 16384   258.83   469.80
 32768   342.05   688.62

The numbers in the second column of this table are the
equivalent strength of symmetrical encryption, that is to
say, against attackers armed with the GNFS, a 3072 bit RSA
key is as tough as a 128 bit symmetric key.

 Clearly starting at key sizes of 1024 and greater GNFS
 starts to really improve over QS.  If the 1993 estimate for
 RSA 1024 was assuming QS then that was roughly equivalent
 to RSA 1536 today.  Even improving the GNFS constant from
 1.8 to 1.6 cuts off the equivalent of about 256 bits from
 the modulus.

 The only certainty in factoring techniques is that they
 won't get worse than what we have today.

Progress in cracking elliptic curves, however, does not seem
to be happening, probably because elliptic curves are truly
irregular.

 How do elliptic curves compare to RSA today?

According to
http://paper.ijcsns.org/07_book/200909/20090902.pdf

RSA     ECC     Sym
1024    160     80
2048    224     112
3072    256     128
4096    280     140

That is to say, a 3072 bit RSA key is as tough as an ECC key
based on a 256 bit field, which is as tough as a 128 bit
symmetric key.

ECC cryptosystems on a 256 bit field are practical today.  3072 bit RSA 
systems are not.

It looks to me that Moore's law plus GNFS has decisively
tipped the balance in favor of elliptic curves - and if one
has patent worries, good elliptic curve algorithms were
published more than fifteen years ago.
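
As an aside, the GNFS column in the table quoted above can be
reproduced, to within rounding, from the standard L-notation heuristic
with constant c = 1.8.  A minimal sketch (the function name is mine;
the usual asymptotic constant (64/9)^(1/3) is about 1.92):

import math

def gnfs_equivalent_bits(rsa_bits: int, c: float = 1.8) -> float:
    # Heuristic GNFS cost L_n[1/3, c] = exp(c (ln n)^(1/3) (ln ln n)^(2/3)),
    # expressed as an equivalent symmetric key length in bits.
    ln_n = rsa_bits * math.log(2)                  # ln(2^rsa_bits)
    work = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)                      # nats -> bits

for bits in (256, 1024, 3072, 4096):
    print(bits, round(gnfs_equivalent_bits(bits), 2))
# 1024 -> ~81.2 and 3072 -> ~129.9, agreeing with the quoted table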

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: The EC patent issues discussion

2010-04-22 Thread James A. Donald

On 2010-04-22 9:17 AM, Paul Hoffman wrote:

At 9:40 PM -0400 4/20/10, Victor Duchovni wrote:
   

EC definitely has practical merit. Unfortunately the patent issues around
protocols using EC public keys are murky.
 

This is starting to turn around. More vendors are questioning the murk. Please 
see http://tools.ietf.org/html/draft-mcgrew-fundamental-ecc.
   


To summarize that document:  All the EC stuff that you need was 
published more than fifteen years ago, therefore you cannot be 
violating patents if you stick to that stuff.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Question regarding common modulus on elliptic curve cryptosystems

2010-03-25 Thread James A. Donald

On 2010-03-22 11:22 PM, Sergio Lerner wrote:
Commutativity is a beautiful and powerful property. See "On the Power 
of Commutativity in Cryptography" by Adi Shamir.
Semantic security is great and has given a new provable sense of 
security, but commutative building blocks can be combined to build the 
strangest protocols without going into deep mathematics, are better 
suited for teaching crypto and for high-level protocol design. They 
are like the Lego blocks of cryptography!


Now I'm working on a new untraceable e-cash protocol which has some 
additional properties. And I'm searching for a secure commutative 
signing primitive.


The most powerful primitives, from which all manner of weird and 
wonderful protocols can be concocted, are gap Diffie-Hellman groups.  
Read Alexandra Boldyreva, Threshold Signatures, Multisignatures and 
Blind Signatures Based on the Gap-Diffie-Hellman-Group Signature Scheme.


I am not sure what you want to do with commutativity, but suppose that 
you want a coin that needs to be signed by two parties in either order 
to be valid.


Suppose we call the operation that combines two points on an elliptic 
curve to generate a third point "multiplication", so that we use the 
familiar notation of exponentiation, thereby describing elliptic point 
crypto systems in the same notation as prime number crypto systems (a 
notation I think confusing, but everyone else uses it).


Suppose everyone uses the same Gap Diffie Helman group, and the same 
generator g.


A valid unblinded coin is the pair {u, u^(b*c)}, yielding a valid DDH 
tuple {g, g^(b*c), u, u^(b*c)}, where u has some special format (not a 
random number).


Repeating in slightly different words: a valid unblinded coin is a coin 
that, with the joint public key of Bob and Carol, yields a valid DDH 
tuple, in which the third element of the tuple has some special form.


Edward wants Bob and Carol to give him a blinded coin.  He already knows 
some other valid coin, {w, w^(b*c)}.  He generates a point u that 
satisfies the special properties for a valid coin, and a random number 
x.  He asks Bob and Carol to sign u*(w^(-x)), giving him a blinded coin, 
which he unblinds.
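
A toy demonstration of that blinding algebra, written multiplicatively
over a small prime-order subgroup (toy parameters; a real scheme needs
a gap Diffie-Hellman group with a pairing, so that third parties can
verify the DDH tuple without knowing b or c):

import random

p, q, g = 23, 11, 4        # toy subgroup of order q = 11 in Z_23*
b, c = 3, 7                # Bob's and Carol's signing exponents

def sign(m: int) -> int:   # Bob then Carol; the order does not matter
    return pow(pow(m, b, p), c, p)

w = pow(g, 5, p)           # an element standing in for a known valid coin
w_sig = sign(w)            # its signature w^(b*c)

u = pow(g, 9, p)           # the new coin value Edward wants signed
x = random.randrange(1, q)

blinded = (u * pow(w, q - x, p)) % p        # u * w^(-x): hides u from the signers
blinded_sig = sign(blinded)                 # (u * w^(-x))^(b*c)
u_sig = (blinded_sig * pow(w_sig, x, p)) % p    # unblind: multiply by w^(x*b*c)

assert u_sig == sign(u)    # Edward now holds the valid coin {u, u^(b*c)}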


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Question regarding common modulus on elliptic curve cryptosystems AND E-CASH

2010-03-25 Thread James A. Donald

On 2010-03-23 1:09 AM, Sergio Lerner wrote:
I've read some papers, not that much. But I don't mind reinventing the 
wheel, as long as the new protocol is simpler to explain.

Reading the literature, I couldn't  find a e-cash protocol which :

- Hides the destination / source of payments.
- Hides the amount of money transferred.
- Hides the account balance of each person from the bank.
- Allows off-line payments.
- Avoids giving the same bill to two different people by design. 
This means that the protocol does not need to detect the use of cloned 
bills.
- Gives each person a cryptographic proof of owning the money they 
have in case of dispute.


If someone points me to a protocol that manages to fulfill these 
requirements, I'd be delighted.
I think I can do it with a commutative signing primitive, and a 
special zero-knowledge proof.


Gap Diffie-Hellman gives you a commutative signing primitive, and a 
zero-knowledge proof.




-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Security of Mac Keychain, Filevault

2009-11-08 Thread James A. Donald

Jerry Leichter wrote:
 NFC?

Near Field Communications - the wireless equivalent of
whispering in someone's ear.  Ideally, an NFC chip should
only be able to talk to something that is an inch or so
away, and it should be impossible to eavesdrop from more
than a foot or so away.

Lots of people plan that smart phones shall do financial
transactions through NFC.

http://www.intomobile.com/2009/04/10/visa-launches-nfc-service-in-Malaysia.html
: : Malaysians can now use their Nokia (NYSE:
: : NOK) 6212 to make near-field Visa payments -
: : just wave your phone in front of a sensor and
: : bam, instant buy in over 1,800 shops.

These transactions are reversible and made through
authorized retailers, hence, like the widely shared
secret on a credit card, really need very little
security.  Anyone to anyone irreversible transactions
would need considerably higher security, but there
appear to be considerable legal and regulatory obstacles
to that.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [Barker, Elaine B.] NIST Publication Announcements

2009-09-30 Thread James A. Donald

 The Haber  Stornetta scheme provides a timestamping
 service that doesn't require terribly much trust,
 since hard to forge widely witnessed events delimit
 particular sets of timestamps. The only issue is
 getting sufficient granularity.

 I don't know if their scheme was patented in Germany.
 It was in the U.S., though I think that at least some
 of the patents expire within the year.

In looking this up, I have noticed a pile of patents
that patent something equivalent or near equivalent to a
patricia hash tree, or elaborately disguised patricia
trees, or something suspiciously similar to a patricia
hash tree, and various special cases of it, and
applications of it, without using the name "patricia
hash tree".

Since they seem reluctant to use the name patricia hash
tree I suspect  that there is already a pile of prior
art, but I could not find any, though I am fairly sure
the method is widely known.  Also, wherever there is a
pile of patents, there is usually a pile of prior art.

Lest even more patents of the patricia hash tree be
published, I would like to describe the method here,
though it surely must be described somewhere else,
probably long ago.

Suppose we have a lot of records, each with a key that
makes collision improbable or impossible.  We assemble
them in a patricia tree, with each node of the patricia
tree containing a hash of its child nodes.
the patricia tree then, like a tiger hash, uniquely
identifies the complete data set.  If we have multiple
copies of the data set, this data structure allows us to
not only ensure that both copies are identical, but if
there are small differences between them, such as
recently added records, it allows us to efficiently find
the differences, and thus efficiently bring the two data
sets into agreement.
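
A minimal sketch of such a structure: a hashed radix (patricia-style)
tree whose root hash identifies the whole record set, with a diff that
descends only into subtrees whose hashes disagree (keys assumed fixed
length and collision free, as above; all names are mine):

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def key_bits(key: bytes):
    return [(byte >> (7 - i)) & 1 for byte in key for i in range(8)]

def build(records: dict, depth: int = 0):
    # Returns (hash, node) for the given {key: value} records.
    if not records:
        return h(b"empty"), None
    if len(records) == 1:
        (k, v), = records.items()
        return h(b"leaf" + k + v), ("leaf", k, v)
    left = {k: v for k, v in records.items() if key_bits(k)[depth] == 0}
    right = {k: v for k, v in records.items() if key_bits(k)[depth] == 1}
    lh, ln = build(left, depth + 1)
    rh, rn = build(right, depth + 1)
    return h(b"node" + lh + rh), ("node", (lh, ln), (rh, rn))

def diff(a, b):
    # Yields the leaf pairs where two trees disagree.
    (ah, an), (bh, bn) = a, b
    if ah == bh:
        return                  # identical subtree: skip it entirely
    if an is None or bn is None or an[0] == "leaf" or bn[0] == "leaf":
        yield an, bn
        return
    yield from diff(an[1], bn[1])
    yield from diff(an[2], bn[2])

mine   = build({b"key1": b"v1", b"key2": b"v2"})
theirs = build({b"key1": b"v1", b"key2": b"v2-amended"})
print(mine[0] == theirs[0])      # False: the root hashes differ
print(list(diff(mine, theirs)))  # pinpoints the one changed record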

It also allows us to prove that a given record was part
of a particular data set at a particular time.

Suppose the high order part of the key identifies the
high order part of the time, followed by the id of the
particular organization holding those records.  The
upper parts of the patricia hash tree are partially
shared, peer to peer, similarly to file sharing with a
tiger hash.  Each participating organization keeps the
nodes that relate to it. The lower parts are not shared
except as needed.

In this case, there will be a small set of top nodes of
the tree that cease to change, because they only rely on
keys earlier than a certain date, and this small and
very slowly growing set of top nodes proves the complete
state of the tree at all earlier dates.

Then each organization can prove to all or any of the
others that it had a particular record, or particular
set of records, at a particular time, to the granularity
of the time that is the high order part of the key.

Where some or all of the data needs to be shared by some
or all of the organizations, organizations can rapidly
and efficiently identify any disagreements, and when
they are in agreement, rapidly and efficiently prove to
themselves, and to everyone else, and record for all
time, that they are in agreement, since a small number
of the topmost nodes of the tree proves the state of the
tree at each and all times that contributed to those
nodes.

The structure serves for attestation and sharing, and
since attestation usually involves sharing, and sharing
attestation, the scope for patenting this structure over
and over again in one disguise or another to be applied
to one task or another that involves sharing and or
attestation is limited only by the boundless imagination
of patent lawyers.  One can also add horizontal and
backwards hash relationships between nodes that serve
little practical purpose other than allowing one to have
a single rapidly changing node attesting instead of
a small set of nodes, and allowing it to be nominally
something other than a patricia hash tree.

Thus, for example, instead of using forty or so nodes to
attest for the state of million organizations over a
billion time periods, one can use a hash of those forty
nodes, and there are no end of different ways one can
hash those forty or so nodes together.  But under that
hash, it is still a patricia hash tree doing the actual
work of gluing the data together.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Bringing Tahoe ideas to HTTP

2009-09-15 Thread James A. Donald

Ivan Krstić wrote:
What you're proposing amounts to a great deal of complex and complicated 
cryptography. If it were implemented tomorrow, it would take years for 
the most serious of implementation errors to get weeded out, and some 
years thereafter for proper interoperability in corner cases. In the 
meantime, mobile device makers would track you down for the express 
purpose of breaking into your house at night to pee in your Cheerios, as 
retaliation for making them explain to their customers why their mobile 
web browsing is either half the speed it used to be, or not as secure as 
on the desktop, with no particularly explicable upside.


The ideas used in Tahoe are useful tools that can be used to solve 
important problems.


It is true that just dumping them on end users and hoping that end users 
will use them correctly to solve important problems will fail.


It is our job to apply these tools, not the end user's job, the hard 
part being user interface architecture, rather than cryptography protocols.


Yurls are one example of an idea for a user interface wrapping
Tahoe-like methods to solve useful problems.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-09-09 Thread James A. Donald

Steven Bellovin wrote:
Several other people made similar suggestions.  They all boil down to 
the same thing, IMO -- assume that the user will recognize something 
distinctive or know to do something special for special sites like 
banks. 


Not if he only does it for special sites like banks, but if something 
special is pretty widely used, he will notice when things are different.


Peter, I'm not sure what you mean by good enough to satisfy security 
geeks vs. good enough for most purposes.  I'm not looking for 
theoretically good enough, for any value of theory; my metric -- as a 
card-carrying security geek -- is precisely good enough for most 
purposes.  A review of user studies of many different distinctive 
markers, from yellow URL bars to green partial-URL bars to special 
pictures to you-name-it shows that users either never notice the 
*absence* of the distinctive feature


I never thought that funny colored url bars for banks would help, and 
ridiculed that suggestion when it was first made, and said it was merely 
an effort to get more money for CAs, and not a serious security proposal.


The fact that obviously stupid and ineffectual methods have failed is 
not evidence that better methods would also fail.


Seems to me that you are making the argument "We have tried everything 
that might increase CA revenues, and none of it has improved user 
security, so obviously user security cannot be improved."


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [tahoe-dev] Bringing Tahoe ideas to HTTP

2009-09-08 Thread James A. Donald

Nicolas Williams wrote:
  One possible problem: streaming [real-time] content.

Brian Warner wrote:
 Yeah, that's a very different problem space. You need
 the low-alacrity stuff from Tahoe, but also you don't
 generally know the full contents in advance. So you're
 talking about a mutable stream rather than an
 immutable file.

Not mutable, just incomplete.

Immutable streaming content needs a tiger hash or a
patricia hash, which can handle the fact that some of
the stream will be lost in transmission, and that one
needs to validate the small part of the stream that one
has already received rather than waiting for the end.

 upgrade bundles are produced by a very strict process,
 and are rigidly immutable [...] For software upgrades,
 it would reduce the attack surface significantly.

But how does one know which immutable file is the one
that has been blessed by proper authority?  Although
Version 3.5.0 is immutable, what makes it Version 3.5.0,
rather than haxx350, is that some authority says so -
the same authority that said that your current version
is 3.4.7 - and it should not even be possible to get a
file named by some different authority as an upgrade.

Of course, one would like a protocol that committed the
authority to say the same thing for everyone, and the
same thing for all time, saying new things in future,
but never being able to retroactively adjust past things
it has said.

In other words, upgrades should be rooted in an append
only capability.

I suppose this could be implemented as a signed dag of
hashes, in which whenever one upgraded, one checked that
the signed dag that validates the upgrade is consistent
with the signed dag that validated the current version.
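
A sketch of that idea: the authority signs the head of a hash chain of
releases, and the client accepts an upgrade only if the offered chain
strictly extends the chain that validated its current version (Ed25519
via the pyca/cryptography package; all names illustrative):

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

authority = Ed25519PrivateKey.generate()
pub = authority.public_key()

def head(releases: list) -> bytes:
    # Hash chain over the releases; the head commits to the whole history.
    acc = b"genesis"
    for r in releases:
        acc = hashlib.sha256(acc + r).digest()
    return acc

releases = [b"3.4.7-hash", b"3.5.0-hash"]
signed_head = authority.sign(head(releases))

def accept_upgrade(current: list, offered: list, sig: bytes) -> bool:
    pub.verify(sig, head(offered))             # raises if the signature is forged
    return offered[: len(current)] == current  # must extend, never rewrite, the past

assert accept_upgrade([b"3.4.7-hash"], releases, signed_head)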

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-09-04 Thread James A. Donald

Steven Bellovin wrote:
This returns us to the previously-unsolved UI problem: how -- with 
today's users, and with something more or less like today's browsers 
since that's what today's users know -- can a spoof-proof password 
prompt be presented?


When the user clicks on a button generated by a particular special kind 
of html tag, perhaps
<loginbutton logintype=SRP 
loginurl=/customers/loginpage.script>login</loginbutton>


A not quite rectangular login form which is not an html page rolls out 
of the url, with a motion like a blind or toilet paper unrolling, and 
partially covers the browser chrome, thus associating the form  with the 
browser and the url, rather than the web page.


The form will be decorated and prominently watermarked in a manner that is 
customizable by the end user, and if the end user does not customize it, 
which he probably will not, a customization will have been randomly 
selected at install time.


A phisher could do a flash animation that looks almost like the form 
rolling out, but the flash animation will not roll out of the url, and 
will not partially cover the browser chrome, and is unlikely to match 
the customization.


If the url is http://exampledomain.com/somedirectory/somepage.html

Then the content of the login form is controlled by script at 
login://exampledomain.com//customers/loginpage.script


The login form will be associated with a public key.  If the user has 
logged in before using this browser, there will be an entry in his 
bookmarks list for the url *and* public key


If the login form is in the browser's bookmark list, the title on the 
login form will be the petname, that is to say, the name under which it 
appears in the bookmark list.


If the login form is *not* in the browser's bookmark list, the title on 
the login form will be "No Previous Login at this site using this 
browser by this user", with the script supplied title and or certificate 
supplied title somewhere else in smaller print.


The loginpage.script will tell the browser what fields and fieldnames to 
request from the user - typically username and password, but this needs 
to be scriptable - for example it could be credit card number, etc.  The 
script will tell the server what database table and what database fields 
to associate these user supplied fields with when the client responds.


Peter Gutmann has, he believes, a much simpler solution.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [tahoe-dev] a crypto puzzle about digital signatures and future compatibility

2009-08-31 Thread James A. Donald

Zooko Wilcox-O'Hearn wrote:

On Wednesday,2009-08-26, at 19:49 , Brian Warner wrote:

Attack B is where Alice uploads a file, Bob gets the filecap and  
downloads it, Carol gets the same filecap and downloads it, and  
Carol desires to see the same file that Bob saw. ... The attackers  
(who may be Alice and/or other parties) get to craft the filecap  
and the shares however they like. The attackers win if Bob and  
Carol accept different documents.


Right, and if we add algorithm agility then this attack is possible  
even if both SHA-2 and SHA-3 are perfectly secure!


Consider this variation of the scenario: Alice generates a filecap  
and gives it to Bob.  Bob uses it to fetch a file, reads the file and  
sends the filecap to Carol along with a note saying that he approves  
this file.  Carol uses the filecap to fetch the file.  The Bob-and- 
Carol team loses if she gets a different file than the one he got.


If Bob and Carol want to be sure they are seeing the same file, they
have to use a capability to an immutable file.

Obviously a capability to an immutable file has to commit the file to a 
particular hash algorithm.


(Using capability in the sense of capabilities as cryptographic data, 
capabilities as sparse addresses in a large address space identifying 
communication channels)


So the leading bits of the capability have to be an algorithm 
identifier.  If Bob's tool does not recognize the algorithm, it fails, 
and he has to upgrade to a tool that recognizes more algorithms.


If the protocol allows multiple hash types, then the hash has to start 
with a number that identifies the algorithm.  Yet we want that number to 
consist of very, very few bits.


This is almost precisely the example problem I discuss in 
http://jim.com/security/prefix_free_number_encoding.html
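
As a generic illustration of prefix-free numbering (Elias-gamma style,
not necessarily the exact scheme described on that page): small, early
assigned algorithm identifiers cost only a few bits, and the code is
self-delimiting, so the rest of the capability can follow immediately:

def encode(n: int) -> str:              # n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary   # length prefix, then value

def decode(bits: str):
    zeros = len(bits) - len(bits.lstrip("0"))
    length = zeros + 1
    return int(bits[zeros:zeros + length], 2), bits[zeros + length:]

assert encode(1) == "1"                 # the first algorithm costs one bit
assert encode(2) == "010"
assert decode(encode(5) + "1101") == (5, "1101")   # self-delimiting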


Now suppose an older algorithm is broken, and Alice wants to show Bob 
one file, and Carol another, and pretend they are the same file.


So she just uses the older algorithm.  Bob and Carol, however, have the 
newer tool, which, if the older algorithm is thoroughly broken, will 
probably pop up a "deprecated algorithm" warning, which Bob and Carol 
will cheerfully click through.


If however, the older algorithm has been broken a good long time, and we 
are well past the transition period, and no one should be using the 
older algorithm any more, except to wrap old format immutable files 
inside new format immutable files, then Bob's tool will fail.  Problem 
solved.


Yes, during the transition period, people can be hosed, especially if 
they see nag messages so often that they click them off, as they 
probably will, but that is true no matter what.  If an algorithm gets 
broken, people can be hurt during the transition.  The point, however, 
is to have a smooth transition, even if security sucks during the 
transition.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 and Git (was Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git)

2009-08-25 Thread James A. Donald



Perry E. Metzger wrote:

Yet another reason why you always should make the crypto algorithms you
use pluggable in any system -- you *will* have to replace them some day.


Ben Laurie wrote:

In order to roll out a new crypto algorithm, you have to roll out new
software. So, why is anything needed for pluggability beyond versioning?


New software has to work with new and old data files and communicate 
with new and old software.


Thus full protocol negotiation has to be built in to everything from the 
beginning - which was the insight behind COM and the cure to DLL hell.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [tahoe-dev] Tahoe-LAFS key management, part 2: Tahoe-LAFS is like encrypted git

2009-08-19 Thread James A. Donald
[*] Linus Torvalds got the idea of a Cryptographic Hash Function  
Directed Acyclic Graph structure from an earlier distributed  
revision control tool named Monotone.
OT trivia: The idea actually predates either monotone or git;  
opencm (http://opencm.org/docs.html) was using a similar technique  
for VCS access control a year or two prior to monotone's first  
release.


Note that I didn't say Monotone invented it.  :-)  Graydon Hoare of  
Monotone got the idea from a friend of his who, as far as we know,  
came up with it independently.  I personally got it from Eric Hughes  
who came up with it independently.  I think OpenCM got it from the  
Xanadu project who came up with it independently.  :-)


Getting back towards topic, the hash function employed by Git is showing 
signs of bitrot, which, given people's desire to introduce malware 
backdoors and legal backdoors into Linux, could well become a problem in 
the very near future.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-18 Thread James A. Donald

James A. Donald jam...@echeque.com writes:

[Incredibly complicated description of web scripting plumbing deleted]


Peter Gutmann wrote:

We seem to be talking about competely different things here.  For a typical
application, say online banking, I connect to my bank at www.bank.com or
whatever, the browser requests my credential information, and the TLS-SRP or
TLS-PSK channel is established. That's all.  There's no web application
framework and PHP and scripting and other stuff at all, in fact I can't even
see how you'd inject this into the process.


I cannot see how you could create a bank web page without a web 
application framework (counting mod-php as a very primitive web 
application framework) and scripting and a database, which scripting and 
database has to know who it is that is logged in - which is indeed a 
great deal of complicated plumbing to ensure that the script knows at 
script execution time, *after* the connection has been made, which user, 
which database primary key, is connected.  The information about which 
user, which database primary key is logged in, has to be passed up 
through one layer after another and from one process to another.  The 
toe bone is connected to foot bone, the foot bone is connected to the 
ankle bone, the ankle bone is connected ... The plumbing really is that 
complicated.



Further, if we do the SRP dance every single page, it is a huge performance
hit, with many additional round trips. One loses about 20 percent of one's
market share for each additional round trip.



You only do it once when the TLS session is set up, it's exactly as for
standard TLS except that instead of PKI-based non-authentication you use
cryptographic mutual authentication.  How do you get an SRP exchange for every
web page?


Because keep-alive usually fails for plumbing reasons, standard TLS 
usually does the PKI-based non-authentication dance every page, 
resulting in additional round trips, resulting in painfully bad 
performance for SSL web sites such as 
https://www.cia.gov/library/publications/the-world-factbook/



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-12 Thread James A. Donald

Thomas Hardjono wrote:
 I'm not sure if the Chrome folks would be prepared to
 ship their browser without any CA certs loaded,

Excessive distrust is inconvenient, excessive trust is
vulnerable.  It is better to remedy flaws by expanding
functionality rather than restricting it.

On the one hand, something like Verisign is very useful
to signify that an entity that calls itself a bank is in
fact regarded as a bank by governments and other major
banks, on the other hand, it is pretty useless for
designating membership of a group to other members of
the group, which is the major function of client side
certificates.

The number of globally important entities is necessarily
small, therefore a global namespace of globally unique
human memorable names, (such as Bank Of America) works
well for them.   The number of entities that have or
need keys is quite large, therefore Zooko's triangle
applies - globally unique human memorable names work
very badly for the vast majority of keyholders,
therefore a business whose job is enforcing global
uniqueness of human memorable names (such as Verisign)
is going to be a pain to deal with, for it is trying to
do something that really cannot be done, therefore in
practice will merely make it sufficiently difficult for
clients that scammers do not bother.

Even for banks, globally unique names are problematic.
A remarkably large number of banks are called something
National Bank, or First National Bank of something.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-12 Thread James A. Donald

James A. Donald jam...@echeque.com writes:
  [In order to implement strong password based
  encryption and authentication] on the server side,
  we need a request object in the script language that
  tells the script that this request comes from an
  entity that established a secure connection using
  shared secrets associated with such and such a
  database record entered in response to such and such
  a web page

Peter Gutmann wrote:
 Ah, that is a good point, you now need the credential
 information present at the TLS level rather than the
 tunneled-protocol level (and a variation of this,
 although I think one that way overcomplicates things
 because it starts diverting into protocol redesign, is
 the channel binding problem (see RFC 5056 and,
 specific to TLS,
 draft-altman-tls-channel-bindings-05.txt)).  On the
 other hand is this really such an insurmountable
obstacle?

Consider what would be involved in building the UI into
the Google browser, and also building the necessary
scripting support into Web2Py on Google App Engine.  It
is not a small job.

 I don't really see why you'd need complex scripting
 interfaces though, just return the shared-secret
 value associated with this ID in response to a
 request from the TLS layer.

This request is issued when the connection is being
established, before the URL is specified.  So it is
impossible to service that request from the script that
generates the web page.   So where are we servicing that
request?  Presumably, need to service it somewhere
within the Web Application Framework, for example within
Mod PHP or Web2Py.

Further, some applications, for example banks and share
registries, typically have several different ID tables
at a single domain, and several different kinds of
shared human memorable secret information associated
with each ID.

And, having established that association, then when the
URL is specified, and the script associated that URL is
finally invoked by the Web Application Framework, then
that script needs to be invoked with the relevant ID, or
better, the script then needs to be provided with a
database cursor pointing at the relevant ID.

Further, if we do the SRP dance every single page, it is
a huge performance hit, with many additional round
trips. One loses about 20 percent of one's market share
for each additional round trip.

So we only want to do the SRP dance on session
establishment, only want to do it once per human
session, once per logon, not once per TLS session.

Which means that the TLS layer has to cache the
transient strong shared secret constructed from the weak
durable human memorable secret for the duration of the
Web Application Framework's logon and logoff and provide
the cached database cursor to the web page script at
every page request during a single logon session, which
requires a higher level of integration between TLS and
the Web Application Framework than either one was
originally designed for.

Which means a significant amount of work integrating
this stuff with any given web application framework.

Further, suppose, as is typical with banks, a given
domain name hosts multiple ID tables each with different
kinds of shared secret information.  In that case, we
can obtain multiple different kinds of SRP logons, each
relevant to certain web pages but not others, each with
different privilege levels, and the framework has to
enforce that, has to provide to each web page
information about the logon type, and ensure that
inappropriate web pages are never invoked at all, but
are 403ed when the user attempts to access a url through
a logon of inappropriate type.

We cannot rely on the server side web page script to 403
itself in response to inappropriate logon type, since
when a new kind of logon was introduced, no one would
ever go back and make sure that all the old web pages
correctly checked the logon type.  If the web page
script contains a line of code that says If such and
such, then do a 403,

then sooner or later someone will delete that code and
say Hey, it still works just fine..

This is starting to sound depressingly like a great deal
of work rewriting lots of complex, bugridden stuff in
web application frameworks that are already designed to
handle logons in a quite different way.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-11 Thread James A. Donald

--
 James A. Donald jam...@echeque.com writes:
 For password-authenticated key agreement such as
 TLS-SRP or TLS-PSK to work, login has to be in the
 chrome.

Peter Gutmann wrote:
 Sure, but that's a relatively tractable UI problem

Indeed.  You know how to solve it, and I know how to
solve it, yet the solution is not out there.

As you say, shared secrets should be entered a form that
implements password-authenticated key agreement such as
TLS-SRP or TLS-PSK, that cannot easily be spoofed, that
is clearly associated with the browser and with a
particular url and web page (you suggest that the form
should roll out of the browser bar with an eye catching
motion and land on top of the web page) and an encrypted
connection should be established by that shared
knowledge, which cannot be established without that
shared knowledge.

This, however, requires both client UI software, and an
api to server side scripts such as PHP, Perl, or Python
(the P in LAMP).  On the server side, we need a request
object in the script language that tells the script that
this request comes from an entity that established a
secure connection using shared secrets associated with
such and such a database record, entered in response to
such and such a web page.  The script generating a page
must be able to associate data with this object that
persists for the duration of the session - an object
that has session scope rather than page scope, scope
longer and broader than that of the thread of execution
that generates the page, but shorter and narrower than
that of the database record containing the shared
secrets; a script accessible object that can only be
associated with one server, one server side process, and
one server side thread at a time.  This is non-trivial
to implement in an environment where servers are
massively multithreaded, and often massively
multiprocess.

 Certificates on the other hand are an apparently
 intractable business, commercial, user education,
 programming, social, and technical problem.  I'd much
 rather try and solve the former than the latter.

What makes certificates such a problem is that there is
someone in the middle issuing the certificate - usually
someone who does not know or trust either of the
entities trying to establish a trust relationship.

While certificates frequently make cryptography
unnecessarily painful and complicated, certificate issue
offers the opportunity to make money out of providing
encryption by being that someone in the middle, hence
the remarkable enthusiasm for this technology, and
stubborn efforts to apply it to cases where its value is
limited, and it is far from being the most convenient,
practical, and straightforward solution.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-09 Thread James A. Donald

Thomas Hardjono wrote:

Having worked at a large CA for a long time (trying to push for client-side 
certs with little luck), here are some thoughts on what Chrome could provide:


There are use cases where a centralized authority is useful.
Client side is not one of them.

Typical usage is "is this client one of our gang?"
Obviously the CA just gets in the way.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Client Certificate UI for Chrome?

2009-08-06 Thread James A. Donald

Ben Laurie wrote:

So, I've heard many complaints over the years about how the UI for
client certificates sucks. Now's your chance to fix that problem -
we're in the process of thinking about new client cert UI for Chrome,
and welcome any input you might have. Obviously fully-baked proposals
are more likely to get attention than vague suggestions.


The fundamental problem with certificates is getting them.

OK, suppose you hit a web site that wants a client side certificate, 
maybe it wants a claimed email address with proof that you can receive 
email at that address, maybe it wants proof you are over 18, maybe it is 
run by the company you work for and wants proof you are an employee and 
wants to know which employee you are.  Maybe it wants evidence that you 
a member in good standing of the committee to slay infidels.


If we suppose your browser *has* such a certificate, and merely needs 
permission from you, the user, to show it, then the problem is 
relatively easy - which however does not stop existing browsers from 
fouling up mightily, perhaps because they are designed around Verisign's 
business model, rather than anything the user might actually want to do.

But the problem is much harder, much much harder, if we suppose you do 
*not* have the certificate.


I will ignore the easy problem, because I am sure that anyone worth 
talking to can figure out the solution, even though existing browser 
writers have failed to do so.


OK. Hard Problem:  You the user have hit a website that wants a 
certificate with certain characteristics, and either you do not have it, 
or you have it somewhere, but your browser does not know you have it, 
perhaps because it is on another browser on another computer.


It appears to me that existing solutions to this problem are designed 
around Verisign's business model, rather than user needs.  If a client 
certificate is to identify you to examplecorp as an employee of 
examplecorp, why does Verisign need to get involved?  It's easier for 
examplecorp to issue its own @#$%^ certificates.


So, assuming you are pretty smart, as I know you to be, and assuming 
your boss is not evil and not in Verisign's pocket, which I do not know 
at all ...


So where *would* you have a certificate?  Where would you keep it, and 
what would it look like?


The kind of things that people are used to storing in a browser are 
bookmarks:  Bookmarks have the convenient property that they implement 
Zooko's triangle: petname, nickname, and guid.  They also have the 
property that if you click on them they take you somewhere.  So a 
certificate should act like some kind of smart bookmark and look to the 
user like a smart bookmark,  which if clicked on should bring you to 
your logged in web page with the authority issuing the certificate. 
Your secret key is something like a a secret link or bookmark that 
automatically logs you in to something like your facebook page, and your 
public key is something like a link or bookmark that enables other 
people to view something like your facebook page.  Maybe it is your 
employee page at examplecorp, which shows any records pertaining to you 
in the company database, some of which, such as contact information, are 
editable by you, but most of which are not.


And the something like your facebook page as seen by others is almost 
the same link as the live authorization that your certificate is still valid.


So how do you *get* a certificate?  Suppose, for example, your 
certificate identifies you to your company, in which case your boss 
probably gave it to you.  What would he give you? Well, obviously, he 
would give you a username and password, or more likely you would create 
an account, a username and password, which he would then authorize.


That username and password would, I expect, enable you to get to that 
logged in web page with the authority issuing the certificate, in this 
case some location on the company web server.  And when you get to that 
web page, you would then get an "install certificate" titled dialog, and 
if you accept, get something that looks and acts like a bookmark, though 
it is in fact a company issued certificate, certifying that you are 
username, where username is also a primary key in the company employee 
database.


But the trouble with this, of course, is that usernames and passwords 
can be phished.


The solution to that is password login and account creation in the 
chrome, not in the web page, implementing password-authenticated key 
agreement, so that the phisher can gain nothing, so long as the user 
attempts to use the chrome login facilities, rather than html web page 
login facilities.


It should have been obvious that logging in on a user interface provided 
by a web page, provided by html code, was entirely insecure - the 
problem of spoofed logins was well known at the time. So what we
needed, from day one, was a secure login that was in the browser chrome, 
not the web page - and no other form of 

Re: Fast MAC algorithms?

2009-08-02 Thread James A. Donald

Joseph Ashwood wrote:

RC-4 is broken when used as intended.

...

If you take these into consideration, can it be used correctly?


James A. Donald:

Hence tricky


Joseph Ashwood wrote:
By the same argument a Vigenère cipher is tricky to use securely, same 
with monoalphabetic and even Caesar. Not that RC4 is anywhere near the 
brokenness of Vigenère, etc, but the same argument can be applied, so 
the argument is flawed.


You cannot use a Vigenère cipher securely. You can use an RC4 cipher 
securely:  To use RC4 securely, discard the first hundred bytes of 
output, and renegotiate the key every gigabyte.
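
A sketch of RC4 with those precautions applied (the drop count here is
the more conservative 3072 bytes sometimes recommended; rekeying after
a gigabyte would be enforced by the caller):

def rc4_keystream(key: bytes, drop: int = 3072):
    s = list(range(256))
    j = 0
    for i in range(256):                         # key scheduling (KSA)
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = count = 0
    while True:                                  # output generation (PRGA)
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        count += 1
        if count > drop:                         # discard the biased early output
            yield s[(s[i] + s[j]) % 256]

def rc4_encrypt(key: bytes, data: bytes) -> bytes:
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)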



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Fast MAC algorithms?

2009-07-26 Thread James A. Donald

From: Nicolas Williams nicolas.willi...@sun.com

For example, many people use arcfour in SSHv2 over AES because arcfour
is faster than AES.


Joseph Ashwood wrote:
I would argue that they use it because they are stupid. ARCFOUR should 
have been retired well over a decade ago, it is weak, it meets no 
reasonable security requirements,


No one can break arcfour used correctly - unfortunately, it is tricky to 
use it correctly.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: 112-bit prime ECDLP solved

2009-07-16 Thread James A. Donald

Tanja Lange wrote:
So with about 1 000 000 USD and a full year you would get 122 bits 
already now and agencies have a bit more budget than this! Furthermore,

the algorithm parallelizes extremely well and can handle a batch of 100
targets at only 10 times the cost. 


No, it cannot handle a batch of a hundred targets at only ten times the 
cost.  It is already parallelized.  A hundred targets is a hundred times 
the cost.


But let us not think small.  Suppose the president says "Break James 
Donald's key.  I don't care how much it costs.  The sky is the limit", 
and they devote the entire US gross national product for a year to 
breaking James Donald's key in a year.


Then they can break a 170 bit key.

But I rather doubt that they will.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: 112-bit prime ECDLP solved

2009-07-14 Thread James A. Donald

Hi all,

We are pleased to announce that we have set a new record for the elliptic
curve discrete logarithm problem (ECDLP) by solving it over a 112-bit
finite field. The previous record was for a 109-bit prime field and
dates back from October 2002.


 See for more details our announcement at 
http://lacal.epfl.ch/page81774.html.


Computing power doubles every 18 months to two years, so the required EC 
length should gain a bit every year or every nine months.


Which suggests that existing deployments should default to 128 bits, 
with 160 bits being overkill.  Of course overkill does not cost much. 
If one shoots someone in the head, it is wise to follow up with a second 
shot through the head at very short range just to be on the safe side.


Year    Breakable keys
2009    112
2010    113
2015    117
2020    121
2025    124

I am assuming a rapid rate of progress, in which case line widths halve 
every four years.


In which case Moore's law breaks in 2033 when we get nanometer line 
widths, for lines will then be molecules - probably carbon nanotubes.


2033    130

Subsequent expansions in computing power will involve breaking up 
Jupiter to build really big computers, and so forth, which will slow 
things down a bit.


So 144 bit EC keys should be good all the way to the singularity and a 
fair way past it.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Warning! New cryptographic modes!

2009-05-21 Thread James A. Donald

Jerry Leichter wrote:
Consider first just updates.  Then you have exactly the same problem as 
for disk encryption:  You want to limit the changes needed in the 
encrypted image to more or less the size of the change to the underlying 
data.  Generally, we assume that the size of the encrypted change for a 
given contiguous range of changed underlying bytes is bounded roughly by 
rounding the size of the changed region up to a multiple of the 
blocksize.  This does reveal a great deal of information, but there 
isn't any good alternative. 


You specified a good alternative:  Encrypted synchronization of a file 
versioning system:


Git runs under SSH.

Suppose the files are represented as the original values of the files, 
plus deltas.  If the originals are encrypted, and the deltas encrypted, 
no information is revealed other than the size of the change.


Git is scriptable, write a script to do the job.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Security through kittens, was Solving password problems

2009-02-25 Thread James A. Donald

John Levine jo...@iecc.com writes:
Clever though this scheme [kittens] is, man-in-the-middle
attacks make it no better than a plain SSL login screen.

Peter Gutmann wrote:
You don't even need a MITM, just replace the site
image on your phishing site with either a broken-image
picture or a message that your award-winning
site-image software is being upgraded and will be back
soon and it's rendered totally ineffective.

Assume we have this great process, perhaps
password-authenticated key agreement, perhaps kitten
based, that guarantees we are phish proof if the user
actually uses it.

How do we make the workflow and user interface so that
if the user is asked to bypass our great process, he
hears alarm bells?

When it comes to workflows, the WoW interface seems to
work quite well.

WoW accounts control WoW gold, typically $50 to $100
worth, so WoW accounts are a popular phish target:

An investigation of your World of Warcraft
account has found strong evidence that the
account in question is being sold or traded. As
you may not be aware of, this conflicts with
Blizzard's EULA under section 4 Paragraph B
which can be found here:
WoW - Legal - End User License
Agreement

and Section 8 of the Terms of Use found here:
WoW - Legal - Terms of Use
The investigation will be continued by Blizzard
administration to determine the action to be
taken against your account. If your account is
found violating the EULA and Terms of Use, your
account can, and will be suspended/closed/or
terminated.

In order to keep this from occurring, you should
immediately verify that you are the original
owner of the account.

To verify your identity please visit the
following webpage:
https://www.worldofwarcraft.com/login/login?service=https%3A%2F%2Fwww...
Only Account Administration will be able to
assist with account retrieval issues. Thank you
for your time and attention to this matter, and
your continued interest in World of Warcraft.

This phish used a flaw in the official WoW website to
redirect an https login with WoW to an https login with
the scammer site.

The interesting thing is that it and similar phishes do
not seem to have been all that successful - few people
seemed to notice at all, the general reaction being to
simply hit the spam key reflexively, much as people
click away popup warnings reflexively, and are
unaware that there ever was a popup.

Most accounts are lost through keyloggers rather than
phishing - the attacker has to take over the end user's
computer completely.

Why the attack resistance?  I conjecture that:

1.  User normally enters his password in an environment
 that looks nothing like a web page, so being asked
 to do so in a web page automatically makes him
 suspicious - it is a deviation from normal workflow

2.  Blizzard never communicates by email, so receiving
 email from blizzard automatically makes the user
 suspicious.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: full-disk subversion standards released

2009-02-13 Thread James A. Donald

Ben Laurie wrote:

If I have data on my server that I would like to stay on my server and
not get leaked to some third party, then this is exactly the same
situation as DRMed content on an end user's machine, is it not?


No.

You want to keep control of the information on your server.  DRM wants 
to deny the end user control of the information on the end user's machine.



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Why the poor uptake of encrypted email?

2008-12-18 Thread James A. Donald

Nicolas Williams wrote:
 Providing a suitable e-mail security solution for the
 masses strikes me as more important than providing
 anonymity to the few people who want or need it.  Not
 that you can't have both, unless you want everyone to
 use PGP or S/MIME as a way to hide anonymized traffic
 from non-anonymized traffic.

If email goes away - as I hope and expect it will - we
will need a new store and forward solution to support
anonymity.

A store and forward system is a system without end to
end real time round trips.  Obviously end to end real
time round trips prevent anonymity.

A system built on top of a best effort unreliable
messaging system requires some round tripping, which
does not make anonymity impossible, but does make it
tricky.  Email's architecture is very nice for
supporting anonymity.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Why the poor uptake of encrypted email?

2008-12-18 Thread James A. Donald

Peter Gutmann wrote:
 ... to a statistically irrelevant bunch of geeks.
 Watch Skype deploy a not- terribly-anonymous (to the
 people running the Skype servers) communications
 system.

Actually that is pretty anonymous.  Although I am sure
that Skype would play ball with any bunch of goons that
put forward a plausible justification, or threatened to
rip their fingernails off, most government agencies find
it difficult to deal with anyone that they cannot
casually have thrown in jail - dealing with equals is
not part of their mindset.  So if your threat model does
not include the FBI and the CIA, chances are that the
people who are threatening you will lack the
organization and mindset to get Skype's cooperation.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: CPRNGs are still an issue.

2008-12-11 Thread James A. Donald

Jack Lloyd wrote:
 I think the situation is even worse outside of the
 major projects (the OS kernels crypto implementations
 and the main crypto libraries). I think outside of
 those, nobody is even really looking. For instance -

 This afternoon I took a look at a C++ library called
 JUCE which offers (among a pile of other things) RSA
 and Blowfish. However it turns out that all of the RSA
 keys are generated with an LCRNG (lrand48, basically)
 seeded with the time in milliseconds.
 
http://www.randombit.net/bitbashing/security/juce_rng_vulnerability.html


If one uses a higher resolution counter - sub
microsecond - and times multiple disk accesses, one gets
true physical randomness, since disk access times are
affected by turbulence, which is physically truly
random.

In Crypto Kong I added entropy at various times during
program initialization from the 64 bit performance
counter.  Unfortunately the 64 bit performance counter
is not guaranteed to be present, so I also obtained
entropy from a wide variety of other sources - including
the dreaded millisecond counter that has caused so many
security holes.
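
To make this concrete, here is a minimal Python sketch - the names
and parameters are illustrative, nothing here is taken from Crypto
Kong - that folds the jitter of repeated small reads, timed with a
nanosecond counter, into a hash pool.  Note that OS caching means
much of the jitter is not platter physics, so the output belongs in
a pool mixed with other sources, never used as the sole seed:

    import hashlib
    import time

    def disk_timing_entropy(path, samples=64):
        # Time repeated 4KB reads with a sub-microsecond counter
        # and fold the timing jitter into a SHA-256 pool.
        pool = hashlib.sha256()
        with open(path, "rb") as f:
            for _ in range(samples):
                t0 = time.perf_counter_ns()
                f.seek(0)
                f.read(4096)
                t1 = time.perf_counter_ns()
                pool.update((t1 - t0).to_bytes(8, "little"))
        return pool.digest()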

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: Why the poor uptake of encrypted email? [Was: Re: Secrets and cell phones.]

2008-12-11 Thread James A. Donald

--
  We discovered, however, that most people do not want
  to manage their own secrets 

StealthMonger wrote:
 This may help to explain the poor uptake of encrypted
 email.

There is very good uptake of skype and ssh, because
those impose no or very little additional cost on the
end user. Secret management is almost furtively sneaked
in on the back of other tasks.

 It would be useful to know exactly what has been
 discovered.  Can you provide references?

It is informal knowledge.

A field has references when it is a science, or
attempting to become a science, or pretending to become
a science.  Security is not yet even an art.

Cryptography is an art that dubiously pretends to
science, but the weak point of course is interaction of
humans with the cryptography, in which area we have not
even the pretense of art.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


e-gold and e-go1d

2008-11-29 Thread James A. Donald

To implement Zooko's triangle, one has to detect names
that may look alike, for example e-gold and e-go1d

This is a lot of code.  Has someone already written such
a collision detector that I could swipe?

The algorithm is to map all lookalike glyphs to
canonical glyphs - thus l and 1 are mapped to l, O and 0
are mapped to O, lower case o and the Greek omicron are
mapped to lower case o, and so on and so forth.  For
each pair of strings, one then does a character by
character diff, and pairs with suspiciously short diffs
might be confused by end users.

The program then asks the user for a qualification to
distinguish one or both of the names, default being as
first and second, or for the user to deprecate one of
the entities as scam or spam, or for the user to say he
does not care if new entries have the same or similar
name as this particular existing entry.
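
A minimal sketch in Python of such a detector, assuming a hand-rolled
confusables table (a real detector would want something like
Unicode's confusables data) and plain Levenshtein distance for the
character by character diff:

    # Map lookalike glyphs to canonical glyphs, after lowercasing.
    CANON = str.maketrans({"1": "l", "0": "o",
                           "\u03bf": "o",   # Greek omicron
                           "5": "s", "8": "b"})

    def canonical(name):
        return name.lower().translate(CANON)

    def edit_distance(a, b):
        # Ordinary Levenshtein distance, O(len(a) * len(b)).
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def may_be_confused(a, b, threshold=1):
        # A suspiciously short diff after canonicalization.
        return edit_distance(canonical(a), canonical(b)) <= threshold

    # may_be_confused("e-gold", "e-go1d") -> True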

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-18 Thread James A. Donald

Ray Dillinger wrote:
 Okay I'm going to summarize this protocol as I
 understand it.

 I'm filling in some operational details that aren't in
 the paper by supplementing what you wrote with what my
 own design sense tells me are critical missing bits
 or obvious methodologies for use.

There are a number of significantly different ways this
could be implemented.  I have been working on my own
version based on Patricia hash trees (not yet ready to
post; will post in a week or so), with the consensus
generation being a generalization of file sharing using
Merkle hash trees. Patricia hash trees where the high
order part of the Patricia key represents the high order
part of the time can be used to share data that evolves
in time.  The algorithm, if implemented by honest
correctly functioning peers, regularly generates
consensus hashes of the recent past - thereby addressing
the problem I have been complaining about - that we have
a mechanism to protect against consensus distortion by
dishonest or malfunctioning peers, which is useless
absent a definition of consensus generation by honest
and correctly functioning peers.

 First, people spend computer power creating a pool of
 coins to use as money.  Each coin is a proof-of-work
 meeting whatever criteria were in effect for money at
 the time it was created.  The time of creation (and
 therefore the criteria) is checkable later because
 people can see the emergence of this particular coin
 in the transaction chain and track it through all its
 consensus view spends.  (more later on coin creation
 tied to adding a link).

 When a coin is spent, the buyer and seller digitally
 sign a (blinded) transaction record, and broadcast it
 to a bunch of nodes whose purpose is keeping track of
 consensus regarding coin ownership.

I don't think your blinding works.

If there is a public record of who owns what coin, we
have to generate a  public diff on changes in that
record, so the record will show that a coin belonged to
X, and soon thereafter belonged to Y.  I don't think
blinding can be made to work.  We can blind the
transaction details easily enough, by only making hashes
of the details public, (X paid Y for
49vR7xmwYcKXt9zwPJ943h9bHKC2pG68m) but that X paid Y is
going to be fairly obvious.

If, when Joe spends a coin to me, I have to have the
ability to ask whether Joe rightfully owns this coin, then
it is difficult to see how this can be implemented in a
distributed protocol without giving people the ability
to trawl through data detecting that Joe paid me.

To maintain a consensus on who owns what coins, who owns
what coins has to be public.

We can build a privacy layer on top of this - account
money and chaumian money based on bitgold coins, much as
the pre 1915 US banking system layered account money and
bank notes on top of gold coins, and indeed we have to
build a layer on top to bring the transaction cost down
to the level that supports agents performing micro
transactions, as needed for bandwidth control, file
sharing, and charging non white listed people to send us
communications.

So the entities on the public record are entities
functioning like pre 1915 banks - let us call them
binks, for post 1934 banks no longer function like that.

 But if they recieve a _longer_ chain while working,
 they immediately check all the transactions in the new
 links to make sure it contains no double spends and
 that the work factors of all new links are
 appropriate.

I am troubled that this involves frequent
retransmissions of data that is already mostly known.
Consensus and widely distributed beliefs about bitgold
ownership already involves significant cost.  Further,
each transmission of data is subject to data loss, which
can result in thrashing, with the risk that the
generation of consensus may slow below the rate of new
transactions.  We already have problems getting the cost
down to levels that support micro transactions by
software agents, which is the big unserved market -
bandwidth control, file sharing, and charging non white
listed people to send us communications.

To work as a useful project, it has to be as efficient
as it can be - hence my plan to use a Patricia hash
tree, because it identifies and locates small
discrepancies between peers that are mostly in agreement
already, without their needing to transmit their
complete data.
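
A toy sketch in Python of the discrepancy-locating idea, assuming
for simplicity that both peers build a binary hash tree over sorted
lists of the same length (a real Patricia tree keys on bit prefixes
instead, so peers with different contents still produce comparable
shapes):

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    class Node:
        # Binary hash tree over a sorted, non-empty list of
        # transaction ids, each id a bytes object.
        def __init__(self, items):
            if len(items) == 1:
                self.left = self.right = None
                self.hash = h(items[0])
            else:
                mid = len(items) // 2
                self.left = Node(items[:mid])
                self.right = Node(items[mid:])
                self.hash = h(self.left.hash + self.right.hash)

    def diff(a, b, out):
        # Descend only into subtrees whose hashes differ, so peers
        # that mostly agree locate each discrepancy by exchanging
        # O(log n) hashes rather than their complete data.
        if a.hash == b.hash:
            return
        if a.left is None or b.left is None:
            out.append((a.hash, b.hash))
        else:
            diff(a.left, b.left, out)
            diff(a.right, b.right, out)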

We also want to avoid very long hash chains that have to
be frequently checked in order to validate things.  Any
time a hash chain can potentially become enormously long
over time, we need to ensure that no one ever has to
rewalk the full length.  Chains that need to be
re-walked can only be permitted to grow as the log of
the total number of transactions - if they grow as the
log of the transactions in any one time period plus the
total number of time periods, we have a problem.

 Biggest Technical Problem:

 Is there a mechanism to make sure that the chain
 does not consist solely of links added by just the 3
 or 4 fastest 

Re: Bitcoin P2P e-cash paper

2008-11-18 Thread James A. Donald

Nicolas Williams wrote:
 How do identities help?  It's supposed to be anonymous
 cash, right?

Actually no.  It is however supposed to be pseudonymous,
so dinging someone's reputation still does not help
much.

 And say you identify a double spender after the fact,
 then what?  Perhaps you're looking at a disposable ID.
 Or perhaps you can't chase them down.

 Double spend detection needs to be real-time or near
 real-time.

Near real time means we have to use UDP or equivalent,
rather than TCP or equivalent, and we have to establish
an approximate consensus, not necessarily the final
consensus, not necessarily exact agreement, but close to
it, in a reasonably small number of round trips.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-17 Thread James A. Donald

Satoshi Nakamoto wrote:
 Fortunately, it's only necessary to keep a
 pending-transaction pool for the current best branch.

This requires that we know, that is to say an honest
well behaved peer whose communications and data storage
is working well knows, what the current best branch is -
but of course, the problem is that we are trying to
discover, trying to converge upon, a best branch, which
is not easy at the best of times, and becomes harder
when another peer is lying about its connectivity and
capabilities, and yet another peer has just had a major
disk drive failure obfuscated by a software crash, and
the international fibers connecting yet a third peer
have been attacked by terrorists.

  When a new block arrives for the best branch,
  ConnectBlock removes the block's transactions from
  the pending-tx pool.  If a different branch becomes
  longer

Which presupposes the branches exist, that they are
fully specified and complete.  If they exist as complete
works, rather than works in progress, then the problem
is already solved, for the problem is making progress.

 Broadcasts will probably be almost completely
 reliable.

There is a trade off between timeliness and reliability.
One can make a broadcast arbitrarily reliable if time is
of no consequence.  However, when one is talking of
distributed data, time is always of consequence, because
it is all about synchronization (that peers need to have
corresponding views at corresponding times), so when one
does distributed data processing, broadcasts are always
highly unreliable.  Attempts to ensure that each
message arrives at least once result in increased timing
variation. Thus one has to make a protocol that is
either UDP or somewhat UDP like, in that messages are
small, failure of messages to arrive is common, messages
can arrive in different order to the order in which they
were sent, and the same message may arrive multiple
times.  Either we have UDP, or we need to accommodate
the same problems as UDP has on top of TCP connections.

Rather than assuming that each message arrives at least
once, we have to make a mechanism such that the
information arrives even though conveyed by messages
that frequently fail to arrive.
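
One way to get that property - a sketch of the flavor of mechanism,
not a concrete proposal - is to make the shared state a set of facts
gossiped at random, so that loss, duplication, and reordering of
messages are all harmless, because set union is idempotent and order
independent:

    import random

    class GossipPeer:
        # State is a set of facts (say, transaction ids).  A message
        # may be lost, duplicated, or delivered out of order without
        # corrupting the state.
        def __init__(self):
            self.known = set()

        def receive(self, facts):
            self.known |= set(facts)

        def gossip_to(self, other, loss=0.3):
            # Unreliable send: sometimes the datagram just vanishes.
            sample = random.sample(sorted(self.known),
                                   min(8, len(self.known)))
            if random.random() > loss:
                other.receive(sample)

    # Repeated gossip_to() rounds between peers drive their `known`
    # sets together despite lost and duplicated datagrams.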

 TCP transmissions are rarely ever dropped these days

People always load connections near maximum.  When a
connection is near maximum, TCP connections suffer
frequent unreasonably long delays, and connections
simply fail a lot - your favorite web cartoon somehow
shows it is loading forever, and you try again, or it
comes up with a little x in place of a picture, and you
try again.

Further, very long connections - for example, ftp
downloads of huge files - seldom complete. If you try to
ftp a movie, you are unlikely to get anywhere unless
both client and server have a resume mechanism so that
they can talk about partially downloaded files.

UDP connections, for example Skype video calls, also
suffer frequent picture freezes, loss of quality, and so
forth, and have to have mechanisms to keep going
regardless.

 It's very attractive to the libertarian viewpoint if
 we can explain it properly.  I'm better with code than
 with words though.

No, it is very attractive to the libertarian if we can
design a mechanism that will scale to the point of
providing the benefits of rapidly irreversible payment,
immune to political interference, over the internet,
to very large numbers of people. You have an outline
and proposal for such a design, which is a big step
forward, but the devil is in the little details.

I really should provide a fleshed out version of your
proposal, rather than nagging you to fill out the blind
spots.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-13 Thread James A. Donald

Satoshi Nakamoto wrote:
 When there are multiple double-spent versions of the
 same transaction, one and only one will become valid.

That is not the question I am asking.

It is not trust that worries me, it is how it is
possible to have a globally shared view even if
everyone is well behaved.

The process for arriving at a globally shared view of
who owns what bitgold coins is insufficiently specified.
Once specified, then we can start considering whether
everyone has incentives to behave correctly.

It is not sufficient that everyone knows X.  We also
need everyone to know that everyone knows X, and that
everyone knows that everyone knows that everyone knows X
- which, as in the Byzantine Generals problem, is the
classic hard problem of distributed data processing.

This problem becomes harder when X is quite possibly a
very large amount of data - agreement on who was the
owner of every bitgold coin at such and such a time.

And then on top of that we need everyone to have a
motive to behave in such a fashion that agreement
arises.  I cannot see that they have motive when I do
not know the behavior to be motivated.

You keep repeating your analysis of the system under
attack.  We cannot say how the system will behave under
attack until we know how the system is supposed to
behave when not under attack.

If there are a lot of transactions, it is hard to
efficiently discover the discrepancies between one
node's view and another node's view, and because new
transactions are always arriving, no two nodes will ever
have the same view, even if all nodes are honest, and
all reported transactions are correct and true single
spends.

We should be able to accomplish a system where two nodes
are likely to come to agreement as to who owned what
bitgold coins at some very recent past time, but it is
not simple to do so.

If one node constructs a hash that represents its
knowledge of who owned what bitgold coins at a
particular time, and another node wants to check that
hash, it is not simple to do it in such a way that
agreement is likely, and disagreement between honest
well behaved nodes is efficiently detected and
efficiently resolved.

And if we had a specification of how agreement is
generated, it is not obvious why the second node has
incentive to check that hash.

The system has to work in such a way that nodes can
easily and cheaply change their opinion about recent
transactions, so as to reach consensus, but in order to
provide finality and irreversibility, once consensus has
been reached, and then new stuff has been piled on top of
old consensus, in particular new bitgold has been piled
on top of old consensus, it then becomes extremely
difficult to go back and change what was decided.

Saying that is how it works, does not give us a method
to make it work that way.

 The receiver of a payment must wait an hour or so
 before believing that it's valid.  The network will
 resolve any possible double-spend races by then.

You keep discussing attacks.  I find it hard to think
about response to attack when it is not clear to me what
normal behavior is in the case of good conduct by each
and every party.

Distributed databases are *hard* even when all the
databases perfectly follow the will of a single owner.
Messages get lost, links drop, syncrhonization delays
become abnormal, and entire machines go up in flames,
and the network as a whole has to take all this in its
stride.

Figuring out how to do this is hard, even in the
complete absence of attacks.  Then when we have figured
out how to handle all this, then come attacks.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-10 Thread James A. Donald

--
 James A. Donald wrote:
 OK, suppose one node incorporates a bunch of
 transactions in its proof of work, all of them honest
 legitimate single spends and another node
 incorporates a different bunch of transactions in its
 proof of work, all of them equally honest legitimate
 single spends, and both proofs are generated at about
 the same time.

 What happens then?

Satoshi Nakamoto wrote:
 They both broadcast their blocks.  All nodes receive
 them and keep both, but only work on the one they
 received first.  We'll suppose exactly half received
 one first, half the other.

 In a short time, all the transactions will finish
 propagating so that everyone has the full set.  The
 nodes working on each side will be trying to add the
 transactions that are missing from their side.  When
 the next proof-of-work is found, whichever previous
 block that node was working on, that branch becomes
 longer and the tie is broken.  Whichever side it is,
 the new block will contain the other half of the
 transactions, so in either case, the branch will
 contain all transactions.  Even in the unlikely event
 that a split happened twice in a row, both sides of
 the second split would contain the full set of
 transactions anyway.

 It's not a problem if transactions have to wait one or
 a few extra cycles to get into a block.

So what happened to the coin that lost the race?

On the one hand, we want people who make coins to be
motivated to keep and record all transactions, and
obtain an up to date record of all transactions in a
timely manner.  On the other hand, it is a bit harsh if
the guy who came second is likely to lose his coin.

Further, your description of events implies restrictions
on timing and coin generation - that the entire network
generates coins slowly compared to the time required for
news of a new coin to flood the network, otherwise the
chains diverge more and more, and no one ever knows
which chain is the winner.

You need to make these restrictions explicit, for
network flood time may well be quite slow.

Which implies that the new coin rate is slower.

We want spenders to have certainty that their
transaction is valid at the time it takes a spend to
flood the network, not at the time it takes for branch
races to be resolved.

At any given time, for example at 1 040 689 138 seconds,
we can look back at the past and say:

At 1 040 688 737 seconds, node 5 was *it*, and
he incorporated all the coins he had discovered
into the chain, and all the new transactions he
knew about on top of the previous link

At 1 040 688 792 seconds, node 2 was *it*, and
he incorporated all the coins he had discovered
into the chain, and all the new transactions he
knew about into the chain on top of node 5's
link.

At 1 040 688 845 seconds, node 7 was *it*, and
he incorporated all the coins he had discovered
into the chain, and all the new transactions he
knew about into the chain on top of node 2's
link.

But no one can know who is *it* right now.

So how does one know when to reveal one's coins?  One
solution is that one does not.  One incorporates a hash
of the coin secret whenever one thinks one might be
*it*, and after that hash is securely in the chain,
after one knows that one was *it* at the time, one can
then safely spend the coin that one has found, revealing
the secret.
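
A minimal commit-reveal sketch in Python, assuming the coin secret
is just a random string - the point being only that publishing the
hash commits one to the secret without revealing it:

    import hashlib
    import os

    def commit():
        # Publish the commitment now; keep the secret private.
        secret = os.urandom(32)
        return secret, hashlib.sha256(secret).hexdigest()

    def reveal_checks_out(commitment, secret):
        # Anyone can verify a revealed secret against the commitment
        # that was buried in the chain earlier.
        return hashlib.sha256(secret).hexdigest() == commitment

    secret, commitment = commit()
    # ... wait until the commitment is securely in the chain ...
    assert reveal_checks_out(commitment, secret)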

This solution takes care of the coin revelation problem,
but does not solve the spend recording problem.  If one
node is ignoring all spends that it does not care about,
it suffers no adverse consequences.  We need a protocol
in which your prospects of becoming *it* also depend on
being seen by other nodes as having a reasonably up to
date and complete list of spends - which this protocol
is not, and your protocol is not either.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-09 Thread James A. Donald

--
Satoshi Nakamoto wrote:
 The proof-of-work chain is the solution to the
 synchronisation problem, and to knowing what the
 globally shared view is without having to trust
 anyone.

 A transaction will quickly propagate throughout the
 network, so if two versions of the same transaction
 were reported at close to the same time, the one with
 the head start would have a big advantage in reaching
 many more nodes first.  Nodes will only accept the
 first one they see, refusing the second one to arrive,
 so the earlier transaction would have many more nodes
 working on incorporating it into the next
 proof-of-work.  In effect, each node votes for its
 viewpoint of which transaction it saw first by
 including it in its proof-of-work effort.

OK, suppose one node incorporates a bunch of
transactions in its proof of work, all of them honest
legitimate single spends and another node incorporates a
slightly different bunch of transactions in its proof of
work, all of them equally honest legitimate single
spends, and both proofs are generated at about the same
time.

What happens then?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


voting by m of n digital signature?

2008-11-09 Thread James A. Donald

Is there a way of constructing a digital signature so
that the signature proves that at least m possessors of
secret keys corresponding to n public keys signed, for n
a dozen or less, without revealing how many more than m,
or which ones signed?

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-09 Thread James A. Donald

Satoshi Nakamoto wrote:
 Increasing hardware speed is handled: To compensate
 for increasing hardware speed and varying interest in
 running nodes over time, the proof-of-work difficulty
 is determined by a moving average targeting an average
 number of blocks per hour. If they're generated too
 fast, the difficulty increases.

This does not work - your proposal involves
complications I do not think you have thought through.

Furthermore, it cannot be made to work, as in the
proposed system the work of tracking who owns what coins
is paid for by seigniorage, which requires inflation.

This is not an intolerable flaw - predictable inflation
is less objectionable than inflation that gets jiggered
around from time to time to transfer wealth from one
voting bloc to another.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-09 Thread James A. Donald

Satoshi Nakamoto wrote:
 The bandwidth might not be as prohibitive as you
 think.  A typical transaction would be about 400 bytes
 (ECC is nicely compact).  Each transaction has to be
 broadcast twice, so lets say 1KB per transaction.
 Visa processed 37 billion transactions in FY2008, or
 an average of 100 million transactions per day.  That
 many transactions would take 100GB of bandwidth, or
 the size of 12 DVD or 2 HD quality movies, or about
 $18 worth of bandwidth at current prices.
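
The quoted arithmetic itself is easy to check:

    # Sanity check of the quoted bandwidth estimate.
    tx_per_day = 100 * 10**6    # Visa-scale transaction volume
    bytes_per_tx = 1000         # ~400 byte transaction, broadcast twice
    total = tx_per_day * bytes_per_tx
    print(total / 10**9, "GB per day")   # -> 100.0 GB per day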

The trouble is, you are comparing with the Bankcard
network.

But a new currency cannot compete directly with an old,
because network effects favor the old.

You have to go where Bankcard does not go.

At present, file sharing works by barter for bits. This,
however, requires the double coincidence of wants. People
only upload files they are downloading, and once the
download is complete, stop seeding. So only active
files, files that quite a lot of people want at the same
time, are available.

File sharing requires extremely cheap transactions,
several transactions per second per client, day in and
day out, with monthly transaction costs being very small
per client, so to support file sharing on bitcoins, we
will need a layer of account money on top of the
bitcoins, supporting transactions of a hundred
thousandth the size of the smallest coin, and to support
anonymity, chaumian money on top of the account money.

Let us call a bitcoin bank a bink.  The bitcoins stand
in the same relation to account money as gold stood in
the days of the gold standard.  The binks, not trusting
each other to be liquid when liquidity is most needed,
settle out any net discrepancies with each other by
moving bitcoins around once every hundred thousand
seconds or so, so bitcoins do not change owners that
often.  Most transactions cancel out at the account
level.  The binks demand bitcoins of each other only
because they don't want to hold account money for too
long. So a relatively small amount of bitcoins
infrequently transacted can support a somewhat larger
amount of account money frequently transacted.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-05 Thread James A. Donald

James A. Donald:
  To detect and reject a double spending event in a
  timely manner, one must have most past transactions
  of the coins in the transaction, which, naively
  implemented, requires each peer to have most past
  transactions, or most past transactions that
  occurred recently. If hundreds of millions of people
  are doing transactions, that is a lot of bandwidth -
  each must know all, or a substantial part thereof.

Satoshi Nakamoto wrote:
 Long before the network gets anywhere near as large as
 that, it would be Safe for users to use Simplified
 Payment Verification (section 8) to check for double
 spending, which only requires having the chain of
 block headers,
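
The quoted scheme leans on the fact that block headers form a hash
chain that can be verified without the underlying transactions.  A
stripped-down sketch, assuming a header is nothing but the
predecessor's hash followed by an opaque payload (real headers carry
difficulty targets, Merkle roots, and more):

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def make_header(prev_hash, payload):
        return prev_hash + payload

    def chain_is_valid(headers):
        # Each header must commit to the hash of its predecessor.
        return all(cur[:32] == h(prev)
                   for prev, cur in zip(headers, headers[1:]))

    genesis = make_header(b"\x00" * 32, b"genesis")
    block_1 = make_header(h(genesis), b"block 1")
    block_2 = make_header(h(block_1), b"block 2")
    assert chain_is_valid([genesis, block_1, block_2])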

If I understand Simplified Payment Verification
correctly:

New coin issuers need to store all coins and all recent
coin transfers.

There are many new coin issuers, as many as want to be
issuers, but far more coin users.

Ordinary entities merely transfer coins.  To see if a
coin transfer is OK, they report it to one or more new
coin issuers and see if the new coin issuer accepts it.
New coin issuers check transfers of old coins so that
their new coins have valid form, and they report the
outcome of this check so that people will report their
transfers to the new coin issuer.

If someone double spends a coin, and one expenditure is
reported to one new coin issuer, and the other
simultaneously reported to another new coin issuer, then
both issuers have to swiftly agree on a unique sequence
order of payments.  This, however, is a non trivial
problem of a massively distributed massive database, a
notoriously tricky problem, for which there are at
present no peer to peer solutions.  Obviously it is a
solvable problem,
people solve it all the time, but not an easy problem.
People fail to solve it rather more frequently.

But let us suppose that the coin issue network is
dominated by a small number of issuers, as seems likely.

If a small number of entities are issuing new coins,
this is more resistant to state attack than with a
single issuer, but the government regularly attacks
financial networks, with the financial collapse ensuing
from the most recent attack still under way as I write
this.

Government sponsored enterprises enter the business, in
due course bad behavior is made mandatory, and the evil
financial network is bigger than the honest financial
network, with the result that even though everyone knows
what is happening, people continue to use the paper
issued by the evil financial network, because of network
effects - the big, main issuers, are the issuers you use
if you want to do business.

Then knowledgeable people complain that the evil
financial network is heading for disaster, that the
government sponsored enterprises are about to cause a
collapse of the total financial system, as Wallison
and Alan Greenspan complained in 2005, the government
debates shrinking the evil government sponsored
enterprises, as with S. 190 [109th]: Federal Housing
Enterprise Regulatory Reform Act of 2005, but they find
easy money too seductive, and S. 190 goes down in flames
before a horde of political activists chanting that easy
money is sound, and opposing it is racist, nazi,
ignorant, and generally hateful, the recent S. 190
debate on limiting portfolios (bond issue supporting dud
mortgages) by government sponsored enterprises being a
perfect reprise of the debates on limiting the issue of
new assignats in the 1790s.

The big and easy government attacks on money target a
single central money issuer, as with the first of the
modern political attacks, the French Assignat of 1792,
but in the late nineteenth century political attacks on
financial networks began, as for example the Federal
Reserve Act of 1913, the goal always being to wind up
the network into a single too big to fail entity, and
they have been getting progressively bigger, more
serious, and more disastrous, as with the most recent
one.  Each attack is hugely successful, and after the
cataclysm that the attack causes the attackers are
hailed as saviors of the poor, the oppressed, and the
nation generally, and the blame for the bad
consequences is dumped elsewhere, usually on Jews,
greedy bankers, speculators, etc, because such attacks
are difficult for ordinary people to understand.  I have
trouble understanding your proposal - ordinary users
will be easily bamboozled by a government sponsored
security update.  Further, when the crisis hits, to
disagree with the line, to doubt that the regulators are
right and that the problem is the evil speculators, becomes
political suicide, as it did in America in 2007,
sometimes physical suicide, as in Weimar Germany.

Still, it is better, and more resistant to attack by
government sponsored enterprises, than anything I have
seen so far.


Secrets and cell phones.

2008-11-05 Thread James A. Donald
A sim card contains a shared symmetric secret that is known to the 
network operator and to rather too many people on the operator's staff, 
and which could be easily discovered by the phone holder - but which is 
very secure against everyone else.


This means that cell phones provide authentication that is secure 
against everyone except the network operator, which is close to what we 
need for financial transactions.  The network operator maps this 
narrowly shared secret to a phone number. The phone number, which once 
upon a time directly controlled equipment that makes connections, is now 
a database key to the secret.


There are now send-money-to-and-from-phone-number systems in Canada 
http://digitaldebateblogs.typepad.com/digital_identity/2008/10/sos-sms.html, 
in South Africa, and in various third world countries with collapsed 
banking systems.


At present, each of these systems sits in its own narrow little silo - 
you cannot send money from a Canadian phone number directly to a South 
African phone number - and, despite being considerably more secure than 
computer sign-on to your bank, these systems are limited to small amounts of money, 
probably to appease the banking cartel and the money laundering controls.


Skype originally planned to introduce such a system, which would have 
been a world wide system, skype id to skype id, but backed off, perhaps 
because of possible regulatory reprisals, perhaps because computers are 
insufficiently secure.  If you click on the spot in the UI that would 
have connected you to Skype's offering, you instead get an ad for paypal.


Of course, the old cypherpunk dream is a system with end to end 
encryption, with individuals having the choice of holding their own 
secrets, rather than these secrets being managed by some not very 
trusted authority, and with these secrets enabling transfer of money, in 
the form of a yurl representing a sum of money, from one yurl 
representing an id to another yurl representing an id.


We discovered, however, that most people do not want to manage their own 
secrets, and that today's operating systems are not a safe place on 
which to store valuable secrets.


We know in principle how to make operating systems safe enough 
http://jim.com/security/safe_operating_system.html, but for the moment 
readily transferable money is coming in through systems with centralized 
access to keys, and there is no other way to do it.


If the mapping of phone numbers to true names is sufficiently weak (few 
of my phone numbers are mapped to my true name), centralized access to 
symmetric keys is not too bad.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Bitcoin P2P e-cash paper

2008-11-02 Thread James A. Donald

Satoshi Nakamoto wrote:

I've been working on a new electronic cash system that's fully
peer-to-peer, with no trusted third party.

The paper is available at:
http://www.bitcoin.org/bitcoin.pdf


We very, very much need such a system, but the way I understand your 
proposal, it does not seem to scale to the required size.


For transferable proof of work tokens to have value, they must have 
monetary value.  To have monetary value, they must be transferred within 
a very large network - for example a file trading network akin to 
bittorrent.


To detect and reject a double spending event in a timely manner, one 
must have most past transactions of the coins in the transaction, which, 
 naively implemented, requires each peer to have most past 
transactions, or most past transactions that occurred recently. If 
hundreds of millions of people are doing transactions, that is a lot of 
bandwidth - each must know all, or a substantial part thereof.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Cloning resistance in bluetooth

2008-10-27 Thread James A. Donald
Suppose one has a system that automatically signs you on to anything if 
your cell phone is within bluetooth range of your computer, and 
automatically signs you out of everything, and puts up a screen 
saver that will not go away, when your cell phone is out of range of 
your computer.


What is the basis for cloning resistance of a cell phone with bluetooth?

NFC provides physical authenticity - privacy on the model of whispering 
in one's ear, and authentication by touching.  Is there any mechanism 
intended for mapping that to keys, so that when two NFC devices meet, 
they can give each other petnames, and subsequently recognize public 
keys by petname?


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

