Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread ianG

Hi Peter,

On 30/09/13 23:31 PM, Peter Fairbrother wrote:

On 26/09/13 07:52, ianG wrote:

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always taken the approach that no encryption is better than bad
encryption; otherwise the end user will feel more secure than they
should, and is more likely to share information or data on that line
that they should not.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.


Given that mostly security works (or it should), what's really important
is where that security fails - and good enough security can drive out
excellent security.



Indeed it can.  So how do we differentiate?  Here are two oft-forgotten 
problems.


Firstly, when systems fail, typically it is the system around the crypto 
that fails, not the crypto itself.  This tells us that (a) the job of 
the crypto is to help the rest of the system to not fail, and (b) near 
enough is often good enough, because the metric of importance is to push 
all likely attacks elsewhere (into the rest of the system).


An alternative treatment is Adi Shamir's 3 laws of security:

http://financialcryptography.com/mt/archives/000147.html

Secondly, when talking about security options, we have to show where the 
security fails.  With history, with evidence -- so we can inform our 
speculations with facts.  If we don't do that, then our speculations 
become received wisdom, and we end up fielding systems that not only are 
making things worse, but are also blocking superior systems from emerging.




We can easily have excellent security in TLS (mk 2?) - the crypto part
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE
isn't say unbreakable for 10 years, far less for a lifetime.



OK, so TLS.  Let's see the failures in TLS?  SSL was running export 
grade for lots and lots of years, and those numbers were chosen to be 
crackable.  Let's see a list of damages, breaches, losses?


Guess what?  Practically none!  There is no recorded history of breaches 
in TLS crypto (and I've been asking for a decade, others longer).


So, either there are NO FAILURES from export grade or other weaker 
systems, *or* everyone is covering them up.  Because of some logic (like 
how much traffic and use), I'm going to plump for NO FAILURES as a 
reasonable best guess, and hope that someone can prove me wrong.


Therefore, I conclude that perfect security is a crock, and there is plenty 
of slack to open up and ease up.  If we can find a valid reason in the 
whole system (beyond TLS) to open up or ease up, then we should do it.




We are only talking about security against an NSA-level opponent here.
Is that significant?



It is a significant question.  Who are we protecting?  If we are talking 
about online banking, and credit cards, and the like, we are *not* 
protecting against the NSA.


(Coz they already breached all the banks, ages ago, and they get it all 
in real time.)


On the other hand, if we are talking about CAs or privacy system 
operators or jihadist websites, then we are concerned about NSA-level 
opponents.


Either way, we need to make a decision.  Otherwise all the other 
pronouncements are futile.




Eg, Tor isn't robust against NSA-level opponents. Is OTR?



All good questions.  What you have to do is decide your threat model, 
and protect against that.  And not flip across to some hypothetical 
received wisdom like "MITM is the devil" without a clear knowledge about 
why you care about that particular devil.




We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Σ_i P_i·I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing, it's a privacy quirk.)



No, and you don't know how important your opponent thinks the
information is either, and therefore what resources he might be willing
or able to spend to get access to it



Indeed, so many unknowables.  Which is why a risk management approach is 
to decide what you are protecting against and more importantly what you 
are not protecting against.


That results in sharing the responsibility with another layer, another 
person.  E.g., if you're not in the sharing business, you're not in the 
security business.




- but we can make some crypto which
(we think) is unbreakable.



In that lies the trap.  Because we can make a block cipher that is 
unbreakable, we *think* we can make a system that is unbreakable.  No 
such applies.  Because we think we can make a system that is 
unbreakable, we talk like we can protect the user unbreakably.  A joke. 
 

Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread Paul Crowley
On 30 September 2013 23:35, John Kelsey crypto@gmail.com wrote:

 If there is a weak curve class of greater than about 2^{80} that NSA knew
 about 15 years ago and were sure nobody were ever going to find that weak
 curve class and exploit it to break classified communications protected by
 it, then they could have generated 2^{80} or so seeds to hit that weak
 curve class.


If the NSA's attack involves generating some sort of collision between a
curve and something else over a 160-bit space, they wouldn't have to be
worried that someone else would find and attack that weak curve class
with less than 2^160 work.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread Manuel Pégourié-Gonnard
Hi,

On 01/10/2013 19:39, Peter Fairbrother wrote:
 Also, the method by which the generators (and thus the actual groups in 
 use, not the curves) were chosen is unclear.
 
If we're talking about the NIST curves over prime fields, they all have cofactor
1, so the actual group used is E(F_p), the (cyclic) group of all rational points
over F_p: there is no choice to be made here. Now, for the curves over binary
fields, the cofactor is 2 or 4, which again means the curve only has one
subgroup of large prime order. No room for choice either.

On another front, the choice of the generator in a particular group is of no
importance to the security of the discrete log problem. For example, assume you
know how to efficiently compute discrete logs with respect to some generator
G_1, and let me explain how you can use that to efficiently compute discrete
logs with respect to another base G_2.

First, you compute the n_21 such that G_2 = n_21 G_1, that is the discrete log
of G_2 in base G_1. Then you compute n_12, the modular inverse of n_21 modulo r,
the order of the group (which is known), so that G_1 = n_12 G_2. Now given a
random point P of which you want the log with base G_2, you first compute l_1,
its log in base G_1, that is P = l_1 G_1 = l_1 n_12 G_2, and tadam, l_1 n_12
(modulo r if you want) is the desired log in base G_2.

(The last two paragraphs actually hold for any cyclic group, though I wrote them
additively with elliptic curves in mind.)

So, really the only relevant unexplained parameters are the seeds to the
pseudo-random algorithm.

Manuel.


Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread John Kelsey
On Oct 2, 2013, at 9:54 AM, Paul Crowley p...@ciphergoth.org wrote:

 On 30 September 2013 23:35, John Kelsey crypto@gmail.com wrote:
 If there is a weak curve class of greater than about 2^{80} that NSA knew 
 about 15 years ago and were sure nobody were ever going to find that weak 
 curve class and exploit it to break classified communications protected by 
 it, then they could have generated 2^{80} or so seeds to hit that weak curve 
 class.
 
 If the NSA's attack involves generating some sort of collision between a 
 curve and something else over a 160-bit space, they wouldn't have to be 
 worried that someone else would find and attack that weak curve class with 
 less than 2^160 work.

I don't know enough about elliptic curves to have an intelligent opinion on 
whether this is possible.  Has anyone worked out a way to do this?  

The big question is how much work would have had to be done.  If you're talking 
about a birthday collision on the curve parameters, is that a collision on a 
160 bit value, or on a 224 or 256 or 384 or 512 bit value?  I can believe NSA 
doing a 2^{80} search 15 years ago, but I think it would have had to be a top 
priority.  There is no way they were doing 2^{112} searches 15 years ago, as 
far as I can see.

--John

Re: [Cryptography] RSA equivalent key length/strength

2013-10-02 Thread Kristian Gjøsteen
2. okt. 2013 kl. 16:59 skrev John Kelsey crypto@gmail.com:

 On Oct 2, 2013, at 9:54 AM, Paul Crowley p...@ciphergoth.org wrote:
 
 On 30 September 2013 23:35, John Kelsey crypto@gmail.com wrote:
 If there is a weak curve class of greater than about 2^{80} that NSA knew 
 about 15 years ago and were sure nobody were ever going to find that weak 
 curve class and exploit it to break classified communications protected by 
 it, then they could have generated 2^{80} or so seeds to hit that weak curve 
 class.
 
 If the NSA's attack involves generating some sort of collision between a 
 curve and something else over a 160-bit space, they wouldn't have to be 
 worried that someone else would find and attack that weak curve class with 
 less than 2^160 work.
 
 I don't know enough about elliptic curves to have an intelligent opinion on 
 whether this is possible.  Has anyone worked out a way to do this?  

Edlyn Teske [1] describes a way in which you select one curve and then find a 
second curve together with an isogeny (essentially a group homomorphism) to the 
first curve. The first curve is susceptible to Weil descent attacks, making it 
feasible to compute d.log.s on the curve. The other curve is not susceptible to 
Weil descent attacks.

You publish the latter curve, and keep the first curve and a description of the 
isogeny suitable for computation to yourself. When you want to compute a d.log. 
on the public curve, you use the isogeny to move it to your secret curve and 
then use Weil descent to find the d.log.

I suppose you could generate lots of such pairs of curves, and at the same time 
generate lots of curves from seeds. After a large number of generations, you 
find a collision. You now have your trapdoor curve. However, the amount of work 
should be about the square root of the field size.
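The square-root estimate is the usual birthday bound: drawing N values uniformly from a space of size M yields a collision with probability about 1 - exp(-N²/2M), so ~50% odds need N ≈ 1.18·√M. A quick back-of-the-envelope check (my own illustration, not from the thread):

```python
# Birthday bound for the collision search sketched above: N draws from a
# space of size M collide with probability ~ 1 - exp(-N^2 / 2M).
import math

def collision_probability(N, M):
    return 1.0 - math.exp(-N * N / (2.0 * M))

M = 2.0 ** 160                               # e.g. a 160-bit seed space
print(collision_probability(2.0 ** 80, M))   # ~0.39: 2^80 draws, decent odds
print(collision_probability(2.0 ** 81, M))   # ~0.86
```

This is why a 160-bit space implies roughly 2^80 work, while a 224-bit or larger space pushes the search to 2^112 or more.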

Do we have something here?

(a) Weil descent (mostly) works over curves over composite-degree extension 
fields.

(b) Cryptographers worried about curves over (composite-degree) extension 
fields long before Weil descent attacks were discovered. (Some people like them 
because they speed things up slightly.)

(c) NIST's extension fields all have prime degree, which isn't optimal for Weil 
descent.

(d) NIST's fields are all too big, if we assume that NSA couldn't do 2^112 
computations in 1999.

(e) This doesn't work for prime fields.

It seems that if there is a trapdoor built into NIST's (extension field) 
curves, NSA in 1999 was way ahead of where the open community is today in 
theory, and had computing power that we generally don't think they have today.

We have evidence of NSA doing bad things. This seems unlikely to be it.

[1] Edlyn Teske: An Elliptic Curve Trapdoor System. J. Cryptology 19(1): 
115-133 (2006)

-- 
Kristian Gjøsteen





Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread Ben Laurie
On 30 September 2013 23:24, John Kelsey crypto@gmail.com wrote:

 Maybe you should check your code first?  A couple nist people verified
 that the curves were generated by the described process when the questions
 about the curves first came out.


If you don't quote the message you're replying to, it's hard to guess who
should check what code - perhaps you could elaborate?


  Don't trust us, obviously--that's the whole point of the procedure.  But
 check your code, because the process worked right when we checked it.

 --John


Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread ianG

On 28/09/13 22:06 PM, ianG wrote:

On 27/09/13 18:23 PM, Phillip Hallam-Baker wrote:


Problem with the NSA is that its Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which
side did the push for EC come from?



What's in Suite A?  Will probably illuminate that question...



Just to clarify my original poser -- which *public key methods* are 
suggested in Suite A?


RSA?  EC?  diversified keys?  Something new?

The answer will probably illuminate what the NSA really thinks about EC.

(As well as get us all put in jail for thought-crime.)



iang



Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread Kristian Gjøsteen
1. okt. 2013 kl. 02:00 skrev James A. Donald jam...@echeque.com:

 On 2013-10-01 08:24, John Kelsey wrote:
 Maybe you should check your code first?  A couple nist people verified that 
 the curves were generated by the described process when the questions about 
 the curves first came out. 
 
 And a non NIST person verified that the curves were not generated by the 
 described process after the scandal broke.

Checking the verification code may be a good idea.

I just checked that the verification process described in Appendix 5 in the 
document RECOMMENDED ELLIPTIC CURVES FOR FEDERAL GOVERNMENT USE, July 1999 
(http://csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReCur.pdf) accepts 
the NIST prime field curves listed in that document. Trivial python script 
follows.

I am certainly not the first non-US non-government person to check.

There is solid evidence that the US government does bad things. This isn't it.

-- 
Kristian Gjøsteen

# NB: Python 2 (relies on integer division and str-based byte handling)
import hashlib

def string_to_integer(s):
	n = 0
	for byte in s:
		n = n*256 + ord(byte)
	return n

def integer_to_string(n):
	if n == 0:
		return ""
	return integer_to_string(n/256) + chr(n%256)

def verify_generation(s, p, l, b):
	assert(len(s) == 160/8)
	v = (l-1)/160
	w = l - 160*v - 1

	h = hashlib.sha1(s).digest()
	hh = integer_to_string(string_to_integer(h) % (2**w))

	z = string_to_integer(s) + 1 # +1 because for loop goes from 0 to v-1
	for i in range(v):
		hh = hh + hashlib.sha1(integer_to_string(z+i)).digest()

	c = string_to_integer(hh)
	if (b*b*c + 27)%p == 0:
		return True
	else:
		return False

curve_data = [
	("P-192 wrong", 6277101735386680763835789423207666416083908700390324961279, 192, 0x3045ae6fc822f64ed579528d38120eae12196d5, 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1),
	("P-192", 6277101735386680763835789423207666416083908700390324961279, 192, 0x3045ae6fc8422f64ed579528d38120eae12196d5, 0x64210519e59c80e70fa7e9ab72243049feb8deecc146b9b1),
	("P-224", 26959946667150639794667015087019630673557916260026308143510066298881, 224, 0xbd71344799d5c7fcdc45b59fa3b9ab8f6a948bc5, 0xb4050a850c04b3abf54132565044b0b7d7bfd8ba270b39432355ffb4),
	("P-256", 115792089210356248762697446949407573530086143415290314195533631308867097853951, 256, 0xc49d360886e704936a6678e1139d26b7819f7e90, 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b),
	("P-256 wrong", 115792089210356248762697446949407573530086143415290314195533631308867097853951, 256, 0xc49d360886e704936a6678e1139d26b7819f7e90, 0x7efba1662985be9403cb055c75d4f7e0ce8d84a9c5114abcaf3177680104fa0d),
	("P-384", 39402006196394479212279040100143613805079739270465446667948293404245721771496870329047266088258938001861606973112319, 384, 0xa335926aa319a27a1d00896a6773a4827acdac73, 0xb3312fa7e23ee7e4988e056be3f82d19181d9c6efe8141120314088f5013875ac656398d8a2ed19d2a85c8edd3ec2aef),
	("P-521", 6864797660130609714981900799081393217269435300143305409394463459185543183397656052122559640661454554977296311391480858037121987999716643812574028291115057151, 521, 0xd09e8800291cb85396cc6717393284aaa0da64ba, 0x051953eb9618e1c9a1f929a21a0b68540eea2da725b99b315f3b8b489918ef109e156193951ec7e937b1652c0bd3bb1bf073573df883d2c34f1ef451fd46b503f00) ]

for cd in curve_data:
	(name, p, l, s, b) = cd
	print name, verify_generation(integer_to_string(s), p, l, b)
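(For readers on Python 3: the script above is Python 2. Below is a hypothetical port of the same Appendix-5 check, applied only to the P-256 entry; the structure and names are mine, not from the original message.)

```python
# Hypothetical Python 3 port of the Appendix-5 verification above.
# Derives c from SHA-1 of the published seed and tests
# b^2 * c + 27 == 0 (mod p), i.e. c == a^3 / b^2 with a = -3.
import hashlib

def verify_generation(seed: bytes, p: int, l: int, b: int) -> bool:
    assert len(seed) == 160 // 8
    v = (l - 1) // 160
    w = l - 160 * v - 1

    # Top w bits come from SHA-1 of the seed itself.
    h = hashlib.sha1(seed).digest()
    top = int.from_bytes(h, "big") % (2 ** w)
    hh = top.to_bytes((w + 7) // 8, "big")

    # Remaining bits from SHA-1 of seed+1, seed+2, ... (mod 2^160).
    z = int.from_bytes(seed, "big") + 1
    for i in range(v):
        hh += hashlib.sha1(((z + i) % 2 ** 160).to_bytes(20, "big")).digest()

    c = int.from_bytes(hh, "big")
    return (b * b * c + 27) % p == 0

p256 = 115792089210356248762697446949407573530086143415290314195533631308867097853951
seed = bytes.fromhex("c49d360886e704936a6678e1139d26b7819f7e90")
b256 = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
print(verify_generation(seed, p256, 256, b256))
```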

Re: [Cryptography] RSA equivalent key length/strength

2013-10-01 Thread Peter Fairbrother

On 01/10/13 08:49, Kristian Gjøsteen wrote:

1. okt. 2013 kl. 02:00 skrev James A. Donald jam...@echeque.com:


On 2013-10-01 08:24, John Kelsey wrote:

Maybe you should check your code first?  A couple nist people verified that the 
curves were generated by the described process when the questions about the 
curves first came out.


And a non NIST person verified that the curves were not generated by the 
described process after the scandal broke.


Checking the verification code may be a good idea.

I just checked that the verification process described in Appendix 5 in the 
document RECOMMENDED ELLIPTIC CURVES FOR FEDERAL GOVERNMENT USE, July 1999 
(http://csrc.nist.gov/groups/ST/toolkit/documents/dss/NISTReCur.pdf) accepts 
the NIST prime field curves listed in that document. Trivial python script 
follows.

I am certainly not the first non-US non-government person to check.

There is solid evidence that the US government does bad things. This isn't it.


Agreed (though did you also check whether the supposed verification 
process actually matches the supposed generation process?).


Also agreed, NSA could not have reverse-engineered the parts of the 
generating process from random source to the curve's b component, i.e., 
they could not have started with a chosen b component and then generated 
the random source.




However they could easily have cherry-picked a result for b from trying 
several squillion source numbers. There is no real reason not to use 
something like the digits of pi as the source - which they did not do.


Also, the method by which the generators (and thus the actual groups in 
use, not the curves) were chosen is unclear.



Even assuming NSA tried their hardest to undermine the curve selection 
process, there is some doubt as to whether these two actual and easily 
verifiable failings in a supposedly open generation process are enough 
to make the final groups selected useful for NSA's nefarious purposes.


But there is a definite lack of clarity there.


-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread Taral
On Sun, Sep 29, 2013 at 9:15 PM, Viktor Dukhovni
cryptogra...@dukhovni.org wrote:
 On Mon, Sep 30, 2013 at 10:07:14AM +1000, James A. Donald wrote:
 Therefore, everyone should use Curve25519, which we have every
 reason to believe is unbreakable.

 Superseded by the improved Curve1174.

Hardly. Elligator 2 works fine on curve25519.

-- 
Taral tar...@gmail.com
Please let me know if there's any further trouble I can give you.
-- Unknown


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread David Kuehling
 James == James A Donald jam...@echeque.com writes:

 Gregory Maxwell on the Tor-talk list has found that NIST approved
 curves, which is to say NSA approved curves, were not generated by the
 claimed procedure, which is a very strong indication that if you use
 NIST curves in your cryptography, NSA can read your encrypted data.

Just for completeness, I think this is the Mail you're referring to:

https://lists.torproject.org/pipermail/tor-talk/2013-September/029956.html

David
-- 
GnuPG public key: http://dvdkhlng.users.sourceforge.net/dk2.gpg
Fingerprint: B63B 6AF2 4EEB F033 46F7  7F1D 935E 6F08 E457 205F



Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread Peter Fairbrother

On 26/09/13 07:52, ianG wrote:

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always taken the approach that no encryption is better than bad
encryption; otherwise the end user will feel more secure than they
should, and is more likely to share information or data on that line
that they should not.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.


Given that mostly security works (or it should), what's really important 
is where that security fails - and good enough security can drive out 
excellent security.


We can easily have excellent security in TLS (mk 2?) - the crypto part 
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE 
isn't say unbreakable for 10 years, far less for a lifetime.



We are only talking about security against an NSA-level opponent here. 
Is that significant?


Eg, Tor isn't robust against NSA-level opponents. Is OTR?


We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Σ_i P_i·I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing, it's a privacy quirk.)



No, and you don't know how important your opponent thinks the 
information is either, and therefore what resources he might be willing 
or able to spend to get access to it - but we can make some crypto which 
(we think) is unbreakable.


No matter who or what resources, unbreakable. You can rely on the math.

And it doesn't usually cost any more than we are willing to pay - heck, 
the price is usually lost in the noise.


Zero crypto (theory) failures.

Ok, real-world systems won't ever meet that standard - but please don't 
hobble them with failure before they start trying.



With that assumption, the various i's you list become some sort of
average


Do you mean I_i's?

Ah, average. Which average might that be? Hmmm, independent 
distributions of two variables - are you going to average them, then 
multiply the averages?


That approximation doesn't actually work very well, mathematically 
speaking - as I'm sure you know.



This is why the security model that is provided is typically
one-size-fits-all, and the most successful products are typically the
ones with zero configuration and the best fit for the widest market.


I totally agree with zero configuration - and best fit - but you are 
missing the main point.


Would 1024-bit DHE give a reasonable expectation of say, ten years 
unbreakable by NSA?


If not, and Manning or Snowden wanted to use TLS, they would likely be 
busted.


Incidentally, would OTR pass that test?



-- Peter Fairbrother

(sorry for the sloppy late reply)

(I'm talking about TLS2, not a BCP - but the BCP is significant)
(how's the noggin? how's Waterlooville?? can I come visit sometime?)


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread John Kelsey
Having read the mail you linked to, it doesn't say the curves weren't generated 
according to the claimed procedure.  Instead, it repeats Dan Bernstein's 
comment that the seed looks random, and that this would have allowed NSA to 
generate lots of curves till they found a bad one.  

It looks to me like there is no new information here, and no evidence of 
wrongdoing that I can see.  If there is a weak curve class of greater than 
about 2^{80} that NSA knew about 15 years ago and were sure nobody were ever 
going to find that weak curve class and exploit it to break classified 
communications protected by it, then they could have generated 2^{80} or so 
seeds to hit that weak curve class.  

What am I missing?  Do you have evidence that the NIST curves are cooked?  
Because the message I saw didn't provide anything like that.  

--John


Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread James A. Donald

On 2013-10-01 08:24, John Kelsey wrote:

Maybe you should check your code first?  A couple nist people verified that the 
curves were generated by the described process when the questions about the 
curves first came out.


And a non NIST person verified that the curves were /not/ generated by 
the described process after the scandal broke.


The process that actually generates the curves looks like the end result 
of trying a trillion curves, until you hit one that has desirable 
properties, which desirable properties you are disinclined to tell 
anyone else.




Re: [Cryptography] RSA equivalent key length/strength

2013-09-30 Thread James A. Donald

On 2013-10-01 08:35, John Kelsey wrote:

Having read the mail you linked to, it doesn't say the curves weren't generated 
according to the claimed procedure.  Instead, it repeats Dan Bernstein's 
comment that the seed looks random, and that this would have allowed NSA to 
generate lots of curves till they found a bad one.


The claimed procedure would have prevented the NSA from generating lots 
of curves till they found a bad one - one with weaknesses that the NSA 
knows how to detect, but which other people do not yet know how to detect.


That was the whole point of the claimed procedure.

As with SHA3, the NSA/NIST is deviating from its supposed procedures in 
ways that remove the security properties of those procedures.





Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread Jerry Leichter
On Sep 28, 2013, at 3:06 PM, ianG wrote:
 Problem with the NSA is that its Jekyll and Hyde. There is the good side
 trying to improve security and the dark side trying to break it. Which
 side did the push for EC come from?
 What's in Suite A?  Will probably illuminate that question...
The actual algorithms are classified, and about all that's leaked about them, 
as far as I can determine in a quick search, is the names of some of them, and 
general properties of a subset of those - e.g., according to Wikipedia, BATON 
is a block cipher with a key length of 320 bits (160 of them checksum bits - 
I'd guess that this is an overt way for NSA to control who can use stolen 
equipment, as it will presumably refuse to operate at all with an invalid key). 
 It looks as if much of this kind of information comes from public descriptions 
of equipment sold to the government that implements these algorithms, though a 
bit of the information (in particular, the name BATON and its key and block 
sizes) has made it into published standards via algorithm specifiers.  cryptome 
has a few leaked documents as well - again, one showing BATON mentioned in 
Congressional testimony about Clipper.

Cryptographic challenge:  If you have a sealed, tamper-proof box that 
implements, say, BATON, you can easily have it refuse to work if the key 
presented doesn't checksum correctly.  In fact, you'd likely have it destroy 
itself if presented with too many invalid keys.  NSA has always been really big 
about using such sealed modules for their own algorithms.  (The FIPS specs were 
clearly drafted by people who think in these terms.  If you're looking at them 
while trying to get software certified, many of the provisions look very 
peculiar.  OK, no one expects your software to be potted in epoxy (opaque in 
the ultraviolet - or was it infrared?); but they do expect various kinds of 
isolation that just affect the blocks on a picture of your software's 
implementation; they have no meaningful effect on security, which unlike 
hardware can't enforce any boundaries between the blocks.)

Anyway, this approach obviously depends on the ability of the hardware to 
resist attacks.  Can one design an algorithm which is inherently secure against 
such attacks?  For example, can one design an algorithm that's strong when used 
with valid keys but either outright fails (e.g., produces indexes into 
something like S-boxes that are out of range) or is easily invertible if used 
with invalid keys (e.g., has a key schedule that with invalid keys produces all 
0's after a certain small number of rounds)?  You'd need something akin to 
asymmetric cryptography to prevent anyone from reverse-engineering the checksum 
algorithm from the encryption algorithm, but I know of no fundamental reason 
why that couldn't be done.
-- Jerry



Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread Lodewijk andré de la porte
2013/9/29 James A. Donald jam...@echeque.com

 (..) fact, they are not provably random, selected (...)

fixed that for you

It seems obvious that blatant lying about qualities of procedures must have
some malignant intention, yet ignorance is as good an explanation. I don't
think lying the other way would solve anything. It's obviously not
especially secure.

Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread James A. Donald

On 2013-09-30 03:14, Lodewijk andré de la porte wrote:
2013/9/29 James A. Donald jam...@echeque.com


(..) fact, they are not provably random, selected (...)

fixed that for you

It seems obvious that blatant lying about qualities of procedures must 
have some malignant intention, yet ignorance is as good an 
explanation. I don't think lying the other way would solve anything. 
It's obviously not especially secure.



The NIST EC curves are provably non-random, and one can prove that NIST 
is lying about them, which is circumstantial but compelling evidence 
that they are backdoored:


   From: Gregory Maxwell gmaxw...@gmail.com
   To: This mailing list is for all discussion about theory, design, and 
development of Onion Routing. tor-t...@lists.torproject.org
   Subject: Re: [tor-talk] NIST approved crypto in Tor?
   Reply-To: tor-t...@lists.torproject.org

   On Sat, Sep 7, 2013 at 4:08 PM, anonymous coward
   anonymous.cow...@posteo.de wrote:

   Bruce Schneier recommends **not** to use ECC. It is safe to
   assume he knows what he says.

   I believe Schneier was being careless there. The ECC parameter
   sets commonly used on the internet (the NIST P-xxxr ones) were
   chosen using a published deterministically randomized procedure.
   I think the notion that these parameters could have been
   maliciously selected is a remarkable claim which demands
   remarkable evidence.

   On Sat, Sep 7, 2013 at 8:09 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

   Okay, I need to eat my words here.

   I went to review the deterministic procedure because I wanted to see
   if I could reproduce the SECP256k1 curve we use in Bitcoin. They
   don’t give a procedure for the Koblitz curves, but they have far
   less design freedom than the non-koblitz so I thought perhaps I’d
   stumble into it with the “most obvious” procedure.

   The deterministic procedure basically computes SHA1 on some seed and
   uses it to assign the parameters then checks the curve order, etc..
   wash rinse repeat.

   Then I looked at the random seed values for the P-xxxr curves. For
   example, P-256r’s seed is c49d360886e704936a6678e1139d26b7819f7e90.

   _No_ justification is given for that value. The stated purpose of
   the “veritably random” procedure “ensures that the parameters cannot
   be predetermined. The parameters are therefore extremely unlikely to
   be susceptible to future special-purpose attacks, and no trapdoors
   can have been placed in the parameters during their generation”.

   Considering the stated purpose I would have expected the seed to be
   some small value like … “6F” and for all smaller values to fail the
   test. Anything else would have suggested that they tested a large
   number of values, and thus the parameters could embody any
   undisclosed mathematical characteristic whose rareness is only
   bounded by how many times they could run sha1 and test.

   I now personally consider this to be smoking evidence that the
   parameters are cooked. Maybe they were only cooked in ways that make
   them stronger? Maybe

   SECG also makes a somewhat curious remark:

   “The elliptic curve domain parameters over (primes) supplied at each
   security level typically consist of examples of two different types
   of parameters — one type being parameters associated with a Koblitz
   curve and the other type being parameters chosen verifiably at
   random — although only verifiably random parameters are supplied at
   export strength and at extremely high strength.”

   The fact that only “verifiably random” are given for export strength
   would seem to make more sense if you cynically read “verifiably
   random” as backdoored to all heck. (though it could be more
   innocently explained that the performance improvements of Koblitz
   wasn’t so important there, and/or they considered those curves weak
   enough to not bother with the extra effort required to produce the
   Koblitz curves).
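The "wash rinse repeat" derivation Maxwell describes can be sketched roughly as follows. This is a deliberately simplified illustration of the seed-to-candidate step of the X9.62-style "verifiably random" procedure (the real routine concatenates several SHA-1 outputs into a field-sized value and then runs curve-order tests; the function name here is mine, not from any standard):

```python
import hashlib

def candidate_from_seed(seed_hex: str) -> int:
    """One round of the 'verifiably random' derivation, simplified:
    SHA-1 the seed and treat the digest as candidate curve-parameter
    bits. The real X9.62 routine builds a full field-sized value from
    several hashes and then tests the resulting curve (order, etc.)."""
    digest = hashlib.sha1(bytes.fromhex(seed_hex)).digest()
    return int.from_bytes(digest, "big")

# The published P-256 seed quoted above; anyone can rerun the hash,
# but nothing explains why THIS particular seed value was chosen.
P256_SEED = "c49d360886e704936a6678e1139d26b7819f7e90"
print(hex(candidate_from_seed(P256_SEED)))
```

Maxwell's complaint is precisely that the procedure is replayable from the seed onward, but the seed itself is an unexplained 160-bit constant.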



Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread James A. Donald
Gregory Maxwell on the Tor-talk list has found that NIST approved 
curves, which is to say NSA approved curves, were not generated by the 
claimed procedure, which is a very strong indication that if you use 
NIST curves in your cryptography, NSA can read your encrypted data.


As computing power increases, NSA-resistant RSA keys have become 
inconveniently large, so we have to move to EC keys.


NIST approved curves are unlikely to be NSA resistant.

Therefore, everyone should use Curve25519, which we have every reason to 
believe is unbreakable.



Re: [Cryptography] RSA equivalent key length/strength

2013-09-29 Thread Viktor Dukhovni
On Mon, Sep 30, 2013 at 10:07:14AM +1000, James A. Donald wrote:

 Therefore, everyone should use Curve25519, which we have every
 reason to believe is unbreakable.

Superseded by the improved Curve1174.

http://cr.yp.to/elligator/elligator-20130527.pdf 

-- 
Viktor.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread John Gilmore
 And the problem appears to be compounded by doofus legacy implementations
 that don't support PFS greater than 1024 bits. This comes from a
 misunderstanding that DH keysizes only need to be half the RSA length.
 
 So to go above 1024 bits PFS we have to either
 
 1) Wait for all the servers to upgrade (i.e. never do it because they won't
 upgrade)
 
 2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
 or above'.

Can the client recover and do something useful when the server has a
buggy (key length limited) implementation?  If so, a new cipher suite
ID is not needed, and both clients and servers can upgrade asynchronously,
getting better protection when both sides of a given connection are
running the new code.

In the case of (2) I hope you mean yes we really do PFS with an
unlimited number of bits.  1025, 2048, as well as 16000 bits should work.

John


Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread Phillip Hallam-Baker
On Fri, Sep 27, 2013 at 3:59 AM, John Gilmore g...@toad.com wrote:

  And the problem appears to be compounded by doofus legacy implementations
  that don't support PFS greater than 1024 bits. This comes from a
  misunderstanding that DH keysizes only need to be half the RSA length.
 
  So to go above 1024 bits PFS we have to either
 
  1) Wait for all the servers to upgrade (i.e. never do it because they
 won't
  upgrade)
 
  2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
  or above'.

 Can the client recover and do something useful when the server has a
 buggy (key length limited) implementation?  If so, a new cipher suite
 ID is not needed, and both clients and servers can upgrade asynchronously,
 getting better protection when both sides of a given connection are
 running the new code.


Actually, it turns out that the problem is that the client croaks if the
server tries to use a key size that is bigger than it can handle. Which
means that there is no practical way to address it server side within the
current specs.



 In the case of (2) I hope you mean yes we really do PFS with an
 unlimited number of bits.  1025, 2048, as well as 16000 bits should work.


There is no reason to use DH longer than the key size in the certificate
and no reason to use a shorter DH size either.

Most cryptolibraries have a hard-coded limit at 4096 bits, and there are
diminishing returns to going above 2048. Going from 4096 to 8192 bits only
increases the work factor by a small amount, and 8192-bit operations are
really slow, which means we end up with DoS considerations.
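The diminishing-returns point can be sanity-checked with the usual GNFS work-factor heuristic. This is a back-of-envelope estimate, not an official NIST equivalence; each doubling of the modulus buys proportionally fewer bits of symmetric-equivalent strength, while the per-handshake cost grows roughly cubically:

```python
import math

def gnfs_strength_bits(modulus_bits: int) -> float:
    """Rough symmetric-equivalent strength of an n-bit RSA/DH modulus,
    using the GNFS heuristic:
        work ~ exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)).
    Constants and interpretation are approximate."""
    ln_n = modulus_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)  # express the work exponent in bits

for k in (1024, 2048, 4096, 8192):
    print(k, round(gnfs_strength_bits(k)))
```

Doubling the modulus never doubles the estimated strength, which is the sense in which the returns diminish.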

We really need to move to EC above RSA. Only it is going to be a little
while before we work out which parts have been contaminated by NSA
interference and which parts are safe from patent litigation. RIM looks set
to collapse with or without the private equity move. The company will be
bought with borrowed money and the buyers will use the remaining cash to
pay themselves a dividend. Mitt Romney showed us how that works.

We might possibly get lucky and the patents get bought out by a white
knight. But all the mobile platform providers are in patent disputes right
now and I can't see it likely someone will plonk down $200 million for a
bunch of patents and then make the crown jewels open.


Problem with the NSA is that it's Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which side
did the push for EC come from?




-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread Viktor Dukhovni
On Fri, Sep 27, 2013 at 11:23:27AM -0400, Phillip Hallam-Baker wrote:

 Actually, it turns out that the problem is that the client croaks if the
 server tries to use a key size that is bigger than it can handle. Which
 means that there is no practical way to address it server side within the
 current specs.

Or smaller (e.g. GnuTLS minimum client-side EDH strength).  And
given that with EDH there is as yet no TLS extension that allows
the client to advertise the range of supported EDH key lengths
(with EECDH the client can communicate supported curves), there is
no timely incremental path to stronger EDH parameters.
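Viktor's point that the client only discovers the server's DHE group size after the server has already chosen it is visible in the wire format: in a DHE ServerKeyExchange (RFC 5246), the prime dh_p arrives first as a length-prefixed opaque field. A minimal parsing sketch (toy bytes, not a real handshake):

```python
import struct

def dh_group_bits(server_kx: bytes) -> int:
    """Parse the start of a TLS ServerKeyExchange body for a DHE cipher
    suite (opaque dh_p<1..2^16-1> comes first, per RFC 5246) and return
    the size of the prime p in bits. The client only learns this value
    after the server has already committed to the group."""
    (p_len,) = struct.unpack(">H", server_kx[:2])
    p = int.from_bytes(server_kx[2:2 + p_len], "big")
    return p.bit_length()

# A toy 'ServerKeyExchange' prefix carrying a fake 1024-bit value:
fake_p = (1 << 1023) | 1
body = struct.pack(">H", 128) + fake_p.to_bytes(128, "big")
print(dh_group_bits(body))  # prints 1024
```

Nothing in the ClientHello constrains p_len, which is why an extension advertising acceptable EDH sizes is needed.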

In addition to the protocol obstacles we also have API obstacles,
since the protocol values need to be communicated to applications
that provide appropriate parameters for the selected strength
(EDH or EECDH).

In OpenSSL 1.0.2 there is apparently a new interface for server-side
EECDH curve selection that takes client capabilities into account.
For EDH there is need for an appropriate new extension, and new
interfaces to pass the parameters to the server application.

Deploying more capable software will take O(10 years).  We could
perhaps get there a bit faster, if the toolkits selected from a
fixed set of suitable parameters and did not require application
changes, but this has the drawback of juicier targets for cryptanalysis.

So multiple things need to be done:

- For now enable 1024-bit EDH with different parameters at each server,
  changed from time to time.  Avoid non-interoperable parameter choices;
  that is counter-productive.

- Publish a new TLS extension that allows clients to publish supported
  EDH parameter sizes.  Extend TLS toolkit APIs to expose this range
  to the server application.  Upgrade toolkit client software to advertise
  the supported EDH parameter range.

- Enable EECDH with secp256r1 (and friends) unless it is
  reasonably believed to be cooked for efficient DLP by its creators.

- Standardize new EECDH curves (e.g. DJB's Curve1174).

-- 
Viktor.

P. S.

For SMTP transport security deploy DNSSEC and DANE TLSA.  I'm hoping
at least one of the larger service providers will do this in the
not too distant future.

Postfix (official release 2.11) will support this in early
2014.  Exim will take a bit longer, as they're cutting a release
now, and the DANE support is not yet there.  The other MTAs will
I hope follow along in due course.

The SMTP backbone (inter-domain SMTP via MX records, ...) can be
upgraded to use downgrade-resistant authenticated TLS.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread ianG

On 27/09/13 18:23 PM, Phillip Hallam-Baker wrote:


Problem with the NSA is that it's Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which
side did the push for EC come from?



What's in Suite A?  That will probably illuminate the question...



iang



Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread James A. Donald

On 2013-09-28 01:23, Phillip Hallam-Baker wrote:


Most cryptolibraries have a hard coded limit at 4096 bits and there 
are diminishing returns to going above 2048. Going from 4096 to 8192 
bits only increases the work factor by a very small amount and they 
are really slow which means we end up with DoS considerations.


We really need to move to EC above RSA. Only it is going to be a 
little while before we work out which parts have been contaminated by 
NSA interference and which parts are safe from patent litigation. RIM 
looks set to collapse with or without the private equity move. The 
company will be bought with borrowed money and the buyers will use the 
remaining cash to pay themselves a dividend. Mitt Romney showed us how 
that works.


We might possibly get lucky and the patents get bought out by a white 
knight. But all the mobile platform providers are in patent disputes 
right now and I can't see it likely someone will plonk down $200 
million for a bunch of patents and then make the crown jewels open.



Problem with the NSA is that it's Jekyll and Hyde. There is the good 
side trying to improve security and the dark side trying to break it. 
Which side did the push for EC come from?


In fact we do know this.

NSA/NIST claimed that their EC curves are provably random (therefore not 
backdoored)


In fact, they are provably non-random, selected on an unrevealed basis, 
which contradiction is, under the circumstances, compelling evidence 
that the NIST curves are in fact backdoored.




Re: [Cryptography] RSA equivalent key length/strength

2013-09-26 Thread Peter Fairbrother

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.

We're talking multiple orders of magnitude here.  The math that counts is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Σ (over i) P_i·I_i, where P_i is the protection provided to 
information i, and I_i is the importance of keeping information i 
protected.


Actually it's more complex than that, as the importance isn't a linear 
variable, and information isn't either - but there's a start.


Increasing i by increasing users may have little effect on the overall 
security, if protecting the information they transmit isn't 
particularly valuable.



And saying that something is secure - which is what people who are not 
cryptographers think you are doing when you recommend that something - 
tends to increase I_i, the importance of the information to be protected.


And if the new system isn't secure against expensive attacks, then 
overall security may be lessened by its introduction. Even if Users are 
increased.
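A toy calculation contrasting the two metrics in this sub-thread; the protection and importance figures below are invented purely for illustration:

```python
# Hypothetical per-site figures: P = protection level, I = importance.
sites = {
    "bank":        {"P": 0.99, "I": 100.0},
    "electricity": {"P": 0.99, "I": 1.0},
    "forum":       {"P": 0.50, "I": 0.1},
}

# iang's metric treats every protected user/site the same:
users_times_protection = sum(s["P"] for s in sites.values())

# Peter's metric weights protection by what the information is worth,
# so the bank dominates and adding low-value sites barely moves it:
weighted = sum(s["P"] * s["I"] for s in sites.values())

print(round(users_times_protection, 2), round(weighted, 2))
```

Under the weighted metric, doubling the number of low-importance sites changes almost nothing, while any drop in the bank's protection dominates the total.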





I have about 30 internet passwords, only three of which are in any way 
important to me - those are the banking ones. I use a simple password 
for all the rest, because I don't much care if they are compromised.


But I use the same TLS for all these sites.

Now if that TLS is broken as far as likely attacks against the banks go, 
I care. I don't much care if it's secure against attacks against the 
other sites like my electricity and gas bills.


I might use TLS a lot more for non-banking sites, but I don't really 
require it to be secure for those. I do require it to be secure for banking.



And I'm sure that some people would like TLS to be secure against the 
NSA for, oh, let's say 10 years. Which 1024-bit DHE will not provide.






If you really want to recommend 1024-bit DHE, then call a spade a spade 
- for a start, it's EKS, ephemeral key setup. It doesn't offer much in 
the way of forward secrecy, and it offers nothing at all in the way of 
perfect forward secrecy.


It's a political stunt to perhaps make trawling attacks by NSA more 
expensive (in cases where the website has given NSA the master keys [*]) 
- but it may make targeted attacks by NSA cheaper and easier.


And in ten years NSA *will* be able to read all your 1024-bit DHE 
traffic, which it is storing right now against the day.




[*] does anyone else think it odd that the benefit of introducing 
1024-bit DHE, as opposed to 2048-bit RSA, is only active when the 
webserver has given or will give NSA the keys? Just why is this being 
considered for recommendation?


Yes, stunt.

-- Peter Fairbrother




iang







Re: [Cryptography] RSA equivalent key length/strength

2013-09-26 Thread ianG

On 26/09/13 02:24 AM, Peter Fairbrother wrote:

On 25/09/13 17:17, ianG wrote:

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit
of a good enough security delivered to more people.

We're talking multiple orders of magnitude here.  The math that counts
is:

Security = Users * Protection.


No. No. No. Please, no? No. Nonononononono.

It's Σ (over i) P_i·I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.



I'm sorry, I don't deal in omniscience.  Typically we as suppliers of 
some security product have only the faintest idea what our users are up 
to.  (Some consider this a good thing, it's a privacy quirk.)


With that assumption, the various i's you list become some sort of 
average.  This is why the security model that is provided is typically 
one-size-fits-all, and the most successful products are typically the 
ones with zero configuration and the best fit for the widest market.




Actually it's more complex than that, as the importance isn't a linear
variable, and information isn't either - but there's a start.

Increasing i by increasing users may have little effect on the overall
security, if protecting the information they transmit isn't
particularly valuable.



Right, and you know that, how?

(how valuable each person's info is, I mean.)



And saying that something is secure - which is what people who are not
cryptographers think you are doing when you recommend that something -
tends to increase I_i, the importance of the information to be protected.



2nd order effects from the claim of security, granted.  Which effects 
they are, is again subject to the law of averages.




And if the new system isn't secure against expensive attacks, then
overall security may be lessened by its introduction. Even if Users are
increased.



Ah, and therein lies the rub.  Maybe.  This doesn't mean it will.

Typically, the fallacy of false sense of security relies on an extremely 
unusual or difficult attack (aka acceptable risk).  And then ramps up 
that rarity to a bogeyman status.  So that everyone is scared of it. 
And we must, we simply must protect people against it!


Get back to science.  How risky are these things?



I have about 30 internet passwords, only three of which are in any way
important to me - those are the banking ones. I use a simple password
for all the rest, because I don't much care if they are compromised.

But I use the same TLS for all these sites.

Now if that TLS is broken as far as likely attacks against the banks go,
I care. I don't much care if it's secure against attacks against the
other sites like my electricity and gas bills.



(You'll see this play out in phishing.  Banks are the number one target 
for attacks on secure browsing.)



I might use TLS a lot more for non-banking sites, but I don't really
require it to be secure for those. I do require it to be secure for
banking.



You are resting on taught wisdom about TLS, which is oriented to a 
different purpose than security.


In practice, a direct attack against TLS is very rare, a direct attack 
against your browser connection to your bank is very rare, and a direct 
attack against your person is also very rare.


This is why for example we walk the streets without body armour, even in 
Nairobi (this week) or the Beltway (11 years ago).  This is why there 
are few if any (open question?) reported breaches of banks due to the 
BEAST and other menagerie attacks against TLS.


We can look at this many ways, but one way is this:  the margin of fat 
in TLS is obscene.  If it were sentient, it would be beyond obese, it 
would be a circus act.  We can do some dieting.




And I'm sure that some people would like TLS to be secure against the
NSA for, oh, let's say 10 years. Which 1024-bit DHE will not provide.



Well, right.  So, as TLS is supposed to be primarily (these days) 
focussed on protecting your bank account access, and as its auth model 
fails dismally when it comes to phishing, why do we care about something 
so exotic as the NSA?


Get back to basics.  Let's fix the TLS so it actually does the client - 
webserver auth problem first.


1024 is good enough for that, for now, but in the meantime prepare for 
something longer.  (We now have evidence of some espionage spear 
phishing that bothered to crunch 512.  Oh happy day, some real evidence!)


As for the NSA, actually, 1024 works fine for that too, for now.  As 
long as we move them from easy decryption to actually having to use a 
lot of big fat expensive machines, we win.  They then have to focus, 
rather than harvest.  Presumably they have not forgotten how to do that.




If you 

Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Ralph Holz
Hi,

On 09/23/2013 10:47 AM, Peter Gutmann wrote:

 I'm inclined to agree with you, but you might be interested/horrified in the
 1024 bits is enough for anyone debate currently unfolding on the TLS list:
 
 That's rather misrepresenting the situation.  It's a debate between two
 groups, the security practitioners, we'd like a PFS solution as soon as we
 can, and given currently-deployed infrastructure DH-1024 seems to be the best
 bet, and the theoreticians, only a theoretically perfect solution is
 acceptable, even if it takes us forever to get it.
 
 (You can guess from that which side I'm on).

Are you talking about the BCP? Then what you say is not true either.

1) General consensus seems to be that recommending DHE-2048 is not a
good idea in the BCP, because it will not be available now, nor in short
to mid-range time. Voices that utter different opinions are currently a
minority; the BCP authors are not among them.

2) Consequently, the BCP effort is currently focused on deciding whether an ECC
variant of DHE or DHE-1024 should be the recommendation. The factions
seem to be split about equally:

Pro DHE-1024:
* Some say not enough systems provide ECDHE to recommend it, and thus
DHE1024 should be the primary recommendation.
* Some say ECDHE is not trustworthy yet due to implementation
difficulties and/or NSA involvement.

Pro ECDHE:
* Others say Chrome and Firefox will soon, or already do, support ECDHE.
That would leave only the Windows users on IE, and we know that
Windows 8.1 will support it.
* The same people acknowledge the trustworthy argument. The question
is whether it weighs heavily enough.

That seems to be a more accurate description as I understand it from
reading the list. Myself, I am currently still undecided on the issue
but tend slightly towards ECDHE for now -- with any luck, the BCP won't
be ready until we have some more data on the issue.

Ralph


-- 
Ralph Holz
I8 - Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/
Phone +49.89.289.18043
PGP: A805 D19C E23E 6BBB E0C4  86DC 520E 0C83 69B0 03EF


Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Phillip Hallam-Baker
On Sun, Sep 22, 2013 at 2:00 PM, Stephen Farrell
stephen.farr...@cs.tcd.iewrote:



 On 09/22/2013 01:07 AM, Patrick Pelletier wrote:
  1024 bits is enough for anyone

 That's a mischaracterisation I think. Some folks (incl. me)
 have said that 1024 DHE is arguably better than no PFS and
 if current deployments mean we can't ubiquitously do better,
 then we should recommend that as an option, while at the same
 time recognising that 1024 is relatively short.


And the problem appears to be compounded by doofus legacy implementations
that don't support PFS greater than 1024 bits. This comes from a
misunderstanding that DH keysizes only need to be half the RSA length.

So to go above 1024 bits PFS we have to either

1) Wait for all the servers to upgrade (i.e. never do it because they won't
upgrade)

2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
or above'.


I suggest (2)

-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread ianG

On 24/09/13 19:23 PM, Kelly John Rose wrote:


I have always approached that no encryption is better than bad
encryption, otherwise the end user will feel more secure than they
should and is more likely to share information or data they should not
be on that line.



The trap of a false sense of security is far outweighed by the benefit 
of a good enough security delivered to more people.


We're talking multiple orders of magnitude here.  The math that counts is:

   Security = Users * Protection.



iang




Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Peter Gutmann
Stephen Farrell stephen.farr...@cs.tcd.ie writes:

That's a mischaracterisation I think. Some folks (incl. me) have said that
1024 DHE is arguably better than no PFS and if current deployments mean we
can't ubiquitously do better, then we should recommend that as an option,
while at the same time recognising that 1024 is relatively short.

+1.

Peter.




Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Peter Gutmann
Peter Fairbrother zenadsl6...@zen.co.uk writes:
On 24/09/13 05:27, Peter Gutmann wrote:
 Peter Fairbrother zenadsl6...@zen.co.uk writes:
 If you just want a down-and-dirty 2048-bit FS solution which will work 
 today,
 why not just have the websites sign a new RSA-2048 sub-certificate every 
 day?
 Or every few hours? And delete the secret key, of course.

 ... and I guess that puts you firmly in the theoretical/impractical camp.
 Would you care to explain how this is going to work within the TLS protocol?

I'm not sure I understand you.

Something that can sign a new RSA-2048 sub-certificate is called a CA.  For 
a browser, it'll have to be a trusted CA.  What I was asking you to explain is 
how the browsers are going to deal with over half a billion (source: Netcraft 
web server survey) new CAs in the ecosystem when websites sign a new RSA-2048 
sub-certificate.

Peter.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Viktor Dukhovni
On Sat, Sep 21, 2013 at 05:07:02PM -0700, Patrick Pelletier wrote:

 and there was a similar discussion on the OpenSSL list recently,
 with GnuTLS getting blamed for using the ECRYPT recommendations
 rather than 1024:
 
 http://www.mail-archive.com/openssl-users@openssl.org/msg71899.html

GnuTLS is reasonably sound engineering in electing 2048-bit groups
by default on the TLS server.  This inter-operates with the majority
of clients, all the client has to do is to NOT artificially limit
its implementation to 1024 bit EDH.

GnuTLS fails basic engineering principles when it sets a lower
bound of 2048-bit EDH in its TLS client code.  TLS clients do not
negotiate the DH parameters, only the use of EDH, and most server
implementations deployed today will offer 1024-bit EDH groups even
when the symmetric cipher key length is substantially stronger.

Having GnuTLS clients fail to connect to most servers, (and e.g.
with opportunistic TLS SMTP failing over to plain-text as a result)
is not helping anyone!

To migrate the world to stronger EDH, the GnuTLS authors should
work with the other toolkit implementors in parallel with and
through the IETF to get all servers to move to stronger groups.
Once that's done, and the updated implementations are widely deployed,
raise the client minimum EDH group sizes.

Unilaterally raising the client lower-bound is just, to put it
bluntly, pissing into the wind.

-- 
Viktor.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Stephen Farrell


On 09/22/2013 01:07 AM, Patrick Pelletier wrote:
 1024 bits is enough for anyone

That's a mischaracterisation I think. Some folks (incl. me)
have said that 1024 DHE is arguably better than no PFS and
if current deployments mean we can't ubiquitously do better,
then we should recommend that as an option, while at the same
time recognising that 1024 is relatively short.

S.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Bill Frantz
On 9/21/13 at 5:07 PM, c...@funwithsoftware.org (Patrick 
Pelletier) wrote:


I'm inclined to agree with you, but you might be 
interested/horrified in the 1024 bits is enough for anyone 
debate currently unfolding on the TLS list:


http://www.ietf.org/mail-archive/web/tls/current/msg10009.html


I think that this comment is a serious misinterpretation of the 
discussion on the TLS list.


The RFC under discussion is a Best Current Practices (BCP) RFC. 
Some people, including me, think that changes to the protocol or 
current implementations of the protocol are out of scope for a 
BCP document.


There are several implementations of TLS which will only do 1024 
bit Diffie-Hellman ephemeral (DHE)[1]. The question as I see it 
is: Are we better off recommending forward security with 1024 
bit DHE, with the possibility that large organizations can brute 
force it; or using the technique of having the client encrypt 
the keying material with the server's RSA key with the 
probability that the same large organizations have acquired the 
server's secret key.


Now there are good arguments on both sides.

The nearly complete database of who talks to who allows 
interesting communications [2] to be singled out for attacks 
on the 1024 bit DHE. Cracking all the DHE exchanges is probably 
more work than these large organizations can do with current 
technology. However, it is almost certain that these sessions 
will be readable in the not too distant future.


It is widely believed that most large sites have had their RSA 
secret keys compromised, which makes all these sessions 
trivially readable.


I think that the vast majority of TLS list commenters want to 
have TLS 1.3 include fixes for the problems that have been 
identified. However, getting TLS 1.3 approved is at least a 
year, and getting it through the FIPS process will add at least 
another year. We already know that these large organizations 
work to delay better crypto, sometimes using the argument that 
we should wait for the perfect solution rather than 
incrementally adopt better solutions in the mean time.


Cheers - Bill

[1] Implementations which will only do 1024-bit DHE are said to 
include: Apache with OpenSSL, Java, and the Windows crypto 
libraries (used by Internet Explorer). If longer keys are used by 
the other side, they abort the connection attempt.


[2] I actually believe NSA when they say they aren't interested 
in grandma's cookie recipe. I am, but I like good cookies. :o)


---
Bill Frantz | Privacy is dead, get over it. - Scott McNealy | Periwinkle
(408) 356-8506 | www.pwpconsult.com | 16345 Englewood Ave, Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Peter Gutmann
Patrick Pelletier c...@funwithsoftware.org writes:

I'm inclined to agree with you, but you might be interested/horrified in the
"1024 bits is enough for anyone" debate currently unfolding on the TLS list:

That's rather misrepresenting the situation.  It's a debate between two
groups: the security practitioners, "we'd like a PFS solution as soon as we
can, and given currently-deployed infrastructure DH-1024 seems to be the best
bet", and the theoreticians, "only a theoretically perfect solution is
acceptable, even if it takes us forever to get it".

(You can guess from that which side I'm on).

Peter.


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Peter Fairbrother

On 23/09/13 09:47, Peter Gutmann wrote:

Patrick Pelletier c...@funwithsoftware.org writes:


I'm inclined to agree with you, but you might be interested/horrified in the
"1024 bits is enough for anyone" debate currently unfolding on the TLS list:


That's rather misrepresenting the situation.  It's a debate between two
groups: the security practitioners, "we'd like a PFS solution as soon as we
can, and given currently-deployed infrastructure DH-1024 seems to be the best
bet", and the theoreticians, "only a theoretically perfect solution is
acceptable, even if it takes us forever to get it".

(You can guess from that which side I'm on).


Lessee - a forward secrecy solution which either doesn't work now or 
won't work soon - so that it probably won't protect traffic made now for 
its useful lifetime - versus - well, who said anything about 
theoretically perfect?


To hell with perfect. I won't even use the word when describing forward 
secrecy (unless it's an OTP).


If you just want a down-and-dirty 2048-bit FS solution which will work 
today, why not just have the websites sign a new RSA-2048 
sub-certificate every day? Or every few hours? And delete the secret 
key, of course.


Forward secrecy doesn't have to be per-session.


Though frankly, I don't think ubiquitous 1024-bit FS without deployment 
of some software/RFC/standard is possible, and if so that deployment 
should also include a 2048-bit solution as well. And maybe 3072-bit and 
4096-bit solutions too.


And please please please don't call them all the same thing - because 
they aren't.




But, the immediate question before the court of TLS now is - do we 
recommend a 1024-bit FS solution?


And I for one cannot say that you should. In fact I would be horrified 
if you did.



-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread David Kuehling
 Patrick == Patrick Pelletier c...@funwithsoftware.org writes:

 On 9/14/13 11:38 AM, Adam Back wrote:

 Tin foil or not: maybe it's time for 3072 RSA/DH and 384/512 ECC?

 I'm inclined to agree with you, but you might be interested/horrified
 in the "1024 bits is enough for anyone" debate currently unfolding on
 the TLS list:

 http://www.ietf.org/mail-archive/web/tls/current/msg10009.html

I'm even more horrified that the Apache webserver uses 1024-bit
Diffie-Hellman exchange for TLS/SSL with no way to increase the group
size other than modifying and recompiling the sources.  Now that
everybody calls for website operators to enable perfect forward
secrecy, we may in fact see an overall security downgrade.

  http://grokbase.com/t/apache/dev/1393kx4qn8/
  http://blog.ivanristic.com/2013/08/increasing-dhe-strength-on-apache.html

(Of course you can also get PFS via ECDHE, but many production webserver
installations run older openssl versions that only support DHE)

David
-- 
GnuPG public key: http://dvdkhlng.users.sourceforge.net/dk2.gpg
Fingerprint: B63B 6AF2 4EEB F033 46F7  7F1D 935E 6F08 E457 205F



Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread ianG

On 22/09/13 03:07 AM, Patrick Pelletier wrote:

On 9/14/13 11:38 AM, Adam Back wrote:


Tin foil or not: maybe it's time for 3072 RSA/DH and 384/512 ECC?


I'm inclined to agree with you, but you might be interested/horrified in
the "1024 bits is enough for anyone" debate currently unfolding on the
TLS list:

http://www.ietf.org/mail-archive/web/tls/current/msg10009.html



1024 bits is pretty good, and there's some science that says it's about 
right.  E.g., risk management says there is little point in making a 
steel door inside a wicker frame.


The problem is more to do with distraction than anything else.  It is a 
problem that people will argue about the numbers, because they can 
compare numbers, far more than they will argue about the essentials. 
There is a psychological bias to beat one's chest about how tough one is 
on the numbers, and thus prove one is better at this game than the enemy.


Unfortunately, in cryptography, almost always, other factors matter more.

So, while you're all arguing about 1024 versus 4096, what you're not 
doing is delivering a good system.  That delay feeds in to the customer 
equation, and the result is less security.  Even when you finally 
compromise on 1964.13 bits, the result is still less security, because 
of other issues like delays.




and there was a similar discussion on the OpenSSL list recently, with
GnuTLS getting blamed for using the ECRYPT recommendations rather than
1024:

http://www.mail-archive.com/openssl-users@openssl.org/msg71899.html



Yeah, they are getting confused (compatibility failures) from too much 
choice.  Never a good idea.  Take out the choice.  One number.  Get back 
to work.




iang



Re: [Cryptography] RSA equivalent key length/strength

2013-09-24 Thread Peter Gutmann
Peter Fairbrother zenadsl6...@zen.co.uk writes:

If you just want a down-and-dirty 2048-bit FS solution which will work today,
why not just have the websites sign a new RSA-2048 sub-certificate every day?
Or every few hours? And delete the secret key, of course.

... and I guess that puts you firmly in the theoretical/impractical camp.
Would you care to explain how this is going to work within the TLS protocol?
It's easy enough to throw out these hypothetical what-ifs (gimme ten minutes
and I'll dream up half a dozen more, all of them theoretically OK, none of
them feasible), but they need to actually be deployable in the real world, and
that's what's constraining the current debate.

Peter.



Re: [Cryptography] RSA equivalent key length/strength

2013-09-22 Thread Patrick Pelletier

On 9/14/13 11:38 AM, Adam Back wrote:


Tin foil or not: maybe it's time for 3072 RSA/DH and 384/512 ECC?


I'm inclined to agree with you, but you might be interested/horrified in 
the "1024 bits is enough for anyone" debate currently unfolding on the 
TLS list:


http://www.ietf.org/mail-archive/web/tls/current/msg10009.html

and there was a similar discussion on the OpenSSL list recently, with 
GnuTLS getting blamed for using the ECRYPT recommendations rather than 
1024:


http://www.mail-archive.com/openssl-users@openssl.org/msg71899.html

--Patrick



Re: [Cryptography] RSA equivalent key length/strength

2013-09-21 Thread Ben Laurie
On 18 September 2013 22:23, Lucky Green shamr...@cypherpunks.to wrote:

 According to published reports that I saw, NSA/DoD pays $250M (per
 year?) to backdoor cryptographic implementations. I have knowledge of
 only one such effort. That effort involved DoD/NSA paying $10M to a
 leading cryptographic library provider to both implement and set as
 the default the obviously backdoored Dual_EC_DRBG as the default RNG.


Surprise! The leading blah blah was RSA:
http://stream.wsj.com/story/latest-headlines/SS-2-63399/SS-2-332655/.

Re: [Cryptography] [cryptography] RSA equivalent key length/strength

2013-09-19 Thread Joachim Strömbergson

Aloha!

Lucky Green wrote:
 Moti Yung and others wrote a book back in the 90's (or perhaps
 80's), that detailed the strength of various RSA key lengths over
 time. I am too lazy to look up the reference or locate the book on my
 bookshelf. Moti: help me out here? :-)

Can't help out with that. But I think that the ECRYPT Yearly Reports on
keylengths and algorithms are a great source for these kinds of
questions. The latest (from 2012) can be found here:

http://www.ecrypt.eu.org/documents/D.SPA.20.pdf

Unfortunately ECRYPT II has come to an end and I'm not certain the
report will be updated anymore. That would be a loss, since having updated
estimates on keys and what algorithms to use is really helpful (IMHO).

- -- 
With kind regards, Yours

Joachim Strömbergson - Always in harmonic oscillation.



Re: [Cryptography] RSA equivalent key length/strength

2013-09-19 Thread Phillip Hallam-Baker
On Wed, Sep 18, 2013 at 5:23 PM, Lucky Green shamr...@cypherpunks.to wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 2013-09-14 08:53, Peter Fairbrother wrote:

  I get that 1024 bits is about on the edge, about equivalent to 80
  bits or a little less, and may be crackable either now or sometime
  soon.

 Moti Yung and others wrote a book back in the 90's (or perhaps 80's),
 that detailed the strength of various RSA key lengths over time. I am
 too lazy to look up the reference or locate the book on my bookshelf.
 Moti: help me out here? :-)

 According to published reports that I saw, NSA/DoD pays $250M (per
 year?) to backdoor cryptographic implementations. I have knowledge of
 only one such effort. That effort involved DoD/NSA paying $10M to a
 leading cryptographic library provider to both implement and set as
 the default the obviously backdoored Dual_EC_DRBG as the default RNG.

 This was $10M wasted. While this vendor may have had a dominating
 position in the market place before certain patents expired, by the
 time DoD/NSA paid the $10M, few customers used that vendor's
 cryptographic libraries.

 There is no reason to believe that the $250M per year that I have seen
 quoted as used to backdoor commercial cryptographic software is spent
 to any meaningful effect.


The most corrosive thing about the whole affair is the distrust it has sown.

I know a lot of ex-NSA folk and none of them has ever once asked me to drop
a backdoor. And I have worked very closely with a lot of government
agencies.


Your model is probably wrong. Rather than going out to a certain crypto
vendor and asking them to drop a backdoor, I think they choose the vendor
on the basis that they have a disposition to a certain approach and then
they point out that given that they have a whole crypto suite based on EC
wouldn't it be cool to have an EC based random number generator.

I think that the same happens in IETF. I don't think it very likely Randy
Bush was bought off by the NSA when he blocked deployment of DNSSEC for ten
years by killing OPT-IN. But I suspect that a bunch of folk were whispering
in his ear that he needed to be strong and resist what was obviously a
blatant attempt at commercial sabotage etc. etc.


I certainly think that the NSA is behind the attempt to keep the Internet
under US control via ICANN which is to all intents a quango controlled by
the US government. For example, ensuring that the US has the ability to
impose a digital blockade by dropping a country code TLD out of the root.
Right now that is a feeble threat because ICANN would be over in a minute
if they tried. But deployment of DNSSEC will give them the power to do that
and make it stick (and no, the key share holders cannot override the veto,
the shares don't work without the key hardware).

A while back I proposed a scheme based on a quorum signing proposal that
would give countries like China and Brazil the ability to assure themselves
that they were not subjected to the threat of future US capture. I have
also proposed that countries have a block of IPv6 and BGP-AS space assigned
as a 'Sovereign Reserve'. Each country would get a /32 which is more than
enough to allow them to ensure that an artificial shortage of IPv6
addresses can't be used as a blockade. If there are government folk reading
this list who are interested I can show them how to do it without waiting
on permission from anyone.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-18 Thread Lucky Green

On 2013-09-14 08:53, Peter Fairbrother wrote:

 I get that 1024 bits is about on the edge, about equivalent to 80
 bits or a little less, and may be crackable either now or sometime
 soon.

Moti Yung and others wrote a book back in the 90's (or perhaps 80's),
that detailed the strength of various RSA key lengths over time. I am
too lazy to look up the reference or locate the book on my bookshelf.
Moti: help me out here? :-)

According to published reports that I saw, NSA/DoD pays $250M (per
year?) to backdoor cryptographic implementations. I have knowledge of
only one such effort. That effort involved DoD/NSA paying $10M to a
leading cryptographic library provider to both implement and set as
the default the obviously backdoored Dual_EC_DRBG as the default RNG.

This was $10M wasted. While this vendor may have had a dominating
position in the market place before certain patents expired, by the
time DoD/NSA paid the $10M, few customers used that vendor's
cryptographic libraries.

There is no reason to believe that the $250M per year that I have seen
quoted as used to backdoor commercial cryptographic software is spent
to any meaningful effect.

- ---Lucky



Re: [Cryptography] RSA equivalent key length/strength

2013-09-16 Thread Tero Kivinen
ianG writes:
 On 14/09/13 18:53 PM, Peter Fairbrother wrote:
  But, I wonder, where do these longer equivalent figures come from?
 
 http://keylength.com/ (is a better repository to answer your question.)

I assume that web site only takes account of time; it does not base
its calculations on the cost of doing the cracking, which would also
include the space needed to do the actual calculations.

An old paper from the year 2000 which also takes the space
requirements into account,

http://www.emc.com/emc-plus/rsa-labs/historical/a-cost-based-security-analysis-key-lengths.htm

says that to crack a 1620-bit RSA key you need 10^10 years, with 158000
machines each having 1.2*10^14 bytes (120 TB) of memory (a year-2000 $10
trillion estimate).

The cost of that amount of memory today would still be quite high (at
$3-$10 per GB, the price would be hundreds of thousands to over a
million dollars per machine).

Most key size calculations on the net only take into account the time
needed, not the space, and thus they assume that memory is free.
For symmetric crypto cracking that is true, as you do not need that
much memory; for public keys it is not true for some of the
algorithms.
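The memory-cost claim above can be checked with some rough arithmetic (the machine count and memory size are the quoted year-2000 figures; the $/GB range is the one assumed in the text):

```python
# Rough arithmetic behind the memory-cost claim: 158000 machines with
# 1.2e14 bytes (120 TB) each, priced at an assumed $3-$10 per GB.
machines = 158_000
bytes_per_machine = 1.2e14
gb_per_machine = bytes_per_machine / 1e9   # 120 000 GB

for usd_per_gb in (3, 10):
    per_machine = gb_per_machine * usd_per_gb
    total = per_machine * machines
    print(f"${usd_per_gb}/GB: ${per_machine:,.0f} per machine, "
          f"${total / 1e9:,.1f}B in memory alone")
```

At $3/GB that is $360,000 per machine; at $10/GB, $1.2 million, matching the "hundreds of thousands to over a million dollars" figure, and tens of billions of dollars across the whole fleet.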
-- 
kivi...@iki.fi


[Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother
Recommendations are given herein as: symmetric_key_length - 
recommended_equivalent_RSA_key_length, in bits.


Looking at Wikipedia,  I see:

As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in 
strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit 
symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys. RSA 
claims that 1024-bit keys are likely to become crackable some time 
between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. 
An RSA key length of 3072 bits should be used if security is required 
beyond 2030.[6]


http://www.emc.com/emc-plus/rsa-labs/standards-initiatives/key-size.htm

That page doesn't give any actual recommendations or long-term dates 
from RSA now. It gives the traditional recommendations 80 - 1024 and 
112 - 2048, and a 2000 Lenstra/Verheul minimum commercial 
recommendation for 2010 of 78 - 1369.



NIST key management guidelines further suggest that 15360-bit RSA keys 
are equivalent in strength to 256-bit symmetric keys.[7]


http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf

NIST also give the traditional recommendations, 80 - 1024 and 112 - 
2048, plus 128 - 3072, 192 - 7680, 256 - 15360.
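Where such symmetric-equivalent figures come from can be sketched from the heuristic running time of the general number field sieve (GNFS), the best public factoring algorithm. The function below is an illustration, not any standards body's method: it drops the o(1) term from the GNFS complexity, so it overshoots the NIST table by a handful of bits.

```python
from math import log

def gnfs_equivalent_bits(modulus_bits: int) -> float:
    """Approximate symmetric-equivalent strength of an RSA/DH modulus n,
    from the heuristic GNFS running time
        L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
    with the o(1) term dropped, expressed in bits (log base 2)."""
    ln_n = modulus_bits * log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3)
    return work / log(2)

for bits in (1024, 1536, 2048, 3072, 7680, 15360):
    print(bits, round(gnfs_equivalent_bits(bits)))
```

This yields roughly 87 bits for a 1024-bit modulus and 117 for 2048, versus NIST's 80 and 112; the gap is the ignored o(1) term and differing assumptions about memory and constants, which is exactly why the published tables disagree with each other by a few bits.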




I get that 1024 bits is about on the edge, about equivalent to 80 bits 
or a little less, and may be crackable either now or sometime soon.


But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's the 
general wisdom.


Is this an area where NSA have shaped the worldwide cryptography 
marketplace to make it more tractable to advanced cryptanalytic 
capabilities being developed by NSA/CSS, by perhaps greatly 
exaggerating the equivalent lengths?


And by emphasising the difficulty of using longer keys?

As I said, I do not know. I merely raise the possibility.


[ Personally, I recommend 1,536 bit RSA keys and DH primes for security 
to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if paranoid/high 
value; and not using RSA at all for longer term security. I don't know 
whether someone will build that sort of quantum computer one day, but 
they might. ]



-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Paul Hoffman
Also see RFC 3766 from almost a decade ago; it has stood up fairly well.

--Paul Hoffman


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Perry E. Metzger
On Sat, 14 Sep 2013 09:31:22 -0700 Paul Hoffman
paul.hoff...@vpnc.org wrote:
 Also see RFC 3766 from almost a decade ago; it has stood up fairly
 well.

For those not aware, the document, by Paul and Hilarie Orman,
discusses equivalent key strengths and practical brute force methods,
giving extensive detail on how all calculations were done.

A URL for the lazy:

http://tools.ietf.org/html/rfc3766

It is very well done. I'd like to see an update done but it does
feel like the methodology was well laid out and is difficult to
argue with in general. The detailed numbers are slightly different
from others out there, but not so much as to change the general
recommendations that have been floating around.

Their table, from April 2004, looked like this:

   +-------------+-----------+--------------+--------------+
   | System      |           |              |              |
   | requirement | Symmetric | RSA or DH    | DSA subgroup |
   | for attack  | key size  | modulus size | size         |
   | resistance  | (bits)    | (bits)       | (bits)       |
   | (bits)      |           |              |              |
   +-------------+-----------+--------------+--------------+
   |  70         |  70       |   947        |  129         |
   |  80         |  80       |  1228        |  148         |
   |  90         |  90       |  1553        |  167         |
   | 100         | 100       |  1926        |  186         |
   | 150         | 150       |  4575        |  284         |
   | 200         | 200       |  8719        |  383         |
   | 250         | 250       | 14596        |  482         |
   +-------------+-----------+--------------+--------------+

They had some caveats, such as the statement that if TWIRL-like
machines appear, we could presume an 11-bit reduction in strength --
see the RFC itself for details.

Perry
-- 
Perry E. Metzgerpe...@piermont.com


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Peter Fairbrother

On 14/09/13 17:14, Perry E. Metzger wrote:

On Sat, 14 Sep 2013 16:53:38 +0100 Peter Fairbrother
zenadsl6...@zen.co.uk wrote:

NIST also give the traditional recommendations, 80 - 1024 and 112
- 2048, plus 128 - 3072, 192 - 7680, 256 - 15360.

[...]

But, I wonder, where do these longer equivalent figures come from?

I don't know, I'm just asking - and I chose Wikipedia because that's
the general wisdom.

[...]

[ Personally, I recommend 1,536 bit RSA keys and DH primes for
security to 2030, 2,048 if 1,536 is unavailable, 4,096 bits if
paranoid/high value; and not using RSA at all for longer term
security. I don't know whether someone will build that sort of
quantum computer one day, but they might. ]


On what basis do you select your numbers? Have you done
calculations on the time it takes to factor numbers using modern
algorithms to produce them?


Yes, some - but I don't believe that's enough. Historically, it would 
not have been (and wasn't) - it doesn't take account of algorithm 
development.


I actually based the 1,536-bit figure on the old RSA factoring 
challenges, and how long it took to break them.


We are publicly at 768 bits now, and that's very expensive 
http://eprint.iacr.org/2010/006.pdf - and, over the last twenty years 
the rate of public advance has been about 256 bits per decade.


So at that rate 1,536 bits would become possible but very expensive in 
2043, and would still be impossible in 2030.



If 1,024 is possible but very expensive for NSA now, and 256 bits per 
decade is right, then 1,536 may just be on the verge of edging into 
possibility in 2030 - but I think progress is going to slow (unless they 
develop quantum computers).


We have already found many of the easy-to-find advances in theory.



-- Peter Fairbrother


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread ianG

On 14/09/13 18:53 PM, Peter Fairbrother wrote:


But, I wonder, where do these longer equivalent figures come from?



http://keylength.com/ (is a better repository to answer your question.)



iang


Re: [Cryptography] RSA equivalent key length/strength

2013-09-14 Thread Adam Back

On Sat, Sep 14, 2013 at 12:56:02PM -0400, Perry E. Metzger wrote:

http://tools.ietf.org/html/rfc3766

  +-------------+-----------+--------------+--------------+
  | requirement | Symmetric | RSA or DH    | DSA subgroup |
  | for attack  | key size  | modulus size | size         |
  +-------------+-----------+--------------+--------------+
  | 100         | 100       | 1926         | 186          |
  +-------------+-----------+--------------+--------------+

if TWIRL like machines appear, we could presume an 11 bit reduction in
strength


100-11 = 89 bits.  Bitcoin is pushing 75 bits/year right now with GPUs
and 65nm ASICs (not sure what balance).  Does that place a ~2000-bit
modulus around the safety margin of 56-bit DES when that was being
argued about (the previous generation of NSA key-strength sabotage)?

Anyone have some projections for the cost of a TWIRL to crack 2048-bit
RSA?  Projecting 2048 out to 2030 doesn't seem like a hugely
conservative estimate.  Bear in mind NSA would probably be willing to
drop $1B one-off to be able to crack public key crypto for the next
decade.  There have been cost, performance, power, and density
improvements since TWIRL was proposed.  Maybe the single largest
employer of mathematicians can squeeze a few incremental optimizations
out of the TWIRL algorithm or implementation strategy.

Tin foil or not: maybe it's time for 3072 RSA/DH and 384/512 ECC?

Adam