Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-02 Thread James A. Donald

On 2010-10-01 3:23 PM, Chris Palmer wrote:

In my quantitative, non-hand-waving, repeated experience with many clients in
many business sectors using a wide array of web application technology
stacks, almost all web apps suffer a network and disk I/O bloat factor of 5,
10, 20, ...


Which does not, however, make bloated RSA keys any the less evil.

All the evils you describe get worse under https.

A badly designed https page is likely to require the client to perform 
lots and lots and lots of RSA operations in order to respond to the user 
click.


A 2048 bit operation takes around 0.01 seconds, which is insignificant 
on its own.  But an https connection takes several such operations, and 
lots of https connections add up.
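The per-operation cost and the 1024-vs-2048 ratio are easy to sanity-check with nothing but stdlib Python, using a full-width modular exponentiation as a stand-in for an RSA private-key operation. This is an illustrative sketch; the absolute numbers depend entirely on the hardware.

```python
import random
import time

def modexp_time(bits, trials=20):
    """Average time of one full-size modular exponentiation, a rough
    stand-in for an RSA private-key operation at this modulus size."""
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-width modulus
    d = random.getrandbits(bits) | (1 << (bits - 1))      # full-width exponent
    m = random.randrange(2, n)
    start = time.perf_counter()
    for _ in range(trials):
        pow(m, d, n)
    return (time.perf_counter() - start) / trials

t1024 = modexp_time(1024)
t2048 = modexp_time(2048)
print(f"1024-bit: {t1024 * 1000:.2f} ms, 2048-bit: {t2048 * 1000:.2f} ms, "
      f"ratio: {t2048 / t1024:.1f}x")
```

Naive schoolbook modular exponentiation scales roughly with the cube of the modulus size, which is where the frequently quoted factor of ~8-9 for the 1024-to-2048 jump comes from.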


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Samuel Neves
 On 01-10-2010 02:41, Victor Duchovni wrote:
> Should we be confident that 4-prime RSA is stronger at 2048 bits than
> 2-prime is at 1024? At the very least, it is not stronger against ECM
> (yes ECM is not effective at this factor size) and while GNFS is not
> known to benefit from small factors, is this enough evidence that
> 4-prime 2048-bit keys are effective?
>

It is slightly stronger than RSA-1024 against ECM, since ECM is then
performed modulo a 2048 bit value instead of a 1024 bit one. This slows
down arithmetic by a factor between 3 and 4 (Karatsuba vs Schoolbook
multiplication). Further, there are now 3 factors to find by ECM instead
of just 1.

Going by asymptotic complexities, factoring 4-prime RSA-2048 by NFS
should cost around 2^116 operations. Using ECM to find a 512-bit prime
costs around 2^93 elliptic curve group additions (add arithmetic cost
here). Factoring RSA-1024 by NFS costs around 2^80 operations.
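Those exponents come from the standard asymptotic cost formulas, stated here for reference (the message quotes only the resulting numbers):

```latex
% Number Field Sieve, applied to the modulus N:
%   roughly 2^{80} work for 1024-bit N, roughly 2^{116} for 2048-bit N
C_{\mathrm{NFS}}(N) = \exp\!\left( \left(\tfrac{64}{9}\right)^{1/3}
                      (\ln N)^{1/3} (\ln \ln N)^{2/3} \right)

% ECM, cost to find one prime factor p (multiplied by the cost of
% arithmetic modulo the full 2048-bit N):
%   roughly 2^{93} curve-group additions for a 512-bit p
C_{\mathrm{ECM}}(p) = \exp\!\left( \sqrt{2 \ln p \,\ln \ln p} \right)
```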

Thus, I believe that 4-prime RSA-2048 is slightly easier than 2-prime
RSA-2048, but still significantly harder than RSA-1024.

Best regards,
Samuel Neves



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Chris Palmer
Thor Lancelot Simon writes:

> > believe that the speed of RSA is the limiting factor for web application
> 
> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.

In my quantitative, non-hand-waving, repeated experience with many clients in
many business sectors using a wide array of web application technology
stacks, almost all web apps suffer a network and disk I/O bloat factor of 5,
10, 20, ...

There are these sites where page-loads incur the creation of 30 TCP
connections. Pages have 20 tiny PNGs for navigation elements, all served
over non-persistent HTTP connections with the Cache-Control: header set to
no-cache. Each page view incurs a re-load of these static images. Take a
look at those images: why are they 35KB each? Oh, they have unoptimized
color palettes and 500 bytes of worthless comments and header junk and
actually they are twice as large as they appear on screen (the developer
shrinks them on the page with height= and width= attributes). To speed up
page loads, they serve the images from 10 distinct hostnames (to trick the
browser into parallelizing the downloads more). "What's spriting?"

How long does it take the browser to compile your 500KB of JavaScript? To
run it?

Compression is not turned on. The database is choked. The web is a front-end
for an oversubscribed and badly-implemented SOAP service. (I've seen backend
messaging services where the smallest message type was 200KB.) The 80KB
JavaScript file contains 40KB of redundant whitespace and is
dynamically-generated and so uncacheable. (I usually find a few XSS bugs
while I'm at it --- good luck properly escaping user data in the context of
arbitrary JavaScript code, but never mind that...)
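The redundant-whitespace point is easy to demonstrate with stdlib tools. The payload below is a synthetic stand-in for a JavaScript file that is half whitespace, and the "minifier" is a crude illustration, not a real one:

```python
import gzip

# Synthetic stand-in for a JS file padded with redundant whitespace.
code = 'function f(x) {        return x + 1;        }\n' * 1000
minified = ' '.join(code.split())  # crude whitespace collapsing, for illustration

raw, stripped = len(code.encode()), len(minified.encode())
gz = len(gzip.compress(code.encode()))
print(f"raw: {raw} B, whitespace-stripped: {stripped} B, gzipped: {gz} B")
```

Minification and gzip attack the same bloat from two directions; a dynamically generated, uncacheable, uncompressed file gets neither.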

The .NET ViewState field and/or the cookies are huge, like 20KB (I've seen
100KB) of serialized object state. It seems fine in the office, but from
home on my asymmetric cable line, performance blows --- it takes too long to
get the huge requests to the server! And yeah, your 20 PNGs are in the same
domain as your cookie, so that huge cookie goes up on every request. Oops...

I'm sure Steven's friend is competent. A competent web developer, or a
competent network architect? I have indeed seen this 12x cost factor before.
Every single time, it was a case where nobody knew the whole story of how
the app works. (Layering and encapsulation are good for software designs,
but bad for people.) Every single time, there were obvious and blatant ways
to improve page-load latency and/or transaction throughput by a factor of 9
or 12 or more. It translates directly into dollars: lower infrastructure
costs, higher conversion rates. Suddenly SSL is free.

I'm still fully with you; it's just that of all the 9x pessimalities, the
I/O ones matter way more.

Recommended reading:

http://oreilly.com/catalog/9780596529307

http://gmailblog.blogspot.com/2008/05/need-for-speed-path-to-faster-loading.html

"""...a popular network news site's home page required about 180 requests
to fully load... [but for Gmail] it now takes as few as four requests from
the click of the "Sign in" button to the display of your inbox"""

Performance is a security concern, not just for DoS reasons but because you
have to be able to walk the walk to convince people that your security
mechanism will work.


The concern about the impact of 2048-bit RSA on low-power devices is
well-placed. But there too, content-layer concerns dominate overall, perhaps
even more so.

Again, I'm not waving hands: I've measured. You can measure too, the tools
are free.


-- 
http://noncombatant.org/



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Victor Duchovni
On Thu, Sep 30, 2010 at 01:32:38PM -0400, Thor Lancelot Simon wrote:

> On Thu, Sep 30, 2010 at 05:18:56PM +0100, Samuel Neves wrote:
> > 
> > One solution would be to use 2048-bit 4-prime RSA. It would maintain the
> > security of RSA-2048, enable the reusing of the modular arithmetic units
> > of 1024 bit VLSI chips and keep ECM factoring at bay. The added cost
> > would only be a factor of ~2, instead of ~8.
> 
> This is a neat idea!  But it means changing the TLS standard, yes?

Presumably, this would only speed-up private-key operations. Public-key
operations (which is all one sees on the wire) should be the same whether
there are 2 or 4 unknown factors, one just uses the 2048-bit modulus.

Even the signing CA would not know how many primes were used to construct
the public key. Provided software implementations supported 4-prime
private keys, I would naively expect everyone else to see no
difference.

Should we be confident that 4-prime RSA is stronger at 2048 bits than
2-prime is at 1024? At the very least, it is not stronger against ECM
(yes ECM is not effective at this factor size) and while GNFS is not
known to benefit from small factors, is this enough evidence that
4-prime 2048-bit keys are effective?

-- 
Viktor.



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Samuel Neves
 On 30-09-2010 18:32, Thor Lancelot Simon wrote:
> On Thu, Sep 30, 2010 at 05:18:56PM +0100, Samuel Neves wrote:
>> One solution would be to use 2048-bit 4-prime RSA. It would maintain the
>> security of RSA-2048, enable the reusing of the modular arithmetic units
>> of 1024 bit VLSI chips and keep ECM factoring at bay. The added cost
>> would only be a factor of ~2, instead of ~8.
> This is a neat idea!  But it means changing the TLS standard, yes?
>

IIRC, multi-prime RSA is already supported in standards, but not in
practice (read: OpenSSL):

http://tools.ietf.org/html/rfc3447

Best regards,
Samuel Neves



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Jack Lloyd
On Thu, Sep 30, 2010 at 01:32:38PM -0400, Thor Lancelot Simon wrote:
> On Thu, Sep 30, 2010 at 05:18:56PM +0100, Samuel Neves wrote:
> > 
> > One solution would be to use 2048-bit 4-prime RSA. It would maintain the
> > security of RSA-2048, enable the reusing of the modular arithmetic units
> > of 1024 bit VLSI chips and keep ECM factoring at bay. The added cost
> > would only be a factor of ~2, instead of ~8.
> 
> This is a neat idea!  But it means changing the TLS standard, yes?

It would not require changing the standard, since the only way to tell
that my RSA modulus N is a product of 4 primes rather than 2 primes is
to, well, factor it. And if one can do that there are bigger issues,
of course.
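A sketch of both points -- the public operation uses only (n, e), and the private-key speedup Samuel describes comes from doing the exponentiation modulo each smaller prime and recombining by CRT. Toy key sizes and a demo-grade Miller-Rabin, not a vetted implementation:

```python
import math
import random
from functools import reduce

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test; fine for a demo, not for real keys."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(c):
            return c

def keygen(total_bits, k, e=65537):
    """k-prime RSA keypair; the public key (n, e) looks the same for any k."""
    while True:
        primes = [random_prime(total_bits // k) for _ in range(k)]
        if len(set(primes)) < k:
            continue
        phi = reduce(lambda a, p: a * (p - 1), primes, 1)
        if math.gcd(e, phi) == 1:
            n = reduce(lambda a, p: a * p, primes, 1)
            return n, e, pow(e, -1, phi), primes

def crt_decrypt(c, d, primes):
    """Private-key op done modulo each small prime, then recombined by
    CRT -- the source of the speedup over one full-size exponentiation."""
    n = reduce(lambda a, p: a * p, primes, 1)
    x = 0
    for p in primes:
        mp = pow(c % p, d % (p - 1), p)   # c^d mod p, exponent reduced via Fermat
        np = n // p
        x = (x + mp * pow(np, -1, p) * np) % n
    return x

n, e, d, primes = keygen(1024, 4)   # toy size; the thread discusses 2048
m = random.randrange(2, n)
c = pow(m, e, n)                    # public op: prime count is invisible
assert crt_decrypt(c, d, primes) == pow(c, d, n) == m
```

With k primes of size n/k bits, each CRT exponentiation is much cheaper than one modulo the full n, which is why 4-prime RSA-2048 costs roughly a factor of ~2 over 2-prime RSA-1024 rather than ~8.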

However, multi-prime RSA is patented in the US, by Compaq (now HP) I
believe: US patent 7231040, applied for in 1998, so in force for at
least 5 more years. I don't know if there are patents on this in
non-US locales.

-Jack



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Thu, Sep 30, 2010 at 05:18:56PM +0100, Samuel Neves wrote:
> 
> One solution would be to use 2048-bit 4-prime RSA. It would maintain the
> security of RSA-2048, enable the reusing of the modular arithmetic units
> of 1024 bit VLSI chips and keep ECM factoring at bay. The added cost
> would only be a factor of ~2, instead of ~8.

This is a neat idea!  But it means changing the TLS standard, yes?

Thor



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Samuel Neves
On 30-09-2010 16:41, Thor Lancelot Simon wrote:
> On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
>> Thor Lancelot Simon writes:
>> 
>>> a significant net loss of security, since the huge increase in
>>> computation required will delay or prevent the deployment of
>>> "SSL everywhere".
>> 
>> That would only happen if we (as security experts) allowed web
>> developers to believe that the speed of RSA is the limiting factor
>> for web application performance.
> 
> At 1024 bits, it is not. But you are looking at a factor of *9* 
> increase in computational cost when you go immediately to 2048 bits. 
> At that point, the bottleneck for many applications shifts, 
> particularly those which are served by offload engines specifically 
> to move the bottleneck so it's not RSA in the first place.

...

> At present, these devices use the highest performance modular-math 
> ASICs available and can just about keep up with current web 
> applications' transaction rates. Make the modular math an order of 
> magnitude slower and suddenly you will find you can't put these 
> devices in front of some applications at all.

One solution would be to use 2048-bit 4-prime RSA. It would maintain the
security of RSA-2048, enable the reusing of the modular arithmetic units
of 1024 bit VLSI chips and keep ECM factoring at bay. The added cost
would only be a factor of ~2, instead of ~8.

Best regards,
Samuel Neves



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread James Muir
On 10-09-30 11:41 AM, Thor Lancelot Simon wrote:
> On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
>> Thor Lancelot Simon writes:
>>
>>> a significant net loss of security, since the huge increase in computation
>>> required will delay or prevent the deployment of "SSL everywhere".
>>
>> That would only happen if we (as security experts) allowed web developers to
>> believe that the speed of RSA is the limiting factor for web application
>> performance.
> 
> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.  At that point,
> the bottleneck for many applications shifts, particularly those which are
> served by offload engines specifically to move the bottleneck so it's not
> RSA in the first place.

It sounds like a good time to switch to 224-bit ECC.  You could even use
256-bit ECC, which is comparable to 3072-bit RSA (according to the table
on page 5 of the SEC 2 document).

-James





Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Thu, Sep 30, 2010 at 01:36:47PM -0400, Paul Wouters wrote:
[I wrote]:
>> Also, consider devices such as deep-inspection firewalls or application
>> traffic managers which must by their nature offload SSL processing in
>> order to inspect and possibly modify data
>
> You mean it will be harder for MITM attacks on SSL. Isn't that a good thing? 
> :P

No, I don't mean that, because if the administrator of site _X_ decides
to do SSL processing on a front-end device instead of on the HTTP servers,
for whatever reason, that is simply not a MITM attack.

To characterize it as one is basically obfuscatory.

When I talk about "SSL everywhere" being an immediate opportunity, I mean
that, from my point of view, it looks like there's a growing realization
that _for current key sizes and server workloads_, for many high transaction
rate sites like Gmail, using SSL is basically free -- so you might as well,
and we all end up better off.

Multiplying the cost of the SSL session negotiation by a small factor will
change that for a few sites, but multiplying it by a factor somewhere from
8 to 11 (depending on different measurements posted here in previous
discussions) will change it for a lot more.

That's very unfortunate, from my point of view, because I believe it is
a much greater net good to have most or almost all HTTP traffic encrypted
than it is for individual websites to have keys that expire in 3 years,
but are resistant to factoring for 20 years.

The balance is just plain different for end keys and CA keys.  A
one-size-fits-all approach using the key length appropriate for the CA
will hinder universal deployment of SSL/TLS at the end sites.  That is
not a good thing.

Thor



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Paul Wouters

On Thu, 30 Sep 2010, Thor Lancelot Simon wrote:


That would only happen if we (as security experts) allowed web developers to
believe that the speed of RSA is the limiting factor for web application
performance.


At 1024 bits, it is not.  But you are looking at a factor of *9* increase
in computational cost when you go immediately to 2048 bits.  At that point,
the bottleneck for many applications shifts, particularly those which are
served by offload engines specifically to move the bottleneck so it's not
RSA in the first place.


I'm sure it's nothing compared to the 3 layers of URL shortener redirects and
their latency :P


Also, consider devices such as deep-inspection firewalls or application
traffic managers which must by their nature offload SSL processing in
order to inspect and possibly modify data


You mean it will be harder for MITM attacks on SSL. Isn't that a good thing? :P


This too will hinder the deployment of "SSL everywhere", and handwaving
about how for some particular application, the bottleneck won't be at
the front-end server even if it is an order of magnitude slower for it
to do the RSA operation itself will not make that problem go away.


The SSL everywhere problem has been a political one, not a technical one.
I am sure the "free market" can deal with putting SSL everywhere, if that
expectation comes from every internet user - instead of that internet
user clicking away many warnings about self-signed certs, redirects and
SSL man-in-the-middle "protection".

Paul



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Marsh Ray

On 09/30/2010 10:41 AM, Thor Lancelot Simon wrote:

On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:


Thor Lancelot Simon writes:

a significant net loss of security, since the huge increase in computation
required will delay or prevent the deployment of "SSL everywhere".


That would only happen if we (as security experts) allowed web developers to
believe that the speed of RSA is the limiting factor for web application
performance.


+1.

Why are multi-core GHz server-oriented CPUs providing hardware 
acceleration for AES rather than RSA?


There may be reasons: AES side channels, patents, marketing, etc..

But if it really were such a big limitation you'd think it'd be a 
feature to sell server chips by now. Maybe in a sense it already is. 
What else are you going to do on that sixth core you stick behind the 
same shared main memory bus?



At 1024 bits, it is not.  But you are looking at a factor of *9* increase
in computational cost when you go immediately to 2048 bits.  At that point,
the bottleneck for many applications shifts, particularly those which are
served by offload engines specifically to move the bottleneck so it's not
RSA in the first place.


I could be wrong, but I get the sense that there's not really a high 
proportion of sites which are:


A. currently running within an order of magnitude of maxing out server 
CPU utilization on 1024 bit RSA, and


B. using session resumption to its fullest (eliminates RSA when it can 
be used), and


C. an upgrade to raw CPU power would represent a big problem for their 
budget.


OTOH, if it increased the latency and/or power consumption for 
battery-powered mobile client devices that could be noticeable for a lot 
of people.



Also, consider devices such as deep-inspection firewalls or application
traffic managers which must by their nature offload SSL processing in
order to inspect and possibly modify data before application servers see
it.  The inspection or modification function often does not parallelize
nearly as well as the web application logic itself, and so it is often
not practical to handle it in a distributed way and "just add more CPU".


The unwrapping of the SSL should parallelize just fine. I think the IT 
term for that is "scalability". We should be so lucky that all our 
problems could be solved by throwing more silicon at them!


Well, if there are higher-layer inspection methods (say virus scanning) 
which don't parallelize, wouldn't they have the same issue without 
encryption?



At present, these devices use the highest performance modular-math ASICs
available and can just about keep up with current web applications'
transaction rates.  Make the modular math an order of magnitude slower
and suddenly you will find you can't put these devices in front of some
applications at all.


Or the vendors get to sell a whole new generation of boxes again.


This too will hinder the deployment of "SSL everywhere",


It doesn't bother me the least if deployment of dragnet-scale 
interception-friendly SSL is hindered. But you may be right that it has 
some kind of effect on overall adoption.



and handwaving
about how for some particular application, the bottleneck won't be at
the front-end server even if it is an order of magnitude slower for it
to do the RSA operation itself will not make that problem go away.


Most sites do run "some particular application". For them, it's either a 
problem, an annoyance, or not noticeable at all. The question is what 
proportion of situations are going to be noticeably impacted.


I imagine increasing the per-handshake costs from, say, 40 core-ms to 
300 core-ms will have wildly varying effects depending on the system. It 
might not manifest as a linear increase of anything that people care to 
measure.
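Marsh's 40 and 300 core-ms figures (his own illustrative numbers, not measurements from the thread) translate into per-core handshake rates like this:

```python
def handshakes_per_core_second(core_ms_per_handshake):
    """Upper bound on full TLS handshakes one core can negotiate per
    second, counting nothing but the stated per-handshake CPU cost."""
    return 1000.0 / core_ms_per_handshake

for cost_ms in (40, 300):
    rate = handshakes_per_core_second(cost_ms)
    print(f"{cost_ms} core-ms/handshake -> {rate:.1f} handshakes/core/s")
```

That is 25 versus about 3.3 handshakes per core per second -- whether that matters depends on how many of a site's connections resume sessions rather than do full handshakes.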


I agree, it does sound a bit hand-wavy though. :-)

- Marsh



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Steven Bellovin

On Sep 30, 2010, at 11:41:18 AM, Thor Lancelot Simon wrote:

> On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
>> Thor Lancelot Simon writes:
>> 
>>> a significant net loss of security, since the huge increase in computation
>>> required will delay or prevent the deployment of "SSL everywhere".
>> 
>> That would only happen if we (as security experts) allowed web developers to
>> believe that the speed of RSA is the limiting factor for web application
>> performance.
> 
> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.  At that point,
> the bottleneck for many applications shifts, particularly those which are
> served by offload engines specifically to move the bottleneck so it's not
> RSA in the first place.
> 
> Also, consider devices such as deep-inspection firewalls or application
> traffic managers which must by their nature offload SSL processing in
> order to inspect and possibly modify data before application servers see 
> it.  The inspection or modification function often does not parallelize
> nearly as well as the web application logic itself, and so it is often
> not practical to handle it in a distributed way and "just add more CPU".
> 
> At present, these devices use the highest performance modular-math ASICs
> available and can just about keep up with current web applications'
> transaction rates.  Make the modular math an order of magnitude slower
> and suddenly you will find you can't put these devices in front of some
> applications at all.
> 
> This too will hinder the deployment of "SSL everywhere", and handwaving
> about how for some particular application, the bottleneck won't be at
> the front-end server even if it is an order of magnitude slower for it
> to do the RSA operation itself will not make that problem go away.
> 
While I'm not convinced you're correct, I think that many posters here
underestimate the total cost of SSL.  A friend of mine -- a very competent
friend -- was working on a design for a somewhat sensitive website.  He
really wanted to use SSL -- but the *system* would have cost at least 12x
as much.  There were many issues, but one of them is that the average visit
to a web site lasts very few pages, which means that you have to amortize
the cost of the SSL negotiation over very little actual activity.


--Steve Bellovin, http://www.cs.columbia.edu/~smb







Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Thor Lancelot Simon
On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:
> Thor Lancelot Simon writes:
> 
> > a significant net loss of security, since the huge increase in computation
> > required will delay or prevent the deployment of "SSL everywhere".
> 
> That would only happen if we (as security experts) allowed web developers to
> believe that the speed of RSA is the limiting factor for web application
> performance.

At 1024 bits, it is not.  But you are looking at a factor of *9* increase
in computational cost when you go immediately to 2048 bits.  At that point,
the bottleneck for many applications shifts, particularly those which are
served by offload engines specifically to move the bottleneck so it's not
RSA in the first place.

Also, consider devices such as deep-inspection firewalls or application
traffic managers which must by their nature offload SSL processing in
order to inspect and possibly modify data before application servers see 
it.  The inspection or modification function often does not parallelize
nearly as well as the web application logic itself, and so it is often
not practical to handle it in a distributed way and "just add more CPU".

At present, these devices use the highest performance modular-math ASICs
available and can just about keep up with current web applications'
transaction rates.  Make the modular math an order of magnitude slower
and suddenly you will find you can't put these devices in front of some
applications at all.

This too will hinder the deployment of "SSL everywhere", and handwaving
about how for some particular application, the bottleneck won't be at
the front-end server even if it is an order of magnitude slower for it
to do the RSA operation itself will not make that problem go away.

Thor



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Kevin W. Wall
Thor Lancelot Simon wrote:
> See below, which includes a handy pointer to the Microsoft and Mozilla
> policy statements "requiring" CAs to cease signing anything shorter than
> 2048 bits.
<...snip...>
> These certificates (the end-site ones) have lifetimes of about 3 years
> maximum.  Who here thinks 1280 bit keys will be factored by 2014?  *Sigh*.

No one that I know of (unless the NSA folks are hiding their quantum computers
from us :). But you can blame this one on NIST, not Microsoft or Mozilla.
They are pushing the CAs to make this happen and I think 2014 is one of
the important cutoff dates, such as the date that the CAs have to stop
issuing certs with 1024-bit keys.

I can dig up the NIST URL once I get back to work, assuming anyone actually
cares.

-kevin
-- 
Kevin W. Wall
"The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents."-- Nathaniel Borenstein, co-creator of MIME



Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-30 Thread Chris Palmer
Thor Lancelot Simon writes:

> a significant net loss of security, since the huge increase in computation
> required will delay or prevent the deployment of "SSL everywhere".

That would only happen if we (as security experts) allowed web developers to
believe that the speed of RSA is the limiting factor for web application
performance.

That would only happen if we did not understand how web applications work.

Thankfully, we do understand how web applications work, and we therefore
advise our colleagues and clients in a way that takes the whole problem
space of web application security/performance/availability into account.

Sure, 2048 is overkill. But our most pressing problems are much bigger and
very different. The biggest security problem, usability, rarely involves any
math beyond rudimentary statistics...


-- 
http://noncombatant.org/



2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-09-29 Thread Thor Lancelot Simon
See below, which includes a handy pointer to the Microsoft and Mozilla
policy statements "requiring" CAs to cease signing anything shorter than
2048 bits.

As I think I said last week -- was it last week? -- it's my belief that
cutting everything on the Web over to 2048 bits rather than, say, 1280
or 1536 bits in the near term will be a significant net loss of security,
since the huge increase in computation required will delay or prevent the
deployment of "SSL everywhere".

These certificates (the end-site ones) have lifetimes of about 3 years
maximum.  Who here thinks 1280 bit keys will be factored by 2014?  *Sigh*.

- Forwarded message from Rob Stradling via RT  -

Lines: 327
Return-Path: owner-openssl-...@openssl.org
X-Original-To: t...@panix.com
Received: from mail1.panix.com (mail1.panix.com [166.84.1.72])
by mailbackend.panix.com (Postfix) with ESMTP id B4B4031A88
for ; Wed, 29 Sep 2010 15:54:48 -0400 (EDT)
Received: from master.openssl.org (master.openssl.org [195.30.6.166])
by mail1.panix.com (Postfix) with ESMTP id 2E38A1F094
for ; Wed, 29 Sep 2010 15:54:48 -0400 (EDT)
Received: by master.openssl.org (Postfix)
id 428621EAE8D5; Wed, 29 Sep 2010 21:54:16 +0200 (CEST)
Received: by master.openssl.org (Postfix, from userid 29101)
id 40DB41EAE8D4; Wed, 29 Sep 2010 21:54:16 +0200 (CEST)
Received: by master.openssl.org (Postfix, from userid 29101)
id EE8551EAE8D2; Wed, 29 Sep 2010 21:54:15 +0200 (CEST)
Subject: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits
From: Rob Stradling via RT 
In-Reply-To: <201009291252.23829.rob.stradl...@comodo.com>
References: 
<201009291252.23829.rob.stradl...@comodo.com>
Message-ID: 
X-RT-Loop-Prevention: openssl.org
RT-Ticket: openssl.org #2354
Managed-by: RT 3.4.5 (http://www.bestpractical.com/rt/)
RT-Originator: rob.stradl...@comodo.com
Cc: openssl-...@openssl.org
MIME-Version: 1.0
X-RT-Original-Encoding: utf-8
Content-type: multipart/mixed; boundary="--=_1285790055-45870-1"
Date: Wed, 29 Sep 2010 21:54:15 +0200 (CEST)
Sender: owner-openssl-...@openssl.org
Precedence: bulk
Reply-To: openssl-...@openssl.org
X-Sender: "Rob Stradling via RT" 
X-List-Manager: OpenSSL Majordomo [version 1.94.5]
X-List-Name: openssl-dev
X-Bogosity: Ham, tests=bogofilter, spamicity=0.00, version=1.1.7


NIST (SP800-57 Part 1) recommends a minimum RSA key size of 2048-bits beyond 
2010.  From January 1st 2011, in order to comply with the current Microsoft[1] 
and Mozilla[2] CA Policies, Commercial CAs will no longer be permitted to 
issue certificates with RSA key sizes of <2048-bit.

Please accept the attached patch, which increases the default RSA key size to 
2048-bits for the "req", "genrsa" and "genpkey" apps.

Thanks.

[1] http://technet.microsoft.com/en-us/library/cc751157.aspx says:
"we have advised Certificate Authorities...to transition their subordinate and 
end-certificates to 2048-bit RSA certificates, and to complete this transition 
for any root certificate distributed by the Program no later than December 31, 
2010".

[2] https://wiki.mozilla.org/CA:MD5and1024 says:
"December 31, 2010 - CAs should stop issuing intermediate and end-entity 
certificates from roots with RSA key sizes smaller than 2048 bits. All CAs 
should stop issuing intermediate and end-entity certificates with RSA key size 
smaller than 2048 bits under any root".

Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Office Tel: +44.(0)1274.730505
Office Fax: +44.(0)1274.730909
www.comodo.com

COMODO CA Limited, Registered in England No. 04058690
Registered Office:
  3rd Floor, 26 Office Village, Exchange Quay,
  Trafford Road, Salford, Manchester M5 3EQ



[Attachment: default_2048bit_rsa.patch]

Index: apps/genrsa.c
===
RCS file: /v/openssl/cvs/openssl/apps/genrsa.c,v
retrieving revision 1.40
diff -U 5 -r1.40 genrsa.c
--- apps/genrsa.c   1 Mar 2010 14:22:21 -   1.40
+++ apps/genrsa.c   28 Sep 2010 14:44:44 -000