Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Victor Duchovni
On Thu, Sep 30, 2010 at 01:32:38PM -0400, Thor Lancelot Simon wrote:

> On Thu, Sep 30, 2010 at 05:18:56PM +0100, Samuel Neves wrote:
> > 
> > One solution would be to use 2048-bit 4-prime RSA. It would maintain the
> > security of RSA-2048, enable the reusing of the modular arithmetic units
> > of 1024 bit VLSI chips and keep ECM factoring at bay. The added cost
> > would only be a factor of ~2, instead of ~8.
> 
> This is a neat idea!  But it means changing the TLS standard, yes?

Presumably, this would only speed-up private-key operations. Public-key
operations (which is all one sees on the wire) should be the same whether
there are 2 or 4 unknown factors, one just uses the 2048-bit modulus.

Even the signing CA would not know how many primes were used to construct
the public key. Provided software implementations supported 4-prime
private keys, I would naively expect everyone else to see no
difference.
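
To make that concrete, here is a minimal multi-prime RSA sketch in Python
(toy code with invented helper names such as keygen and private_op, not the
patch under discussion). The public key is just (n, e) either way; only the
CRT private-key operation changes, and the per-prime exponentiation sizes are
where the ~2x-instead-of-~8x figure quoted above comes from.

# Toy multi-prime RSA sketch (illustrative only, do not use for real keys).
# Under schoolbook arithmetic a modular exponentiation costs roughly b^3 for
# b-bit operands, so 4 exponentiations mod 512-bit primes (4-prime RSA-2048)
# cost about a quarter of 2 exponentiations mod 1024-bit primes (2-prime
# RSA-2048), and about twice 2 exponentiations mod 512-bit primes (RSA-1024).
# Requires Python 3.8+ for math.prod and pow(x, -1, m).
import random
from math import gcd, prod

def is_probable_prime(n, rounds=40):
    """Miller-Rabin test, adequate for a sketch."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(p):
            return p

def keygen(modulus_bits, num_primes, e=65537):
    while True:
        primes = [random_prime(modulus_bits // num_primes) for _ in range(num_primes)]
        lam = prod(p - 1 for p in primes)
        if gcd(e, lam) == 1 and len(set(primes)) == num_primes:
            break
    return (prod(primes), e), (primes, pow(e, -1, lam))

def private_op(c, primes, d):
    """One small exponentiation per prime, recombined with the CRT."""
    n = prod(primes)
    x = 0
    for p in primes:
        n_p = n // p
        x += pow(c % p, d % (p - 1), p) * n_p * pow(n_p, -1, p)
    return x % n

m = 0xC0FFEE
for k in (2, 4):                       # 2-prime vs. 4-prime, same 2048-bit modulus
    (n, e), (primes, d) = keygen(2048, k)
    c = pow(m, e, n)                   # public operation: identical on the wire
    assert private_op(c, primes, d) == m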

Should we be confident that 4-prime RSA is stronger at 2048 bits than
2-prime is at 1024? At the very least, it is not stronger against ECM
(yes ECM is not effective at this factor size) and while GNFS is not
known to benefit from small factors, is this enough evidence that
4-prime 2048-bit keys are effective?

-- 
Viktor.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Chris Palmer
Thor Lancelot Simon writes:

> > believe that the speed of RSA is the limiting factor for web application
> 
> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.

In my quantitative, non-hand-waving, repeated experience with many clients in
many business sectors using a wide array of web application technology
stacks, almost all web apps suffer a network and disk I/O bloat factor of 5,
10, 20, ...

There are these sites where page-loads incur the creation of 30 TCP
connections. Pages have 20 tiny PNGs for navigation elements, all served
over non-persistent HTTP connections with the Cache-Control: header set to
no-cache. Each page view incurs a re-load of these static images. Take a
look at those images: why are they 35KB each? Oh, they have unoptimized
color palettes and 500 bytes of worthless comments and header junk and
actually they are twice as large as they appear on screen (the developer
shrinks them on the page with height= and width= attributes). To speed up
page loads, they serve the images from 10 distinct hostnames (to trick the
browser into parallelizing the downloads more). "What's spriting?"
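
A stdlib-only Python sketch along these lines (the asset URLs are
placeholders) is enough to flag the no-cache, no-compression, oversized-asset
problems described above on your own pages:

# Fetch a few asset URLs and report transfer size plus caching/compression
# headers. The URL list is a placeholder; point it at your own static assets.
import urllib.request

ASSETS = ["https://example.com/", "https://example.com/logo.png"]  # placeholders

for url in ASSETS:
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        size = len(resp.read())                      # bytes actually transferred
        cache = resp.headers.get("Cache-Control", "(none)")
        enc = resp.headers.get("Content-Encoding", "(none)")
    print(f"{url}\n  bytes={size}  Cache-Control={cache}  Content-Encoding={enc}")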

How long does it take the browser to compile your 500KB of JavaScript? To
run it?

Compression is not turned on. The database is choked. The web app is a front-end
for an oversubscribed and badly-implemented SOAP service. (I've seen backend
messaging services where the smallest message type was 200KB.) The 80KB
JavaScript file contains 40KB of redundant whitespace and is
dynamically-generated and so uncacheable. (I usually find a few XSS bugs
while I'm at it --- good luck properly escaping user data in the context of
arbitrary JavaScript code, but never mind that...)

The .NET ViewState field and/or the cookies are huge, like 20KB (I've seen
100KB) of serialized object state. It seems fine in the office, but from
home on my asymmetric cable line, performance blows --- it takes too long to
get the huge requests to the server! And yeah, your 20 PNGs are in the same
domain as your cookie, so that huge cookie goes up on every request. Oops...

I'm sure Steven's friend is competent. A competent web developer, or a
competent network architect? I have indeed seen this 12x cost factor before.
Every single time, it was a case where nobody knew the whole story of how
the app works. (Layering and encapsulation are good for software designs,
but bad for people.) Every single time, there were obvious and blatant ways
to improve page-load latency and/or transaction throughput by a factor of 9
or 12 or more. It translates directly into dollars: lower infrastructure
costs, higher conversion rates. Suddenly SSL is free.

I'm still fully with you; it's just that of all the 9x pessimalities, the
I/O ones matter way more.

Recommended reading:

http://oreilly.com/catalog/9780596529307

http://gmailblog.blogspot.com/2008/05/need-for-speed-path-to-faster-loading.html

"""...a popular network news site's home page required about a 180 requests
to fully load... [but for Gmail] it now takes as few as four requests from
the click of the "Sign in" button to the display of your inbox"""

Performance is a security concern, not just for DoS reasons but because you
have to be able to walk the walk to convince people that your security
mechanism will work.


The concern about the impact of 2048-bit RSA on low-power devices is
well-placed. But there too, content-layer concerns dominate overall, perhaps
even more so.

Again, I'm not waving hands: I've measured. You can measure too; the tools
are free.
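
For the crypto side of the measurement, a minimal sketch like the following
(assuming a recent version of the third-party PyCA "cryptography" package;
the message and timing window are arbitrary) reproduces the rough
2048-vs-1024 private-key cost ratio Thor cites:

# Time RSA private-key (signing) operations at 1024 and 2048 bits; the ratio
# should land in the same ballpark as the ~8-9x figure discussed in this thread.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def sign_rate(bits, seconds=2.0):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    msg = b"benchmark message"
    count, start = 0, time.perf_counter()
    while time.perf_counter() - start < seconds:
        key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
        count += 1
    return count / (time.perf_counter() - start)

r1024 = sign_rate(1024)
r2048 = sign_rate(2048)
print(f"1024-bit: {r1024:.0f} signs/s, 2048-bit: {r2048:.0f} signs/s, "
      f"ratio ~{r1024 / r2048:.1f}x")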


-- 
http://noncombatant.org/

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: [tt] Random numbers created out of nothing

2010-10-01 Thread Eugen Leitl
On Thu, Sep 30, 2010 at 11:23:39PM -0400, Jerry Leichter wrote:
> On Sep 30, 2010, at 9:24 AM, Eugen Leitl wrote:
>> Right from the snake-oil-security-dept.
> Really?  Just what about it is snake oil?  Quantum vacuum fluctuations  

That QM RNGs are special in comparison to other RNGs, which have been
shipping in mainstream systems (both chipsets and CPUs) for many years.

> are real, they're random in as strong a sense as anything we have access 
> to.  True random numbers are a fundamental primitive.  They're talking 
> 200 Mb/second for 100 Euros.  That's not bad, if they can get there.

How about GBit/s, for free?

http://spectrum.ieee.org/computing/hardware/intel-makes-a-digital-coin-tosser-for-future-processors

> What in the claims here is false, misleading, solves a problem for which 

The claim that QM RNGs produce qualitatively different
entropy than dozens of existing, deployed hardware RNGs.

> there are better proven solutions, or in any other way deserves to be 
> dismissed?  (OK, the headline is a bit over the top - but that's not the 
> fault of the researchers)

-- 
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


RE: 'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps

2010-10-01 Thread Brad Hill
Kevin W. Wall wrote:
> isn't the pre-shared key version of W3C's XML Encrypt also going to be
> vulnerable to a padding oracle attack.

Any implementation that returns distinguishable error conditions for invalid
padding is vulnerable; XML encryption is no more or less so if used in such a
manner.  But XML encryption in particular seems much less likely to be used
in this manner than other encryption code.
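
To illustrate that condition, here is a minimal CBC sketch (assuming a recent
version of the PyCA "cryptography" package; the function names are invented).
The "verbose" variant, which reports padding failures distinctly, is exactly
the kind of oracle an attacker needs; the safer variant collapses every
failure into one indistinguishable error, and a real fix would authenticate
the ciphertext (MAC or AEAD) before decrypting at all:

# Minimal sketch of the "distinguishable error" condition behind padding
# oracle attacks (illustrative names, PyCA "cryptography" package assumed).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding as sym_padding

KEY = os.urandom(16)

def encrypt(plaintext: bytes) -> bytes:
    iv = os.urandom(16)
    padder = sym_padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def decrypt_verbose(blob: bytes) -> bytes:
    """Vulnerable: padding failures are distinguishable from other failures."""
    iv, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = sym_padding.PKCS7(128).unpadder()
    try:
        return unpadder.update(padded) + unpadder.finalize()
    except ValueError:
        # A distinct error message or status code is the padding oracle.
        raise ValueError("bad padding")

def decrypt_safe(blob: bytes) -> bytes:
    """Less leaky: one generic failure whatever went wrong (still verify a MAC first)."""
    try:
        return decrypt_verbose(blob)
    except Exception:
        raise ValueError("decryption failed")   # same signal regardless of cause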

The primary use case you cite for PSK, an asynchronous message bus, is 
significantly less likely to return oracular information to an attacker than a
synchronous service.  And due to the rather unfavorable performance of
XML encryption, in practice it is rarely used for synchronous messages.  
Confidentiality for web service calls is typically provided at the transport
layer rather than the message layer.  SAML tokens used in redirect-based
sign-on protocols are the only common use of XML encryption I'm aware 
of where the recipient might provide a padding oracle, but these messages
are always signed as well.

Brad Hill

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com


Re: 2048 bits, damn the electrons! [...@openssl.org: [openssl.org #2354] [PATCH] Increase Default RSA Key Size to 2048-bits]

2010-10-01 Thread Samuel Neves
 On 01-10-2010 02:41, Victor Duchovni wrote:
> Should we be confident that 4-prime RSA is stronger at 2048 bits than
> 2-prime is at 1024? At the very least, it is not stronger against ECM
> (yes ECM is not effective at this factor size) and while GNFS is not
> known to benefit from small factors, is this enough evidence that
> 4-prime 2048-bit keys are effective?
>

It is slightly stronger than RSA-1024 against ECM, since ECM is then
performed modulo a 2048-bit value instead of a 1024-bit one. This slows
down the underlying arithmetic by a factor of between 3 and 4 (a factor
of 3 with Karatsuba, 4 with schoolbook multiplication). Further, there
are now 3 factors to find by ECM instead of just 1.

Going by asymptotic complexities, factoring 4-prime RSA-2048 by NFS
should cost around 2^116 operations. Using ECM to find a 512-bit prime
costs around 2^93 elliptic curve group additions (to which the cost of
the underlying 2048-bit modular arithmetic per addition still needs to
be added). Factoring RSA-1024 by NFS costs around 2^80 operations.
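
For anyone who wants to sanity-check those orders of magnitude, a small
sketch that plugs the textbook L-notation complexities for GNFS and ECM into
Python (the o(1) terms are dropped, so these are uncalibrated asymptotics;
estimates calibrated against real factoring records, such as the oft-quoted
~2^80 for RSA-1024, come out somewhat lower):

# Back-of-envelope evaluation of L_n[1/3, (64/9)^(1/3)] for GNFS and
# L_p[1/2, sqrt(2)] for ECM, expressed as powers of two.
from math import exp, log, log2

def gnfs_bits(modulus_bits):
    """log2 of the GNFS asymptotic cost for a modulus of the given size."""
    ln_n = modulus_bits * log(2)
    return log2(exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3)))

def ecm_bits(factor_bits):
    """log2 of the ECM asymptotic cost to find a prime factor of the given size."""
    ln_p = factor_bits * log(2)
    return log2(exp((2 * ln_p * log(ln_p)) ** 0.5))

print(f"GNFS, 2048-bit n : ~2^{gnfs_bits(2048):.0f}")  # ~2^117 (vs. ~2^116 above)
print(f"GNFS, 1024-bit n : ~2^{gnfs_bits(1024):.0f}")  # ~2^87 asymptotic; ~2^80 calibrated
print(f"ECM, 512-bit p   : ~2^{ecm_bits(512):.0f}")    # ~2^93 group additions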

Thus, I believe that 4-prime RSA-2048 is slightly easier than 2-prime
RSA-2048, but still significantly harder than RSA-1024.

Best regards,
Samuel Neves

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com