Ann & Lynn Wheeler wrote:

> the original requirement for SSL deployment was that it was on from the
> original URL entered by the user. The drop-back to using SSL for only small
> subset ... was based on computational load caused by SSL cryptography .... in
> the online merchant scenario, it cut thruput by 90-95%; alternative to handle
> the online merchant scenario for total user interaction would have required
> increasing the number of servers by factor of 10-20.
> 
> One possibility is that the institution has increased the server capacity ...
> and/or added specific hardware to handle the cryptographic load.

Moore's law helped immensely here. In the last five years systems have gotten 
roughly 8 times faster, which cut the processing cost of crypto dramatically. 
I'm familiar with one site that has 24 servers evenly divided across three 
geographical areas; SSL-enabling the entire site required only one new server 
at each location. Meanwhile, load-balancing SSL terminator/accelerators have 
improved even faster, thanks to advances in load balancing, network 
compression, etc. So putting one of them in front of a previously naïve 
load-balancing scheme, like basic round-robin, would provide enough offloading 
to SSL-enable an entire site.
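
As a back-of-envelope check on that math, here's a quick sketch; the 8x CPU 
figure and the 10-20x crypto multiplier come straight from the numbers above, 
while the Python helper and its constants are purely illustrative:

# Back-of-envelope: how Moore's law shrinks the SSL server bill.
CRYPTO_MULTIPLIER_2005 = (10, 20)  # Lynn's 10-20x server factor for full SSL
CPU_SPEEDUP = 8                    # ~8x faster hardware over the last 5 years

def extra_servers(base, multiplier, speedup):
    """Servers needed beyond the base count once CPUs are `speedup` times faster."""
    return max(0.0, base * multiplier / speedup - base)

for m in CRYPTO_MULTIPLIER_2005:
    print(f"{m}x crypto load / 8x CPUs: ~{extra_servers(24, m, CPU_SPEEDUP):.0f} "
          f"extra servers on a 24-server site")

Moore's law alone gets you from 10-20x down to 1.25-2.5x; the remaining gap, 
down to one extra server per location, is what the terminator/accelerator 
offload buys.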

The big drawback is that those who want to follow NIST's recommendation to 
migrate to 2048-bit keys will be returning to the 2005-era overhead. Dan 
Kaminsky provided some benchmarks in a different thread on this list [1] 
showing 2048-bit keys performing at 1/9th the speed of 1024-bit; my own 
internal benchmarks have been closer to 1/7th to 1/8th. That's about what you'd 
expect, since RSA private-key operations scale roughly with the cube of the 
modulus length, so doubling the key size costs about 8x. Either way, that's 
back in line with the 90-95% overhead stated above. Meaning, in Dan's words, 
"2048 ain't happening."

There are some possibilities my co-workers and I have discussed. For purely 
internal systems, TLS-PSK (RFC 4279) provides symmetric encryption via 
pre-shared keys, which gives us whitelisting and removes the asymmetric crypto 
entirely; a sketch follows below. Alternatively, we could step the key size up 
in accordance with Moore's law: each time a certificate expires, issue the 
replacement at the next larger key length, reaching 2048 bits over several 
years.
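
To make the TLS-PSK option concrete, here's a minimal server-side sketch using 
the PSK callbacks that were added in Python 3.13's ssl module; the identity 
names and key table are invented for illustration:

# Minimal TLS-PSK (RFC 4279) server context; identities/keys are made up.
import ssl

PSK_TABLE = {"internal-app-01": bytes.fromhex("6d79736563726574")}

def fetch_psk(identity):
    """Look up the pre-shared key. Unknown identities get an empty key, so
    their handshakes fail -- that is the whitelisting effect."""
    return PSK_TABLE.get(identity, b"")

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # RFC 4279 suites are TLS <= 1.2
ctx.set_ciphers("PSK")                        # symmetric-only: no RSA handshake
ctx.set_psk_server_callback(fetch_psk, identity_hint="internal-psk")
# Then wrap accepted sockets as usual: ctx.wrap_socket(conn, server_side=True)

No certificate, no per-handshake modular exponentiation, and only clients 
holding a provisioned key can connect.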

Besides, I think we all know that NIST's 2010 algorithm transition isn't going 
to happen on schedule. At the IEEE Key Management Summit back in May (IIRC), 
Elaine Barker from NIST presented a back-off talk in which NIST only "strongly 
recommends" 112-bit security by 2011 and pushes the real deadline out to the 
end of 2013. That was subsequently released as draft SP 800-131, available for 
comment [2].

Of course, the industry has had five years to plan for this, and no major 
vendor seems to be ready. Most of the comments on the draft are essentially 
vendors sighing with relief.

[1] http://www.mail-archive.com/cryptography@metzdowd.com/msg11245.html
[2] http://csrc.nist.gov/publications/drafts/800-131/draft-sp800-131_spd-june2010.pdf

Eric Lengvenis
InfoSec Arch.
