Thor Lancelot Simon writes:

> > believe that the speed of RSA is the limiting factor for web application
> 
> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.

In my quantitative, non-hand-waving, repeated experience with many clients in
many business sectors using a wide array of web application technology
stacks, almost all web apps suffer a network and disk I/O bloat factor of 5,
10, 20, ...

There are sites where a single page load opens 30 TCP
connections. Pages have 20 tiny PNGs for navigation elements, all served
over non-persistent HTTP connections with the Cache-Control: header set to
no-cache. Each page view incurs a re-load of these static images. Take a
look at those images: why are they 35KB each? Oh, they have unoptimized
color palettes and 500 bytes of worthless comments and header junk and
actually they are twice as large as they appear on screen (the developer
shrinks them on the page with height= and width= attributes). To speed up
page loads, they serve the images from 10 distinct hostnames (to trick the
browser into parallelizing the downloads more). "What's spriting?"
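
You can see all of this without guessing. Here is a minimal sketch, assuming
the third-party requests library and a hypothetical list of the page's image
URLs, that reports each asset's size and cache headers:

    # Sketch: audit static image responses for size and cacheability.
    # IMAGE_URLS is a hypothetical list you would scrape from the page.
    import requests

    IMAGE_URLS = [
        "https://example.com/nav/home.png",
        "https://example.com/nav/about.png",
        # ...and the rest of the navigation icons...
    ]

    for url in IMAGE_URLS:
        r = requests.get(url)
        size = len(r.content)
        cache = r.headers.get("Cache-Control", "(none)")
        print(f"{url}: {size} bytes, Cache-Control: {cache}")
        if "no-cache" in cache or "no-store" in cache:
            print("  -> re-downloaded on every page view")
        if size > 10_000:
            print("  -> suspiciously large for a navigation icon")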

How long does it take the browser to compile your 500KB of JavaScript? To
run it?

Compression is not turned on. The database is choked. The web app is a
front-end for an oversubscribed and badly-implemented SOAP service. (I've
seen backend
messaging services where the smallest message type was 200KB.) The 80KB
JavaScript file contains 40KB of redundant whitespace and is
dynamically-generated and so uncacheable. (I usually find a few XSS bugs
while I'm at it --- good luck properly escaping user data in the context of
arbitrary JavaScript code, but never mind that...)
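
Checking this stuff is a few minutes' work. A sketch along the same lines
(again assuming the requests library; the URL is made up) that shows whether
a resource is served compressed and how much gzip would save:

    # Sketch: is this resource served compressed, and how much would gzip save?
    # JS_URL is a made-up example.
    import gzip
    import requests

    JS_URL = "https://example.com/static/app.js"

    # Ask for the raw, uncompressed representation.
    raw = requests.get(JS_URL, headers={"Accept-Encoding": "identity"}).content

    # Ask again allowing gzip; requests decompresses transparently, so look at
    # the Content-Encoding header to see what the server actually sent.
    resp = requests.get(JS_URL, headers={"Accept-Encoding": "gzip"})
    encoding = resp.headers.get("Content-Encoding", "(none)")

    print(f"uncompressed size:       {len(raw)} bytes")
    print(f"server Content-Encoding: {encoding}")
    print(f"gzipped size would be:   {len(gzip.compress(raw))} bytes")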

The .NET ViewState field and/or the cookies are huge, like 20KB (I've seen
100KB) of serialized object state. It seems fine in the office, but from
home on my asymmetric cable line, performance blows --- it takes too long to
get the huge requests to the server! And yeah, your 20 PNGs are in the same
domain as your cookie, so that huge cookie goes up on every request. Oops...
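
The arithmetic there is easy to run yourself. A back-of-the-envelope sketch
(every number below is an illustrative assumption, not a measurement of any
particular site):

    # Sketch: upstream cost of a fat cookie on an asymmetric home connection.
    # All numbers are illustrative assumptions.
    cookie_bytes = 20 * 1024         # 20KB cookie sent with every request
    requests_per_page = 30           # images, CSS, JS, XHRs on the cookie's domain
    upstream_bits_per_sec = 512_000  # 512 kbit/s cable upload

    upload_bytes = cookie_bytes * requests_per_page
    seconds = upload_bytes * 8 / upstream_bits_per_sec
    print(f"{upload_bytes / 1024:.0f}KB uploaded per page view, "
          f"~{seconds:.1f} seconds of upstream spent on cookies alone")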

I'm sure Steven's friend is competent. But competent as a web developer, or
as a network architect? I have indeed seen this 12x cost factor before.
Every single time, it was a case where nobody knew the whole story of how
the app worked. (Layering and encapsulation are good for software designs,
but bad for people.) Every single time, there were obvious and blatant ways
to improve page-load latency and/or transaction throughput by a factor of 9
or 12 or more. It translates directly into dollars: lower infrastructure
costs, higher conversion rates. Suddenly SSL is free.

I'm still fully with you; it's just that of all the 9x pessimalities, the
I/O ones matter way more.

Recommended reading:

http://oreilly.com/catalog/9780596529307

http://gmailblog.blogspot.com/2008/05/need-for-speed-path-to-faster-loading.html

"""...a popular network news site's home page required about a 180 requests
to fully load... [but for Gmail] it now takes as few as four requests from
the click of the "Sign in" button to the display of your inbox"""

Performance is a security concern, not just for DoS reasons but because you
have to be able to walk the walk to convince people that your security
mechanism will work.


The concern about the impact of 2048-bit RSA on low-power devices is
well-placed. But there too, content-layer concerns dominate overall, perhaps
even more so.

Again, I'm not waving hands: I've measured. You can measure too; the tools
are free.
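
For the crypto side of the comparison, 'openssl speed rsa1024 rsa2048' does
it from the command line; a rough sketch of the same measurement, assuming a
recent pyca/cryptography package, looks like this:

    # Sketch: time RSA private-key (signing) operations at 1024 vs. 2048 bits.
    # Assumes a recent pyca/cryptography package (pip install cryptography).
    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def signs_per_second(bits, duration=2.0):
        key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
        msg = b"x" * 64
        count = 0
        start = time.perf_counter()
        while time.perf_counter() - start < duration:
            key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
            count += 1
        return count / duration

    r1024 = signs_per_second(1024)
    r2048 = signs_per_second(2048)
    print(f"RSA-1024: {r1024:.0f} signs/s")
    print(f"RSA-2048: {r2048:.0f} signs/s")
    print(f"slowdown: {r1024 / r2048:.1f}x")

Put those numbers next to the I/O numbers from the sketches above and the
handshake cost stops looking like the interesting problem.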


-- 
http://noncombatant.org/
