Hi there,

Before I reply - why the cross-posting? There's been a lot of cross-posting
between mod_ssl-users and openssl-users - are there good reasons for it? I can
only assume that subjects fit for both lists at the same time probably involve
people who are on both lists anyway ...

On Wed, 14 Feb 2001, Ben Laurie wrote:

> [EMAIL PROTECTED] wrote:
> > 
> > Thanks Ben for cheering me up. Perhaps if I have a machine that can change
> > its IP number constantly I could get round it. Or perhaps not. Maybe I
> > could disable session caching altogether. This is only a development machine
> > anyway (and has been trashed many times).
> 
> That wasn't exactly what I meant: in a live server you do less RSA and
> more symmetric because of session caching.

Yup, but if I might go so far as to cheer John up even less (<grin>), you also
need to take a close look at your expected usage. Eg. Let's imagine you run a
banking site, with session caching and an expectation that a large percentage of
your hits will come from logged-in, session-resuming users - ie. you don't have
millions of users hitting your SSL site randomly one time each, but typically
have thousands of users hitting your SSL site and staying "involved". You might
then calculate an expected load and an expected profile, build a little
contingency into your capacity, and then away you go.

And probably every Monday morning at 9am your site will *drown*.

This is an example of one way to look at a problem, think you've nailed it, and
then get embarrassingly thumped for your troubles. But there are more if you look
for them. Especially on something as fickle and unpredictable as the internet.
Ever heard of the "slashdot effect"?

The only safe way is to test all kinds of extreme cases - the ones designers
call "hypothetical" which of course translates to "shouldn't happen, but of
course *will* happen the moment we flick the switch". Eg. yes - turn off session
caching, and slam your server with a vast number of brand new session attempts
all at the same time, and try to simulate the numbers and usage you'd expect if,
miraculously, tonnes of your users decide to conspire to do this sort of thing.
Remember, such a "spike" can come from anywhere - the stock market may go
spiralling dow... oh wait, it has ... well, anyway - there could be any number
of reasons why your "expected profile" may just get turfed out the window -
reality has a strange sense of humour when it comes to forecasts and
expectations. You can't rely, in cases like this, on users organising themselves
so that they don't all ask for the same thing at the same time. Whatever it is
that causes one user to want to hit your site at a given moment may be a
perfectly good reason for the rest of them to do the same. Especially at 9am
Monday morning. :-)
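
For what it's worth, a rig for that first test doesn't have to be fancy. Here's
a rough Python sketch of the idea - the host, port and client count are made-up
placeholders, and a real benchmarking tool (or even "openssl s_time") will give
you better numbers, but the point is that every connection deliberately starts
a brand new session:

import socket
import ssl
import threading

TEST_HOST = "test.example.com"   # made-up test server - point this at your own box
TEST_PORT = 443
CLIENTS = 200                    # number of simultaneous "users" - tune to taste

def full_handshake_client():
    # A fresh context per connection means there's no client-side session to
    # offer, so every single connection costs the server a full handshake.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # test rig only - never in production
    with socket.create_connection((TEST_HOST, TEST_PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=TEST_HOST) as tls:
            tls.sendall(b"GET / HTTP/1.0\r\n\r\n")
            tls.recv(4096)

threads = [threading.Thread(target=full_handshake_client) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Watch what the server's CPU does while that runs - that's the RSA cost Ben was
talking about.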

Similarly, switch on session caching, get a few "users" (ie. simulated test
clients if need be) connected (ie. sessions negotiated), then see how well your
session caching and application concurrency hold up if those "few" (a relative
term of course, it could equal 20000 depending on who you are) users then slam
away at the server resuming sessions all the time. It's another kind of load -
one that's heavy on any system that has contention (eg. locking for shared
resources, such as a session cache, or more likely, something in the
web-server's application logic - if every request to your application servers
adds a log entry to the same shared table in an external SQL database then you may
find the whole system slows to a crawl on the locking for that table).
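
Again, if you want to simulate that without a full-blown test harness, a rough
sketch along the same lines (same made-up host and port; it assumes a client
stack that lets you hang on to a negotiated session and offer it again, which
Python's ssl module does) might look like this - each "user" does one full
handshake and then slams away with resumptions:

import socket
import ssl
import threading

TEST_HOST = "test.example.com"   # made-up test server
TEST_PORT = 443
USERS = 50                       # the "few" users
RESUMES_PER_USER = 200           # how hard each one slams away

def resuming_client():
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE         # test rig only
    session = None                          # first pass: full handshake
    for _ in range(RESUMES_PER_USER + 1):
        with socket.create_connection((TEST_HOST, TEST_PORT)) as raw:
            with ctx.wrap_socket(raw, server_hostname=TEST_HOST,
                                 session=session) as tls:
                tls.sendall(b"GET / HTTP/1.0\r\n\r\n")
                tls.recv(4096)
                session = tls.session       # offer this session next time round

threads = [threading.Thread(target=resuming_client) for _ in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()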

Then there's "user-friendly" testing you might consider - ie. in a "normal"
situation (where the system *is* finally operating at the loads and profiles you
predicted), what individual latencies do the clients experience? Many server
architectures use distribution to scale up (here, distribution covers some
things you may not normally call "distributed" - eg. using external databases,
nfs-mounted file systems, CGIs that do anything network-related, LDAP lookups,
remote authentication, etc). This is usually done to spread the work around more
than one system to increase scalability and redundancy - but it usually incurs a
latency penalty of some kind on each individual request. A server architecture
that can sustain your largest possible throughput and still have cycles left to
burn is completely useless if each client's request takes 10 seconds to return
and 2 seconds is the longest you (or your client) are prepared to accept.
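
Measuring that is the easy bit - the trick is to record each request's latency
rather than just the aggregate throughput, and then look at the worst cases,
not the average. A rough sketch, with the same made-up host and port and an
arbitrary 2-second budget:

import socket
import ssl
import statistics
import time

TEST_HOST = "test.example.com"   # made-up test server
TEST_PORT = 443
REQUESTS = 100
LATENCY_BUDGET = 2.0             # the longest (seconds) you're prepared to accept

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # test rig only

latencies = []
for _ in range(REQUESTS):
    start = time.monotonic()
    with socket.create_connection((TEST_HOST, TEST_PORT)) as raw:
        with ctx.wrap_socket(raw, server_hostname=TEST_HOST) as tls:
            tls.sendall(b"GET / HTTP/1.0\r\n\r\n")
            tls.recv(4096)
    latencies.append(time.monotonic() - start)

latencies.sort()
print("median latency :", statistics.median(latencies))
print("worst latency  :", latencies[-1])
print("over budget    :", sum(1 for l in latencies if l > LATENCY_BUDGET))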

Just trying to help. :-)

Cheers,
Geoff



