Hi there,

On Wed, 4 Apr 2001 [EMAIL PROTECTED] wrote:

> Thanks for this utility, it works well for me in testing SSL acceleration
> cards (which I still haven't completed. What a slacker). In fact, this
> program works so well, I could give you a big sloppy kiss, but I'll refrain
> from doing so. However, if you were female...

Thanks, that won't be necessary. :-) Of course, if you spot any problems, or
just feel like improving the code or docs (the latter being especially easy to
do), that would be very welcome.

> Just to show how dense I can be, does the setting for users and requests
> mean that it simulates x users making y requests, ie xy (or x*y depending on
> your notation preferences) requests?

Umm ... users? "num" refers to concurrency, if that's what you meant. IIRC, the
requests limit is the total number of requests, independent of the level of
concurrency you're using (i.e. 1000 requests is absolute, whether they're done
one after another or 10 at a time). However, I cared less about that stuff when
I was developing it - the most useful test, for me at least, is leaving it
running indefinitely. The "-updates" and "-csv" switches give me the stats, and
by leaving it going I can do things like starting up multiple copies, possibly
using multiple test client machines (and/or the same machine going through
different network interfaces, etc). It's also useful, of course, to script it so
all the tests start simultaneously, but by leaving it going you can be sure you
get the stats you require before terminating. Then you suck all those "csv"
files into Excel or something and start drawing pretty graphs.

> Thanks again. I'll be putting my results of the Rainbow card on the list
> soon (provided I can get the kernel module to compile for an SMP kernel).

OK. It's also useful, BTW, for thrashing away and profiling session cache
characteristics. If you set a shmht cache size and timeout such that a full
"swamping" can fill the cache slightly under the expiry time, and you use the
"srrsr" session sequence (i.e. new session, resume, resume, new session, resume,
etc), you should notice that performance isn't consistent - in particular, every
"expiry" seconds the performance will take a nose-dive before picking up again.
You will also notice failed session resumes coming in little bursts (these also
correspond with an obvious slow-down, as every affected request is forced to
negotiate a new session). If you try the same with "shmcb", hopefully you'll
notice that it's generally slightly faster, but doesn't have those 'down'
moments, nor should it fail any resumes. It's recommended you use some
concurrency for this sort of testing though ... between 10 and 20 is a good
guide.
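
To make that failure pattern a bit more concrete, here's a toy Python model of
it. This is not mod_ssl code and all the numbers (expiry, capacity, request
rate) are made up - it just replays an "srrsr" pattern against a fixed-size
cache that only gets purged by an expiry sweep, and you can watch the failed
resumes arriving in bursts around the sweep boundaries:

#!/usr/bin/env python
# Toy model only: fixed-capacity session cache, purged by an expiry
# sweep every EXPIRY seconds, fed an "srrsr" pattern (2 new sessions
# and 3 resumes per 5 requests).  All figures are illustrative.
EXPIRY = 60        # seconds between expiry sweeps (assumed)
RATE = 10          # requests per simulated second (assumed)
CAPACITY = 220     # sessions the cache holds; fills in ~55s here
PATTERN = "srrsr"  # s = new session, r = resume the most recent one

cache = {}         # session id -> creation time
next_id = 0
current = None     # session the next 'r' would try to resume
failures = 0       # failed resumes in the current reporting window

for t in range(4 * EXPIRY):
    if t and t % EXPIRY == 0:
        # expiry sweep: drop everything older than EXPIRY seconds
        cache = {sid: born for sid, born in cache.items() if t - born < EXPIRY}
    for i in range(RATE):
        op = PATTERN[(t * RATE + i) % len(PATTERN)]
        if op == "s":
            next_id += 1
            current = next_id
            if len(cache) < CAPACITY:   # a full cache silently drops the store
                cache[current] = t
        elif current not in cache:      # resume miss -> full handshake
            failures += 1
    if (t + 1) % 10 == 0:
        print("t=%3ds  cached=%3d  failed resumes in last 10s: %d"
              % (t + 1, len(cache), failures))
        failures = 0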

NB: to work out the cache-size/timeout combination required to fill a cache
just under the expiry time, just run swamp with the "srrsr" sequence and
measure the average speed. A typical session (if you use a cipher suite like
RC4-SHA and no client cert) is around 130 bytes IIRC, so you can divide the
cache size by that to work out how many sessions it can hold (and lower the
number slightly to cover minor overheads, byte-alignment fragments, etc). Given
that 2/5 of the requests ("srrsr") create new sessions, you can work out how
many sessions are attempting to store themselves in the cache per second/minute
and adjust the timeout or size appropriately. You should find that when the
cache fills, but before the expiry timeout comes round, you start to get failed
session resumes. That expiry round happens approximately (n * 'expiry') seconds
after the very first access to the cache, BTW (for n=1,2,3...) - so don't
preview the site with a browser a minute before you start the "swamp"ing -
it'll still misbehave, but at a different (and weirder) time than you expect.
I.e. stop the server, start it, then launch your test.
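
To put rough numbers on that, here's the same back-of-envelope calculation in
Python - the cache size and request rate are made-up examples, and the 130-byte
figure is the per-session estimate mentioned above:

# Back-of-envelope version of the calculation above.
cache_bytes = 512 * 1024        # example cache size - made-up number
session_bytes = 130             # rough per-session size (RC4-SHA, no client cert)
overhead_factor = 0.9           # knock a bit off for alignment/overheads
requests_per_sec = 80.0         # example average speed from an "srrsr" run
new_session_fraction = 2.0 / 5  # "srrsr": 2 of every 5 requests are new sessions

capacity = int(cache_bytes / session_bytes * overhead_factor)
new_per_sec = requests_per_sec * new_session_fraction
seconds_to_fill = capacity / new_per_sec

print("cache holds roughly %d sessions" % capacity)
print("new sessions stored per second: %.1f" % new_per_sec)
print("cache fills after about %.0f seconds" % seconds_to_fill)
print("so aim for an expiry timeout slightly above %.0f seconds" % seconds_to_fill)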

Cheers,
Geoff

