Curious about how I should be benchmarking http://fluidengage.org. Is
there a particular URL to test? I figure
<http://fluidengage.org/engage/catalogue/browse.html?db=mccord_exhibitions&exhibitID=4&lang=en>
would be best, since it is the largest and most database-intensive view of the
data. That page is around 120 KB of HTML, not counting images, which come from
elsewhere.

Using a very non-scientific simulated load with Apache's ab tool, performance is
modest across concurrency levels. Each set of requests was run 5 times and the
results averaged to produce the numbers below. Time per request measures how long
each request thread took to complete; e.g. at 10 concurrent threads with 100
requests each, the average time per thread was 610 ms (see below).
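
For reference, here's a minimal sketch of the kind of harness used to average the
runs, assuming ab is on the PATH and that its "Requests per second" / "Time per
request" summary lines keep their usual wording; the -c/-n values and the parsing
are illustrative, not the exact invocation used here:

    # Rough sketch: run ab a few times against the browse view and average the
    # headline numbers. Flag values and output parsing are assumptions.
    import re
    import subprocess
    from statistics import mean

    URL = ("http://fluidengage.org/engage/catalogue/browse.html"
           "?db=mccord_exhibitions&exhibitID=4&lang=en")

    def run_ab(concurrency, requests):
        out = subprocess.run(
            ["ab", "-c", str(concurrency), "-n", str(requests), URL],
            capture_output=True, text=True, check=True).stdout
        rps = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
        # the first "Time per request" line is the per-thread mean, in ms
        tpr = float(re.search(r"Time per request:\s+([\d.]+)", out).group(1))
        return rps, tpr

    runs = [run_ab(10, 100) for _ in range(5)]
    print("mean requests/s:       ", round(mean(r[0] for r in runs), 2))
    print("mean time/request (ms):", round(mean(r[1] for r in runs), 2))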

Request duration is how long it took for 95% of all requests to complete;
e.g. -c50 -n100 works out to 5000 requests. For the most part, the last 5% of
requests did not take significantly longer than the first 95% (which is good).
Typically, as load scales up, the last 5% of requests can vary enormously, and the
last 1% can take 5-10 s or time out completely. That would be intolerable for a
user on a web page, so the fact that there is no significant drop-off for the
last few requests is a good thing indeed.
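
To make that 95% figure concrete, here's a tiny Python sketch of the calculation;
the list of per-request times is made up, but the same thing could be done with
ab's per-request timings (e.g. from its -g output):

    # Sketch: the "request duration" column is roughly the time below which
    # 95% of requests finished. times_ms is hypothetical sample data.
    def percentile_95(times_ms):
        ordered = sorted(times_ms)
        idx = max(0, int(0.95 * len(ordered)) - 1)
        return ordered[idx]

    times_ms = [112, 118, 120, 125, 131, 140, 150, 162, 480, 5200]
    print(percentile_95(times_ms))  # 480 -- the one 5200 ms straggler is in the last 5%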

There were some wild outliers that were removed from my sample, since these tests
were run on the live site. Ideally testing would go beyond -c50 -n100, but things
are unstable beyond a concurrency of 10: the app falls over and requires a Tomcat
restart. Different containers might fare better or worse; I'm not sure.

A cron job ran during the -c50 -n50 tests, and one outlier was removed (so that
sample is only 4 runs).

Without further ado, some basic statistics follow:

Concurrency:    Requests/s:     Time per request (ms):  95% request duration (ms):
-c1 -n1         8.94            112.78                  147.20
STDDEV:         0.83            11.7                    48.89

-c10 -n10       14.04           721.29                  721
STDDEV:         1.65            98.77                   98.77

-c10 -n100      16.44           610.66                  903.6
STDDEV:         1.15            45.3                    94.38

-c10 -n1000     17.2            581.43                  885.2
STDDEV:         0.23            7.77                    41.88

-c50 -n50       11.88           4270.13                 4255.5
STDDEV:         1.69            583.9                   588.93

-c50 -n100      12.22           4141.33                 6580.4
STDDEV:         1.42            552.9                   1634.76

Note that no JVM or CouchDB tuning has been done. I'm not good at statistics, so
if I've missed anything or done something completely wrong, I'd like to hear
about it.
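
In case it helps anyone check my numbers, the STDDEV rows can be reproduced with
something like the following; the five per-run values here are made up for
illustration, not the actual data behind the table:

    # Sketch: mean and sample standard deviation over the 5 runs of one test.
    from statistics import mean, stdev

    requests_per_sec = [16.1, 15.8, 17.9, 16.5, 15.9]  # hypothetical -c10 -n100 runs
    print(round(mean(requests_per_sec), 2), round(stdev(requests_per_sec), 2))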

TLDR:
Requests per second will likely never even reach 10 with the expected user base,
I'd imagine, so things are probably just fine as they are. But on many sites I've
worked on, that number can be upwards of 200/s with little variation in the
individual and cumulative request times, so there is plenty of room for
improvement :)

Regards, Jamon
