I've probably not given out very good instructions on how to do this, except
on internal wikis.
This works on the M2 branch (and possibly on trunk too, though I have not
checked for a long time).
cd to perftests.
> mvn install
> mvn uk.co.thebadgerset:junit-toolkit-maven-plugin:tkscriptgen assembly:assembly
This produces a zip file (and a tar.gz too) with a name ending in
-all-test-deps.zip or -all-test-deps.tar.gz. You could also build with
assembly:directory instead, which creates a directory under target containing
the zip file contents, with a name ending in -all-test-deps.dir.
Unpack this somewhere convenient. Due to Maven problems I have not got
around to fixing, you may have to rename
junit-toolkit-0.6-20070802.100404-14.jar (or equivalent) to
junit-toolkit-0.6-SNAPSHOT.jar.
Start a broker (configure it for a persistent message store if you want to
test persistent messaging).
Run some test scripts. For example Ping-Once.sh sends a single ping.
These scripts are all generated from parameters set up in the pom.xml in the
perftests directory. There is also a RunningPerformanceTests.txt readme,
that explains a lot of stuff.
As a quick example, the PQC-Qpid-01.sh test runs the following command:
> java -Xms256m -Xmx1024m \
    -Dlog4j.configuration=file:/c:/home/rupert/qpid/trunk/qpid/java/etc/mylog4j.xml \
    -Dbadger.level=warn -Damqj.test.logging.level=info -Damqj.logging.level=warn \
    -cp qpid-perftests-1.0-incubating-M2-SNAPSHOT.jar \
    uk.co.thebadgerset.junit.extensions.TKTestRunner -n PQC-Qpid-01 -d1M \
    -s[1000] -c[1,30],samples=30 -o $QPID_WORK/results -t testAsyncPingOk \
    org.apache.qpid.ping.PingAsyncTestPerf persistent=true pubsub=false \
    transacted=true commitBatchSize=100 batchSize=1000 messageSize=256 \
    destinationCount=1 rate=600 maxPending=1000000
This runs a P2P test, ramping up from 1 to 30 threads in 1 minute intervals,
each thread limited to 600 msgs/sec, over one queue per thread, committing
messages in batches of 100, outputting timings for every 1000 msgs, using 256
byte messages, with a safety limit of 1000000 bytes of un-received messages
on the broker at once. The purpose is to produce a nice graph that shows
when the saturation point is reached and how the broker handles itself
under higher demand.
When calling a script you can add more name=value options or parameters,
and they will override the settings in the script. So to run the same test
but using 10 minute intervals for each step-up and 1000 byte messages, do:
PQC-Qpid-01.sh -d10M messageSize=1000
The results of the tests are output into .csv files. Suggest you collate
them by doing:
find . -name '*.csv' -exec grep 'Total Tests:' {} \; >> summary.csv
The 'size' parameter is the number of messages sent per test. So if the test
throughput is 20 tests per second, and the size is 1000, you need to
multiply to get 20000 msgs/sec. Also, before you get really worried, timings
are in milliseconds, which is ridiculous I know. I really should change it
to seconds.
Load the .csv file in a spreadsheet and make some pretty graphs.
I have not done it yet, but you should be able to run the Java client
through the C++ broker and compare timings with the Java broker.
All of these tests send and receive on a single client machine. Hence they
are limited in their ability to pretend to be lots of clients. Also, the
pub/sub tests are pretty useless. They do a reasonable job of testing P2P
though.
There is also now a distributed testing framework, extending the interop
test stuff. Still a bit to do to complete it. Basically it will let you do
tests like above, but distributed across many nodes. It will also offer a
more comprehensive set of name=value parameters for testing more of the
protocol. Expect code for this in the weeks ahead, as well as an updated
interop/distributed testing spec, with all the gory details. The nice thing
about this is that Gordon suggested making the interop test clients dumb,
and doing as much work as possible in the coordinator. The result is that
the existing interop clients will not have to be made much more complicated
to handle these new tests. The coordinator has been extensively
re-engineered.
Rupert