Hi Praveen,

I notice both your tests actually seem to enqueue and dequeue messages
at the same time: since you commit per publish, and the single session
in use means the message listener will already be receiving a message
which then gets committed by the next publish, a message is left on
the queue at the end. So you might not be getting the precise number
you are looking for in the first test, but that doesn't really change
the relative results it gives.
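To make the interaction concrete, here is a minimal sketch (my own illustration, not your actual test code; the class name and method signature are assumptions) of a single transacted session shared between a publisher and an asynchronous listener, where each commit() after a send also commits the consumption of whatever the listener received since the previous commit:

```java
import javax.jms.*;

public class SharedSessionSketch {

    // Hypothetical illustration only: assumes an already-created
    // Connection and Queue, e.g. obtained via JNDI.
    public static void run(Connection connection, Queue queue) throws JMSException {
        // One transacted session used for BOTH publishing and consuming.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);

        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message msg) {
                // Receipt of this message is part of the shared session's
                // transaction; it is not settled until the session commits.
            }
        });

        MessageProducer producer = session.createProducer(queue);
        connection.start();

        for (int i = 0; i < 1000; i++) {
            producer.send(session.createTextMessage("msg-" + i));
            // This commit does two things at once: it commits the publish
            // of message i, AND it commits the consumption of any message
            // the listener received since the previous commit. So the test
            // is timing enqueue and dequeue together, and the last
            // delivered-but-uncommitted message stays on the queue.
            session.commit();
        }
    }
}
```

Note also that the JMS specification treats a Session as single-threaded, so mixing an async listener with sends on the same session from another thread is itself outside the spec.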

I didn't see quite the same disparity when I ran the tests on my box,
but the Derby store did still win significantly (giving ~2.3 vs 4.4 ms
and 350 vs 600 msg/s best cases), though there have been some changes
made on trunk since your runs to massively improve the transient
messaging performance of the Java broker, which may also have
influenced things here a little. Either way, although it makes the
test suite runs take significantly longer, it would seem that in
actual use the Derby store is currently noticeably faster in at least
some use cases. As I have said previously, our attention to the
performance of the Java broker has been lacking for a while, but we
are going to spend some quality time looking at performance testing
very soon now, and given the recent transient improvements we will
undoubtedly be looking at persistent performance going forward as
well.

Robbie

On 3 December 2011 00:45, Praveen M <[email protected]> wrote:
> Hi,
>
>    I've been trying to benchmark BerkeleyDB against Derby with the
> Java broker, to find which store is more performant.
>
> I have heard from earlier discussions that BerkeleyDB runs faster in the
> scalability tests of Qpid. However, some of my tests showed the contrary.
>
> I had setup BDB using the "ant build release-bin -Dmodules.opt=bdbstore
> -Ddownload-bdb=true" as directed in Robbie's earlier email in a similar
> topic thread.
>
> I tried running two tests in particular which are of interest to me
>
> Test 1)
> Produce 1000 messages to the broker in transacted mode such that after every
> enqueue you commit the transaction.
>
> The time taken to enqueue a message in transacted mode from the above test
> is approx 5-8 ms for derbyDb and about 18-25 ms in the case of BerkeleyDb.
>
>
> Test 2)
> Produce 1000 messages with auto-ack mode, with a consumer already setup for
> the queue.
> When the 1000th message is processed, calculate its latency by doing
> Latency =  (System.currentTimeInMillis() - message.getJMSTimeStamp()).
>
> Try to compute an *approximate* dequeue rate by doing
> numberOfMessageProcessed/Latency.
>
> In the above test, the results I got were such that,
>
> DerbyDb - 300 - 350 messages/second
> BDB - 40 - 50 messages/second
>
>
> I ran the tests against trunk (12/1).
>
> My Connection to Qpid has a max prefetch of 1 (as my use case requires this)
> and has tcp_nodelay set to true.
>
> I have attached the tests that I used for reference.
>
> Can someone please tell me if I'm doing something wrong in the above tests
> or if there is an additional configuration that I'm missing?
>
> Or are these results valid..? If valid, it will be great if the difference
> could be explained.
>
> Hoping to hear soon.
>
> Thank you,
> --
> -Praveen
>
>
> ---------------------------------------------------------------------
> Apache Qpid - AMQP Messaging Implementation
> Project:      http://qpid.apache.org
> Use/Interact: mailto:[email protected]
