Thanks for writing, Robbie. That explains it.

On Wed, Jan 4, 2012 at 1:11 PM, Robbie Gemmell <robbie.gemm...@gmail.com> wrote:

> Hi Praveen,
>
> I was using the head of trunk at the time of sending the message, and
> was testing with your test classes. Persistent messaging performance
> is almost entirely dependent on your storage, so beyond a certain
> point you won't really see any difference with varying memory or CPU
> resources.
>
> I ran the tests on a 3.5 year old Ubuntu virtual machine assigned 2
> threads and 1.25GB of RAM, running on an underlying quad core box with
> 8GB of RAM running Windows 7. The probable reason it performed faster
> is that its storage was held on a (2.5 year old) SSD.
>
> Rob has done some work on trunk now to improve persistent messaging
> performance a bit; it's probably worth running your tests again with
> that. I can't currently run the tests on the machine I used previously,
> as recent hurricane-level winds have left me without power or
> telephone lines at home for the immediate future :(  There are some
> other changes we expect would improve performance further, which we
> are likely to look at in future, but they will require much more
> significant changes to be made.
>
> Robbie
>
> On 19 December 2011 18:57, Praveen M <lefthandma...@gmail.com> wrote:
> > Hi Robbie,
> >
> >             I tried grabbing the latest changes and re-running my
> > tests. I didn't see the numbers that you mentioned in your mail. :(
> > It kinda remains at what I had mentioned in my earlier email.
> >
> > Can you please tell me which changelist# you ran against so that I
> > can try again?
> >
> > I'm running with 4GB of memory allocated to the Broker and don't see
> > any resource constraints in terms of memory and CPU.
> > My test is on a box with 12GB of RAM and 12 CPU cores.
> >
> > I think I might be missing something. Did you do any specific setting
> > changes to your broker config, and were the results that you posted from
> > running the tests that I emailed?
> >
> > Thanks,
> > Praveen
> >
> > On Mon, Dec 19, 2011 at 10:45 AM, Praveen M <lefthandma...@gmail.com>
> wrote:
> >
> >> Hi Robbie,
> >>
> >> Thank you for the mail. I will grab the latest changes to pick up
> >> the recent performance tweaks and run my tests over again.
> >>
> >> Yep, I made the test enqueue and dequeue at the same time, as I was
> >> trying to simulate something close to how it'd work in production.
> >> I do know that the dequeue throughput rate is not a very accurate
> >> one. :) But yeah, like you said, all I was trying to check is which
> >> one performs better, Berkeley or Derby.
> >>
> >> Given that Derby outperforms Berkeley for some use cases, what would
> >> be your recommendation for a persistent store? I understand that
> >> Berkeley is used more widely than Derby in production by various
> >> users of Qpid. Would that mean Berkeley can be expected to be a more
> >> robust product, as it might have been tested more thoroughly?
> >>
> >> Would you have a recommendation for picking one over the other as the
> >> MessageStore?
> >>
> >> Thanks to you and the rest of the team for the work you are putting
> >> into performance tuning the product.
> >> -
> >> Praveen
> >>
> >>
> >> On Sun, Dec 18, 2011 at 6:31 PM, Robbie Gemmell
> >> <robbie.gemm...@gmail.com> wrote:
> >>
> >>> Hi Praveen,
> >>>
> >>> I notice both your tests actually seem to enqueue and dequeue
> >>> messages at the same time (since you commit per publish, and the
> >>> message listeners will already be receiving a message which then
> >>> gets committed by the next publish due to the single session in use,
> >>> leaving a message on the queue at the end), so you might not be
> >>> getting the precise number you are looking for in the first test,
> >>> but that doesn't really change the relative results it gives.
> >>>
> >>> I didn't see quite the same disparity when I ran the tests on my
> >>> box, but the Derby store did still win significantly (giving ~2.3 vs
> >>> 4.4ms and 350 vs 600msg/s best cases), though there have been some
> >>> changes made on trunk since your runs to massively improve transient
> >>> messaging performance of the Java broker, which may also have
> >>> influenced things here a little. Either way, although it makes the
> >>> test suite runs take significantly longer, it would seem that in
> >>> actual use the Derby store is currently noticeably faster in at
> >>> least some use cases. As I have said previously, our attention to
> >>> performance of the Java broker has been lacking for a while, but we
> >>> are going to spend some quality time looking at performance testing
> >>> very soon now, and given the recent transient improvements we will
> >>> undoubtedly be looking at persistent performance going forward as
> >>> well.
> >>>
> >>> Robbie
> >>>
> >>> On 3 December 2011 00:45, Praveen M <lefthandma...@gmail.com> wrote:
> >>> > Hi,
> >>> >
> >>> >    I've been trying to benchmark BerkeleyDB against DerbyDB with
> >>> > the Java broker to find which DB is more performant.
> >>> >
> >>> > I have heard from earlier discussions that BerkeleyDB runs faster
> >>> > in the scalability tests of Qpid. However, some of my tests showed
> >>> > the contrary.
> >>> >
> >>> > I had set up BDB using "ant build release-bin -Dmodules.opt=bdbstore
> >>> > -Ddownload-bdb=true" as directed in Robbie's earlier email in a
> >>> > similar topic thread.
> >>> >
> >>> > I tried running two tests in particular which are of interest to me:
> >>> >
> >>> > Test 1)
> >>> > Produce 1000 messages to the broker in transacted mode such that
> >>> > after every enqueue you commit the transaction.
> >>> >
> >>> > The time taken to enqueue a message in transacted mode in the
> >>> > above test is approx 5-8 ms for DerbyDB and about 18-25 ms in the
> >>> > case of BerkeleyDB.
> >>> >
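The per-message commit timing in Test 1 can be sketched with a minimal, broker-free harness. This is only an illustration of the measurement loop: `publishAndCommit` is a hypothetical stand-in for the real JMS `producer.send(message)` followed by `session.commit()`, which is where the store's disk sync would dominate.

```java
public class EnqueueTimingSketch {

    // Hypothetical stand-in for the real transacted publish:
    //   producer.send(message); session.commit();
    // In a real run this call would block on the message store's disk sync.
    static void publishAndCommit() {
    }

    // Measure the average wall-clock time per publish+commit, in milliseconds.
    static double measureAverageMs(int messages) {
        long start = System.nanoTime();
        for (int i = 0; i < messages; i++) {
            publishAndCommit();
        }
        return (System.nanoTime() - start) / 1_000_000.0 / messages;
    }

    public static void main(String[] args) {
        double perMessageMs = measureAverageMs(1000);
        System.out.println("avg per-message commit latency: " + perMessageMs + " ms");
    }
}
```

With the stub replaced by real JMS calls against each store, the per-message figure is what the 5-8 ms (Derby) vs 18-25 ms (BDB) numbers above refer to.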
> >>> >
> >>> > Test 2)
> >>> > Produce 1000 messages in auto-ack mode, with a consumer already
> >>> > set up for the queue.
> >>> > When the 1000th message is processed, calculate its latency as
> >>> > Latency = (System.currentTimeMillis() - message.getJMSTimestamp()).
> >>> >
> >>> > Try to compute an *approximate* dequeue rate as
> >>> > numberOfMessagesProcessed/Latency.
> >>> >
> >>> > In the above test, the results I got were as follows:
> >>> >
> >>> > DerbyDB - 300-350 messages/second
> >>> > BDB - 40-50 messages/second
> >>> >
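The approximation in Test 2 is just the processed-message count divided by the last message's observed end-to-end latency. A minimal sketch of the arithmetic (the input figures are hypothetical, not taken from the test runs above):

```java
public class DequeueRateSketch {

    // Approximate dequeue rate as described in Test 2: messages processed
    // divided by the last message's latency, converted to per-second units.
    static double approxRatePerSecond(int messagesProcessed, long latencyMs) {
        return messagesProcessed / (latencyMs / 1000.0);
    }

    public static void main(String[] args) {
        // Hypothetical figures: 1000 messages, the last one observed
        // 3200 ms after its JMSTimestamp was set by the producer.
        double rate = approxRatePerSecond(1000, 3200);
        System.out.println("approx dequeue rate: " + rate + " msg/s");
    }
}
```

Note this folds enqueue time into the denominator (the JMSTimestamp is set at send), which is one reason it is only a rough relative measure, as acknowledged above.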
> >>> >
> >>> > I ran the tests against trunk (12/1).
> >>> >
> >>> > My connection to Qpid has a max prefetch of 1 (as my use case
> >>> > requires this) and has tcp_nodelay set to true.
> >>> >
> >>> > I have attached the tests that I used for reference.
> >>> >
> >>> > Can someone please tell me if I'm doing something wrong in the
> >>> > above tests or if there is an additional configuration that I'm
> >>> > missing?
> >>> >
> >>> > Or are these results valid? If valid, it would be great if the
> >>> > difference could be explained.
> >>> >
> >>> > Hoping to hear soon.
> >>> >
> >>> > Thank you,
> >>> > --
> >>> > -Praveen
> >>> >
> >>> >
> >>> > ---------------------------------------------------------------------
> >>> > Apache Qpid - AMQP Messaging Implementation
> >>> > Project:      http://qpid.apache.org
> >>> > Use/Interact: mailto:users-subscr...@qpid.apache.org
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> -Praveen
> >>
> >
> >
> >
> > --
> > -Praveen
>
>
>


-- 
-Praveen
