Hi Praveen,

Do you notice the same behavior even when you run the broker without the Derby message store? AFAIK this has nothing to do with the persistence store you use.
Thanks,
Danushka

On Wed, Oct 12, 2011 at 4:14 AM, Praveen M <[email protected]> wrote:
> Hi,
>
> I'm an Apache Qpid newbie and am trying to benchmark the Qpid Java Broker
> to see if it could be used for one of my use cases.
>
> My use case requires the ability to create at least 20K persistent queues
> and have them all running in parallel.
>
> I am using the DerbyMessageStore, as I understand that the default
> MemoryMessageStore is not persistent across broker restarts.
>
> I'm running the broker with a 4GB heap and QPID_OPTS set to
> -Damqj.read_write_pool_size=32 -Dmax_prefetch=1
>
> My test does the following:
>
> 1) Creates a queue and registers a listener on that queue. I do this up to
> 20K times for 20K distinct queues. I create the queues with the following
> options:
> {create: always, node: {type: queue, durable: true}}
> - This step goes quite fine. I was monitoring the memory usage during
> this step, and it almost always stayed stable around 500-800MB.
> 2) I produce messages for the queues (one message for each queue), and the
> messages are consumed by the handlers registered in step 1.
> - When this step starts, the memory usage shoots up and exhausts my
> 4GB heap altogether.
>
> Can someone please help me understand why I am seeing this kind of
> behavior?
>
> Also, can you please point out if I'm missing some setting or doing
> something completely wrong/stupid?
>
> Thanks,
> --
> -Praveen
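For anyone reproducing this, step 1's per-queue creation options can be sketched as a small address-string builder. Only the `create`/`node` options come from the mail above; the `bench-queue-N` naming scheme and the helper name are made up for this sketch, and a real client would pass each string to its destination constructor (e.g. the Qpid Java client's `AMQAnyDestination`) before registering a listener:

```java
public class QueueAddress {
    /**
     * Builds a Qpid address string asking the broker to create the named
     * queue on first use and make it durable -- the same options quoted
     * in step 1 above. The helper and naming scheme are hypothetical.
     */
    static String durableQueueAddress(String name) {
        return name + "; {create: always, node: {type: queue, durable: true}}";
    }

    public static void main(String[] args) {
        // Step 1 of the benchmark would loop this up to 20K times,
        // creating one destination (and one listener) per address.
        for (int i = 0; i < 3; i++) {
            System.out.println(durableQueueAddress("bench-queue-" + i));
        }
    }
}
```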
