Hello,
I have a use case where I need to create hundreds of thousands of queues, each one subscribed to a different topic
(so I have as many topics as queues).
I set up a test with a single producer that publishes each message to a randomly chosen topic, and a receiver that
retrieves the data from the queues (and throws it away).
I'm using the JMS API, and doing the obvious thing makes the throughput drop dramatically: from 10k msg/sec with a
single topic/queue (around the maximum my network adapter can sustain) down to 20 msg/sec with 100k topics/queues.
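To be concrete, the receiver side does roughly the following (just a sketch: the broker URL, the queue naming scheme
and the counts are placeholders):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SingleSessionReceiver {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and queue naming scheme.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // One consumer per queue, all on the same session; messages are simply discarded.
        MessageListener discard = new MessageListener() {
            public void onMessage(Message message) {
                // throw the data away
            }
        };
        for (int i = 0; i < 100000; i++) {
            session.createConsumer(session.createQueue("queue." + i))
                   .setMessageListener(discard);
        }

        connection.start();
        Thread.currentThread().join(); // keep receiving until the process is killed
    }
}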
I found out that I can recover the performance by using more JMS sessions and connections, e.g. creating 4 connections
with 100 sessions each and randomly distributing the receiving queues across them.
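That workaround looks roughly like this (again only a sketch; I use a round-robin assignment here instead of a random
one to keep it short):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class MultiSessionReceiver {
    public static void main(String[] args) throws Exception {
        final int CONNECTIONS = 4;
        final int SESSIONS_PER_CONNECTION = 100;
        final int QUEUES = 100000;

        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        // 4 connections x 100 sessions = 400 sessions in total.
        Connection[] connections = new Connection[CONNECTIONS];
        Session[] sessions = new Session[CONNECTIONS * SESSIONS_PER_CONNECTION];
        for (int c = 0; c < CONNECTIONS; c++) {
            connections[c] = factory.createConnection();
            for (int s = 0; s < SESSIONS_PER_CONNECTION; s++) {
                sessions[c * SESSIONS_PER_CONNECTION + s] =
                        connections[c].createSession(false, Session.AUTO_ACKNOWLEDGE);
            }
        }

        MessageListener discard = new MessageListener() {
            public void onMessage(Message message) {
                // throw the data away
            }
        };

        // Spread the receiving queues over the sessions (round-robin here for brevity).
        for (int q = 0; q < QUEUES; q++) {
            Session session = sessions[q % sessions.length];
            session.createConsumer(session.createQueue("queue." + q))
                   .setMessageListener(discard);
        }

        for (Connection connection : connections) {
            connection.start();
        }
        Thread.currentThread().join(); // keep receiving until the process is killed
    }
}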
This, however, is less than ideal: the JMS client creates a thread for each session, and I definitely don't want 400
threads receiving data. The work I have to do is CPU-bound, and I don't want to waste time on context switching when
2-4 threads would suffice.
Why does the throughput drop so badly with many topics/queues? Why does adding sessions help? Am I overlooking
something, or doing something wrong?
Thanks
Flavio