There are two angles to this: with a large cache and without. Priority is implemented on read, i.e. messages are reordered before dispatch. Without a cache (with a large message backlog) that means multiple seeks against the store.
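To make that concrete, something like the following embedded-broker sketch (ActiveMQ 5.x Java API; the destination pattern and memory figures are purely illustrative) shows the knobs involved — prioritizedMessages on the destination policy plus enough cursor cache that the reordering stays in memory:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class PriorityBrokerSketch {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("priorityBroker");
            broker.setPersistent(true);              // KahaDB by default

            // Per-destination policy: enable priority support and keep the
            // cursor cache on, so reordering happens in memory as messages
            // arrive rather than via store reads/seeks on dispatch.
            PolicyEntry policy = new PolicyEntry();
            policy.setQueue(">");                        // all queues (illustrative)
            policy.setPrioritizedMessages(true);         // turn on priority ordering
            policy.setUseCache(true);                    // cache messages for the cursor
            policy.setMemoryLimit(256 * 1024 * 1024);    // per-destination limit (illustrative)

            PolicyMap policyMap = new PolicyMap();
            policyMap.setDefaultEntry(policy);
            broker.setDestinationPolicy(policyMap);

            // Overall broker memory: the more of the backlog that fits here,
            // the less the store has to re-sort on read.
            broker.getSystemUsage().getMemoryUsage().setLimit(512L * 1024 * 1024);

            broker.start();
            broker.waitUntilStopped();
        }
    }
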
Throwing memory at the broker will help: the larger the message cache, the better, so that the priority support in the cursors does the reordering (effectively on write) as each message is cached. KahaDB behaves like that and, as Rob says, that reflects the layering. The JDBC store lets the underlying store do the ordering on read, so it may show better scalability with larger backlogs thanks to its priority index. I guess memory resources on the backend will play a large part there as well.

On 8 November 2012 01:26, sdonovan_uk <[email protected]> wrote:
> What is the expectation for queue size and performance when using
> prioritization?
>
> On a queue that has prioritization enabled, if you have more than 60,000
> messages, performance **plummets** -- approximately 10ms for each additional
> 1000 messages (timed on a modern Windows server).
>
> I find that odd, given that prioritization could be (mostly?) handled on
> queue-write, not read.
>
> Is this expected, or a bug?
>
> Sean
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Queue-prioritization-tp4658991.html
> Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.

--
http://redhat.com
http://blog.garytully.com
