On 5/4/06, Attila_Szegedi <[EMAIL PROTECTED]> wrote:
>
> In our use case, we're doing bulk processing most of the time, so the vast
> majority of the messages are normal priority. Every now and then though, we
> have a situation where we're processing for an interactive user (imagine a
> consumer at the end of a user interface waiting for our automated backend
> to respond, connecting to us as a result of a notification sent to him from
> a bulk work unit, needing user interaction to proceed). We need to make
> sure that the messages that belong to this realtime interaction with the
> user are expedited, for obvious consumer experience reasons. In such a
> situation, your suggestion is unfortunately suboptimal, as you implied that
> the ratio of high to low priority messages is higher than it is for us.
It doesn't really matter what the ratios are; with selectors you can have a
guaranteed group of consumers ready to process user interactions when they
come along.

Now, keeping a separate queue might work. Using selectors is logically
equivalent to using separate queues - though the runtime behaviour can
differ a little in practice on massive queues. It's a little easier on the
broker to implement efficiently if you use separate queues, as it's a little
harder to implement sparse consumption on big queues efficiently. But either
approach, selectors or separate destinations, will solve your problem.

> Although I can imagine - JMS spec is fuzzy on this - that if I have a
> single connection with a pool of sessions with message listeners installed
> in each one for both the queue named "NORM" and "HIGH", then the JMS broker
> will still dispatch to them in aggregate FIFO order of all queues that the
> connection has consumers for, so that again defeats the purpose.

I think you missed my point. You, the end user, can decide which processes
consume from which destination & selector and how many threads are used to
process the messages. So you could have (say) 1000 threads just processing
HIGH stuff and a separate pool of 500 threads processing NORM (or all the
rest). So at any point in time you are guaranteed twice the resources to
deal with HIGH messages.

Taking another approach: imagine you had 1000 threads processing HIGH and 1
thread processing NORM, and you had 1 million messages on the queue. The
1000 HIGH consumers would consume all the HIGH messages pretty quickly,
leaving any NORM messages on the queue - so fairly soon you'd have lots of
messages sitting on the queue which were just NORM; then as new messages
arrive, if they are HIGH they'd go to the HIGH consumers etc.

> I'm not saying this holds for ActiveMQ - I have no idea at this point, I'm
> just trying to avoid developing a solution that'll be affected by the
> broker implementation.

Using selectors or different queues will work on any JMS provider; relying
on JMS priority being implemented well is completely dependent on the JMS
provider, as few of them do it well and they will all behave differently,
since adding priority handling generally adversely affects performance.

> Of course, I could always keep up a second connection with only a few
> sessions consuming the high priority queue exclusively. With two
> connections, they're handled as two distinct clients by the broker, so
> they're independent of one another.

FWIW it doesn't matter that much how many connections you have; one or 10.
What's more important is how many sessions & consumers you have (i.e. how
many concurrent threads are used to process messages).

> It is however quite tedious to implement such a two-lane queuing system
> across the whole distributed system, which has a rather nontrivial number
> of interacting JVMs communicating over quite a lot of queues.

Not really - it's actually quite trivial to do. Just create a small XML
config file with Jencks and you can declaratively configure the sizes of the
thread pools and the level of concurrent consumption of each destination &
selector to suit your priority needs - then you can define exactly how many
concurrent threads will be used to process each slice of your traffic.

http://jencks.org/Message+Driven+POJOs
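
To make that concrete, here's roughly what the separate-queue flavour looks
like in plain JMS if you wire it up by hand (an untested sketch - the broker
URL, the scaled-down consumer counts and the listener bodies are just
placeholders, and in practice Jencks or a listener container would do this
wiring for you):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TwoLaneConsumers {

    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL - point this at your own broker.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();

        // One connection is enough; the concurrency comes from how many
        // sessions/consumers you create, not how many connections.
        startConsumers(connection, "HIGH", 10); // e.g. 1000 in the scenario above
        startConsumers(connection, "NORM", 5);  // e.g. 500 in the scenario above

        connection.start();
    }

    static void startConsumers(Connection connection, String queueName, int count)
            throws JMSException {
        for (int i = 0; i < count; i++) {
            // Each session with a listener is dispatched on its own thread,
            // so this gives 'count' consumers dedicated to this queue.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
            consumer.setMessageListener(message -> {
                // ... process a message from this lane ...
            });
        }
    }
}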
> It'd be *much* simpler to have JMS handle all of this for me if I set a
> priority on a message - that's what frameworks and middlewares are for,
> right? To take away and contain complexity from the stuff I have to
> write :-)

Giving a vague hint about which of two messages should be processed first,
if they happen to be next to each other in some buffer, doesn't offer that
much control to be honest. i.e. JMS priority handling is not that great a
feature even if JMS providers implemented it correctly (and took the
performance hit for doing so).

You have *lots* more control by deciding how many threads on which machines
are going to process a given slice of your traffic, together with deciding
which thread pools they will use and their levels of concurrency with other
slices of traffic etc. e.g. use those fancy fast T2000 boxes just for your
gold customers, and use those crappy ancient PCs for your bronze customers
and everything else.
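
For completeness, here's the selector flavour of the same idea - one shared
queue where the lane is picked by a selector on a message property. The
queue name, the 'lane' property and the selector strings are made up for
illustration; producers would have to set the property, and for real
concurrency you'd spread the consumers over many sessions as in the earlier
sketch:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class SelectorConsumers {

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("WORK"); // hypothetical shared queue

        // Only sees messages the producer flagged with
        // message.setStringProperty("lane", "HIGH").
        MessageConsumer highLane = session.createConsumer(queue, "lane = 'HIGH'");
        highLane.setMessageListener(message -> {
            // ... expedited, interactive-user processing ...
        });

        // Everything else, including messages with no 'lane' property at all.
        MessageConsumer bulkLane =
                session.createConsumer(queue, "lane <> 'HIGH' OR lane IS NULL");
        bulkLane.setMessageListener(message -> {
            // ... bulk processing ...
        });

        connection.start();
    }
}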
--
James
-------
http://radio.weblogs.com/0112098/