So the "simplest" solution right now would seem to be to use the client in 0-9-1 mode with the broker you have, unless that causes you a lot of issues... more recent clients (e.g. trunk - what will become 0.32) should still work with 0.16 (I haven't personally tested, but there really shouldn't be any reason why they would not). Is there a reason that this won't work for you?
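For what it's worth, with the legacy Qpid Java (AMQP 0-x) client the negotiated protocol version can usually be pinned with a JVM system property rather than a code change. A sketch only - the property name and accepted values ("0-91" here) should be verified against the documentation for your client version, and the application name is a placeholder:

```
# Force the client to speak AMQP 0-9-1 instead of negotiating 0-10
java -Dqpid.amqp.version=0-91 ... your.Application
```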
If trying to stick with AMQP 0-10, I think the obvious code change to the broker would also need a code change client side... (to cope with messages being repossessed, or simply assigned with a lease). It may be possible to code a client library side change without changing the broker (basically reduce consumer credit to 0 as soon as one consumer has a message, and release any messages that have been prefetched), but that probably isn't a trivial piece of work.

The only pure broker side change I can think of that wouldn't require a client library change (but might impact your application design) is to allow a single consumer to consume from multiple queues (i.e. you would have a single consumer which is associated with all your hundreds of queues, thus issuing one consumer credit will get you one message from one of the possible queues). This is something I want to add anyway, but it'd most likely be something added to the current broker code and not easy to backport (the broker internals have changed a bit since 0.16 in how consumers are represented).

-- Rob

On 24 August 2014 22:03, John Buisson <[email protected]> wrote:

> Prediction might be possible, but it would certainly not be 100% and we'd
> still get the large spikes. We have some message types that are
> consistently long, while others that would mix long and not long.
>
> Once we hit this problem, I was anticipating this being an effort to fix.
> We are essentially blocked at this point from continuing forward with
> QPID. Our users will absolutely not accept the latency spikes, so
> upgrading the broker and finding a solution is preferable to having to
> stop. A change we could make to our very old version (0.16) would
> obviously be the simplest, but I'm much more interested in finding some
> kind of solution for now.
>
> I guess I should also note that forking off QPID and forcing a lack of
> caching just on our branch is also an option.
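To make the credit/prefetch discussion concrete, here is a deliberately crude toy model (plain Python, not Qpid code) of why prefetching causes the latency spikes: it assumes the broker round-robins messages into two consumers' prefetch buffers up front, whereas with credit issued one message at a time each message goes to whichever consumer frees up first. All numbers and the two-consumer setup are made up for illustration:

```python
def simulate(costs, prefetch):
    """costs[i] = processing time of message i, in enqueue order.
    Two consumers; returns the completion time of each message."""
    clock = [0.0, 0.0]              # when each consumer next becomes free
    finish = [0.0] * len(costs)
    if prefetch:
        # With prefetch, the broker eagerly round-robins messages into
        # each consumer's local buffer; once buffered, a message cannot
        # move to the other consumer even if that consumer goes idle.
        for m in range(len(costs)):
            c = m % 2
            clock[c] += costs[m]
            finish[m] = clock[c]
    else:
        # Without prefetch (one unit of credit at a time), each message
        # is dispatched to whichever consumer becomes free first.
        for m in range(len(costs)):
            c = 0 if clock[0] <= clock[1] else 1
            clock[c] += costs[m]
            finish[m] = clock[c]
    return finish

# One hour-long job followed by three one-second jobs:
costs = [3600, 1, 1, 1]
print(simulate(costs, prefetch=1))  # [3600.0, 1.0, 3601.0, 2.0]
print(simulate(costs, prefetch=0))  # [3600.0, 1.0, 2.0, 3.0]
```

In the prefetch case the third message sits in the busy consumer's buffer and finishes after 3601 seconds; without prefetch it completes in 2, which is exactly the "stuck behind a long running message" behaviour described in the thread.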
> It would break the protocol, but the protocol seems to be the problem
> with how we use it. Not something I want to jump in to, but an option.
> We will see if we can play with the mix and match protocol in both 0.16
> and the latest version. That seems like the least-painful option so far.
>
> Thanks for the info, we really appreciate it :)
>
> John
>
>
> On Sun, Aug 24, 2014 at 12:44 PM, Rob Godfrey <[email protected]>
> wrote:
>
> > Hi John,
> >
> > I can't immediately think of any elegant solutions (or really many
> > inelegant ones) which wouldn't require a fairly significant change in
> > your application design.
> > (About the best I can think of is that if you can anticipate the
> > amount of processing time a particular message is going to take once
> > you receive it, you reconfigure your client to close any consumers on
> > other queues and only reestablish after you have processed the
> > message. (Note - I'd need to check if in 0-10 closing the consumer
> > actually returns prefetched messages, I know in the 0-9-1 code it
> > doesn't actually return messages until you close the session...).
> >
> > In general is it the case that messages on a given queue take a
> > predictable amount of time (i.e. that there are some queues for which
> > every message is going to take an hour to process, whereas for others
> > all messages will only take milliseconds) or is it the case that the
> > monster messages are distributed across many queues which might also
> > hold millisecond jobs?
> >
> > Other than that, as discussed, changing the consuming client to use
> > the 0-9-1 protocol will give you session level flow control. The
> > current trunk code (as of about 30 minutes ago) should also support
> > the use of ADDR style addresses (i.e. the address style that could
> > only previously be used in 0-10).
> >
> > I'm certainly going to spend some time thinking about changes that we
> > in the Qpid development community can make in either the client or the
> > broker that could work around this problem for you... but I'm not sure
> > I have any immediate answers there (and I guess upgrading the broker
> > is probably a big change to ask you to take on).
> >
> > -- Rob.
> >
> >
> > On 24 August 2014 16:23, John Buisson <[email protected]> wrote:
> >
> > > We are having some pretty major problems with this, so any advice
> > > you can give would be appreciated. We have an extremely diverse
> > > group of 450+ types of messages. They range from a few ms processing
> > > time to several hours and we isolate them by queue. With this setup,
> > > we are hitting problems where a high throughput message gets "stuck"
> > > behind a long running message. This can give us spikes of hours on
> > > our dequeue latency where the only good reason for it is the caching
> > > of the server... We asked a pretty specific question, but any
> > > thoughts on how we could work around the larger issue would be very
> > > much appreciated!
> > >
> > > John
> > >
> > >
> > > On Sat, Aug 23, 2014 at 3:36 AM, Rob Godfrey <[email protected]>
> > > wrote:
> > >
> > > > For information, if you use a mixture of clients using AMQP
> > > > 0-8/0-9/0-9-1 (which are all substantially the same protocol) and
> > > > AMQP 0-10 (which is a bit different) then the Java Broker should
> > > > be able to translate automatically between them, allowing messages
> > > > sent from one protocol to be received by the other. As long as you
> > > > are using standard JMS any such translation should be pretty much
> > > > invisible. If you are doing non-JMS things like sending Lists as
> > > > values in the application headers then you may run into issues.
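On the "session level flow control" point above: with the legacy Java client the usual knob for this is the maxprefetch connection option (there is also a max_prefetch system property). A sketch only, assuming the 0.16-era connection URL syntax - the credentials, vhost and host names are placeholders, and setting the value to 1 limits each consumer to a single unacknowledged message rather than disabling prefetch entirely:

```
amqp://guest:guest@clientid/vhost?brokerlist='tcp://broker-host:5672'&maxprefetch='1'
```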
> > > > The AMQP 0-9(-1) <-> AMQP 0-10 conversion in the 0.30 version of
> > > > the broker has been improved and should deal with this case and a
> > > > few others.
> > > >
> > > > As you've discovered the 0-8/9/9-1 codepath doesn't currently
> > > > support the "ADDR" addressing syntax... Unfortunately the current
> > > > implementation of that is somewhat mixed in with 0-10 specific
> > > > features.
> > > >
> > > > -- Rob
> > > >
> > > >
> > > > On 23 August 2014 09:09, xiaodan.wang <[email protected]>
> > > > wrote:
> > > >
> > > > > Thanks Robbie & Rob! Was able to use your suggestion to force
> > > > > the client to use AMQP 0-9, will re-run our tests to validate
> > > > > session-wide prefetching.
> > > > >
> > > > > @Vijay, unfortunately ran into "The new addressing based sytanx
> > > > > is not supported for AMQP 0-8/0-9 versions" exception when
> > > > > trying to create a consumer using AMQP 0-9. Will get it sorted
> > > > > out tomorrow :)
> > > > >
> > > > >
> > > > > --
> > > > > View this message in context:
> > > > > http://qpid.2158936.n2.nabble.com/Re-1-Queue-with-2-Consumers-turn-off-pre-fetching-tp6934582p7612411.html
> > > > > Sent from the Apache Qpid users mailing list archive at Nabble.com.
> > > > >
> > > > > ---------------------------------------------------------------------
> > > > > To unsubscribe, e-mail: [email protected]
> > > > > For additional commands, e-mail: [email protected]
> > > > >
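For reference, the two destination address styles under discussion look like this in a JNDI properties file. A sketch only - the queue name, exchange and routing key are made up, and whether ADDR works on the 0-8/0-9 path depends on the client version, per the exception quoted above:

```
# BURL style -- accepted on the 0-8/0-9/0-9-1 codepath
destination.fastQueue = BURL:direct://amq.direct//fast.queue?routingkey='fast.queue'

# ADDR style -- historically 0-10 only; per this thread, supported on
# 0-9-1 from trunk (what will become 0.32) onward
destination.fastQueueAddr = ADDR:fast.queue; {create: always}
```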
