On 03/18/2011 05:54 PM, fadams wrote:
Hi again,
This still seems to be blowing up on me I'm afraid....

Given the last bit of advice from Gordon Sim (thanks for all your help so
far Gordon!!) I tried:

qpidd --default-queue-limit 0

Then I added my queue:

qpid-config add queue myqueue --durable --max-queue-count 50000 \
    --file-size 5000 --file-count 16 --limit-policy ring

to create a queue of roughly 5GB
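As a sanity check on the arithmetic (assuming --file-size is counted in 64 KiB pages, which is how I read the legacy store's journal options), the requested journal capacity works out as:

```shell
# 16 journal files x 5000 pages per file x 64 KiB per page
# (assumption: --file-size is specified in 64 KiB pages)
echo $((16 * 5000 * 64 * 1024)) bytes   # 5242880000 bytes, roughly 5 GB
```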

This seemed to be added OK, but when I fired up my producer (with no
consumers enabled) I got an exception thrown almost instantly (I guess on
the first flush).

qpidd says:

error Execution exception: resource-limit-exceeded: Policy exceeded on
myqueue, policy: size: unlimited; count: max=50000, current=0; type=ring
(qpid/broker/QueuePolicy.cpp:86)

Sorry! That is indeed another bug (https://issues.apache.org/jira/browse/QPID-3180). I've committed a fix to trunk for that. Apologies for the unhelpful advice I gave previously and thanks for your patience and perseverance here.

It looks very odd that qpidd says "Policy exceeded" and also "policy: size:
unlimited"



My (Java) producer says:

ItemProducer onException
ItemProducer: exception: Invalid Session
javax.jms.IllegalStateException: Invalid Session
        at
org.apache.qpid.client.BasicMessageProducer.checkPreConditions(BasicMessageProducer.java:550)
        at
org.apache.qpid.client.BasicMessageProducer.send(BasicMessageProducer.java:277)

After 12 messages.



I'd love to know what I'm doing wrong, and suggestions for qpid-config
options to get me a circular queue larger than available memory would be
very welcome indeed.

That isn't possible at present. What we really want for handling queues larger than the available memory is a proper paging solution. All we have at present, however, is a cheap hack that reuses the policy mechanism ('flow to disk') and releases message content from memory once the limit is reached. That means you can't combine the ring and flow-to-disk policies, and it also makes handling queues larger than memory quite slow. It also only really helps for large messages; a message 'handle' (including all headers) is still kept in memory for every message.
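For reference, the flow-to-disk hack described above is selected the same way as the ring policy (a sketch only; the queue name and size limit here are illustrative):

```shell
# Illustrative only: a size-limited durable queue that releases message
# content to disk once the limit is reached; note this policy cannot be
# combined with ring
qpid-config add queue bigqueue --durable \
    --max-queue-size 104857600 --limit-policy flow-to-disk
```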

The reason that I care about this is that I've got a real-time producer
client on an operational system, and I'm using qpid to work rather like a
data mart, with consumers able to connect in and suck up the stuff my
producer is sending. What I'd really like to avoid is the case where a
consumer client fails (or is too slow) and an exception is then thrown back
to the producer - the producer may be providing data to dozens of consumers
and doesn't especially care if one of them fails (that's their problem!!).
OTOH I would like to provide a reasonable level of buffering for consumers
to allow for reasonable consumer outages (which is why I'd like to be able
to support queues larger than my available memory).

It would be really bad in my system for a failing consumer to kill my
producer. The producer is pretty well written to handle exceptions and
reconnect, but with high-rate data and dozens of consumers I'm not keen to
have a failing consumer affect everything else on a high-value, critical
operational system.

I'd really appreciate any advice for handling this scenario - I'd have
thought that this would be a fairly common sort of pattern??

Yes, it is, and the ring queue policy is currently the solution. It's just that the support for very large queues (larger than available memory) is not really adequate, and is not orthogonal to the solution for ring queues.

Does the data published become irrelevant in any way? E.g. does it expire after some time (TTL may be useful) or do later updates invalidate earlier messages (where perhaps an LVQ might help)...
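If the later-updates-invalidate-earlier case applies, an LVQ can be declared along these lines (a sketch; the queue name is illustrative, and --order lvq with the qpid.LVQ_key message header is how the 0.x C++ broker exposed LVQs, so check qpid-config --help for your version):

```shell
# Illustrative only: a last-value queue, where a new message replaces any
# older queued message carrying the same 'qpid.LVQ_key' header value
qpid-config add queue latest-updates --order lvq
```

TTL, by contrast, is set per message by the producer; in JMS that is MessageProducer.setTimeToLive(milliseconds).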

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:[email protected]
