[
https://issues.apache.org/jira/browse/QPID-5135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769410#comment-13769410
]
Gordon Sim commented on QPID-5135:
----------------------------------
I think the key symptom described here is the significant drop in throughput,
as observed by the receiver, when the ring queue is at its limit compared to
when it is empty. When the queue is empty, the receive rate is able to keep up
with the send rate. However, if the queue is allowed to fill up, the receive
rate drops, meaning the queue can never be cleared unless the send rate drops
significantly for a period.
With large messages I can indeed observe this behaviour. In this case,
reducing the capacity of the receiver seems to prevent the deterioration. For
example: send 2MB messages at a rate of 100 per second to a ring queue with a
250MB size limit, while receiving from it with ack-frequency 1 and capacity
200. If you restart the receiver, the rate it can achieve drops and the queue
remains at its limit, dropping messages. However, if the capacity is reduced
to 2 or even 20, the receiver is able to clear the queue on restart (the exact
numbers here may vary with platform).
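For illustration only, here is a minimal sketch of such a receiver using the
Python qpid.messaging API. The broker address, queue name and the exact
address options are assumptions, not taken from the test setup above:

# Hypothetical receiver sketch: fetch from a ring queue with a small prefetch
# capacity, acknowledging every message (ack-frequency 1). Broker address,
# queue name and queue limit are assumptions for illustration only.
from qpid.messaging import Connection

# Create the queue as a ring queue with a 250MB limit if it does not exist.
ADDRESS = """ring-queue; {
  create: always,
  node: {x-declare: {arguments: {'qpid.policy_type': 'ring',
                                 'qpid.max_size': 262144000}}}
}"""

connection = Connection("localhost:5672")
try:
    connection.open()
    session = connection.session()
    receiver = session.receiver(ADDRESS)
    # Reducing this from 200 to 20 (or even 2) is the tuning discussed above.
    receiver.capacity = 20
    while True:
        message = receiver.fetch()
        session.acknowledge(message)  # ack each message individually
finally:
    connection.close()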
With smaller messages this becomes harder to observe. If the send rate is
higher than the receive rate then obviously the queue fills up, so with an
ack-frequency of 1 the send rate has to be limited for smaller messages. For
example: send 50kB messages to the same ring queue, with the same receiver
attached, at a rate of 3500 per second (which on my test machine was about as
fast as the receiver could keep up with), then stop the receiver and allow the
queue to fill up and start dropping messages. On restarting, the receiver is
able to clear the backlog, i.e. a corresponding drop in receive rate is not
observable here.
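A corresponding sender sketch, again using the Python qpid.messaging API with
placeholder names, pacing 50kB messages to roughly 3500 per second (the simple
sleep-based pacing is an assumption, not the method used in the test above):

# Hypothetical rate-limited sender: pushes ~50kB messages at roughly a fixed
# rate to the same ring queue. Names and the sleep-based pacing are
# assumptions for illustration only.
import time
from qpid.messaging import Connection, Message

QUEUE = "ring-queue"       # assumed queue name (created by the receiver sketch)
RATE = 3500                # target messages per second
PAYLOAD = "x" * 50 * 1024  # ~50kB payload

connection = Connection("localhost:5672")
try:
    connection.open()
    session = connection.session()
    sender = session.sender(QUEUE)
    interval = 1.0 / RATE
    while True:
        start = time.time()
        sender.send(Message(content=PAYLOAD))
        # Crude pacing: sleep away whatever is left of this message's slot.
        remaining = interval - (time.time() - start)
        if remaining > 0:
            time.sleep(remaining)
finally:
    connection.close()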
For 250kB messages, the drop in throughput on my test machine is from about
~900 messages per second when the queue is empty to ~500 messages per second
when it is full. Again, reducing the receiver's capacity from 200 to 20 means
the receiver can clear the backlog (it can also keep up with a rate of ~1000
msgs/sec on an empty queue).
Summary:
In a system where large messages are sent through a ring queue, the receive
rate drops significantly when the queue is full compared to when it is empty.
This degradation decreases as message size decreases. The cause is not yet
fully understood.
Again, for large messages, reducing the receiver's capacity can improve the
receive rate. The improvement is even more marked when receiving from a full
ring queue. So tuning may allow some affected systems to work around the issue
until a fix is identified.
> System stalling
> ---------------
>
> Key: QPID-5135
> URL: https://issues.apache.org/jira/browse/QPID-5135
> Project: Qpid
> Issue Type: Bug
> Components: C++ Broker
> Affects Versions: 0.22
> Environment: RHEL5
> Reporter: Jimmy Jones
> Attachments: rx-test.pl, tx-test.pl
>
>
> See threads on qpid-user:
> http://qpid.2158936.n2.nabble.com/System-stalling-tp7597317.html and
> http://qpid.2158936.n2.nabble.com/Re-System-stalling-td7597940.html
> See attached scripts which reproduce the issue. Running rx-test.pl followed
> by tx-test.pl results in a system where the receiver can keep up with the
> producer (gets a message every <1s) (tx-test 118% CPU, qpidd 97% CPU, rx-test
> 60% CPU). However, if you stop rx-test and restart it (even after only a
> second or so), it starts to take 2s+ to receive messages, going up to about
> 6s on my system, so the ring quickly fills and overflows. Even if the
> producer is then stopped, messages are still only received every 3s - with
> qpidd on 100% CPU and the receiver on 5%. Also the resident size of qpidd
> reaches 5GB, yet the queue is only 2GB.