"You're sure there is no backed-up queue anywhere?"
Pretty sure. It's a C++ producer sending messages to amq.match on a
broker on the same host; that broker has a single 1GB queue for the
queue route federated to the destination broker. The queue for the queue
route has been set up with ring policy, i.e. a circular queue.
At some stage the queue could well have filled, but it's a circular
queue, so I wouldn't expect to see qpidd consume more than a few GB,
certainly not 35GB.
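For concreteness, a queue like that plus the queue route can be set up along
these lines (the host and queue names below are placeholders, not our actual
config):

```shell
# Create a 1GB non-persistent queue with ring (circular) limit policy
# on the source broker. Host and queue names are illustrative only.
qpid-config -b source-host:5672 add queue fed.queue \
    --max-queue-size=1073741824 --limit-policy=ring

# Federate that queue to amq.match on the destination broker via a
# queue route; -s configures it as a source route, as in our setup.
qpid-route -s queue add dest-host:5672 source-host:5672 amq.match fed.queue
```

With ring policy the broker should drop the oldest messages once the 1GB
limit is hit, which is why the 35GB resident size looks like a leak rather
than normal queue growth.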
So what we're seeing is:
- crappy network = broker memory creeps up until the system eventually
dies under the weight of swapping
- decent network = broker memory behaves as one would expect
We're seeing connection closed messages in the logs when the network is
behaving poorly.
I'd agree that it does seem weird that simple reconnections should
appear to cause such a large growth, but it does appear to be a
consistent pattern and I can't think of any other explanation.
The queue is a 1GB circular non-persistent queue bound to amq.match via
a couple of fairly basic headers.
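For completeness, the kind of headers binding we're using looks roughly like
this; the syntax is from memory for the 0.x qpid-config tool, and the header
key/value names here are made up:

```shell
# Bind the queue to the headers exchange on the source broker.
# 'all' means every listed header must match (x-match=all semantics);
# 'my-binding' and the key=value pairs are hypothetical examples.
qpid-config -b source-host:5672 bind amq.match fed.queue my-binding \
    all data-type=example source-system=producerA
```

Messages published to amq.match whose application headers carry both of
those key/value pairs get routed onto the queue; anything else is dropped.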
I did wonder if there was some headers exchange weirdness going on, as
that seems less well supported than the other message patterns. We're
using queue routes because we want to be able to have a non-default size
and ring policy, but with 0.8 we had no other choice anyway: there was a
bug with dynamic routes (I know you fixed that in 0.12), and qpid-route
doesn't support static routes using amq.match. So I'm slightly
suspicious that we might see something unusual, as I think we may use
the headers exchange more than most.
I'm not convinced the Jira you mentioned is related, as we're definitely
supplying the correct federation information; in our case the network
problem seems to be causing the connection to drop out periodically and
auto-reconnect.
Frase
On 01/03/12 15:04, Gordon Sim wrote:
On 02/29/2012 07:07 PM, Fraser Adams wrote:
Hi All,
I think that we may have stumbled across a potential memory/resource
leak.
We have one particular set-up where a C++ producer client (using
qpid::client - don't ask, it's a long story...) writes to a 0.8
broker hosted on the same server. That broker is then federated via a
queue route to amq.match on another (0.8) broker. The queue route is a
source route set up via qpid-route -s.
We've been having all sorts of fun and games with respect to
performance, which we've narrowed down to some dodgy networking.
However, one of the other effects we've noticed is that the broker
co-located with the producer client eats memory. The queue for the queue
route is 1GB, but qpidd eventually grows to ~35GB and sends the whole
setup into swap.
So with respect to the network problem, we're suspecting a dodgy switch
somewhere. What is interesting is that when we checked with ethtool, the
NIC was reporting that half duplex had been negotiated - ouch!!! Hence
our suspicion of a dodgy switch.
Now, when the NIC was explicitly set to 100BASE-T full duplex, our
performance rocketed and the broker on the producer system appears
(touch wood) to have stable memory performance.
What I'm suspecting is that the dodgy network link has been causing
connection drop-outs and the broker has been automatically reconnecting
(the logs confirm this), and I'm thinking that there is a resource leak
somewhere in the reconnection process.
https://issues.apache.org/jira/browse/QPID-3447 perhaps? Though I
wouldn't have expected that to cause such a large growth in memory.
You're sure there is no backed-up queue anywhere?
---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project: http://qpid.apache.org
Use/Interact: mailto:[email protected]
---------------------------------------------------------------------