I've used RHEL MRG (Qpid) for the last 8 years, and what I can say is that
the qpidd C++ broker never releases memory back to the OS.

I've found that the queue code uses std::vector in some paths and may need
to call vector.shrink_to_fit(), because std::vector (like some other C++
STL containers) never releases memory back on its own; the allocation is
only freed when the container is destroyed. I haven't verified whether a
patch that calls shrink_to_fit() when the queue is empty would help, but it
is a good place to start checking.

On Tue, Feb 27, 2018 at 7:12 PM, CLIVE <[email protected]> wrote:

> Hi,
>
> I have had a similar problem with a production QPID system, running CentOS
> 6.8 for the past 14 months.
>
> I tried the good advice that Ted gave, using the malloc tuning environment
> variables, but unfortunately this made no difference in our environment.
> We would get about 5 days out of the system before the broker used up all
> 64G of memory and was killed by the kernel's OOM killer.
>
> I even tried using the gperftools tcmalloc library, but again this made
> no difference to the memory consumption.
>
> Even moving up from version 0.34 to 1.36 didn't seem to make any
> difference. Then about 8 weeks ago, out of the blue, the problem just went
> away. The QPID broker is now staying solid at 5% memory usage. We have not
> changed our usage pattern in any way, still the same number of consumers
> and producers, but our message flow rate has increased.
>
> I have tried everything to replicate this issue, but to no avail.
> Interestingly, the problem only affects our production system; all our
> other environments are fine. This leads me to believe it is somehow flow
> related, which I know sounds a bit stupid, but we had been running in
> production for about a year before the memory problem arose (at a point in
> time when the flow rates increased).
>
> Then again, 8 weeks ago the system saw another step increase in message
> rates and the memory problem has now gone away (for the time being!)
>
> I don't know how flow rate would affect the memory in this way, hopefully
> someone else might have some ideas.
>
> The other area I looked at was message size (we have a wide range of
> message sizes in our system, 1K-10M), but I could never replicate the issue.
>
> Sorry I cannot be of more help, but if you do uncover the problem I
> would be very interested to hear about it.
>
> Clive
>
>
> On 27/02/2018 20:36, jbelch wrote:
>
>> Thanks.  I will try it and let you know if it works.
>>
>>
>>
>> --
>> Sent from: http://qpid.2158936.n2.nabble.com/Apache-Qpid-users-f2158936.html
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected]
>> For additional commands, e-mail: [email protected]
>>
>>
>>
>
>
