We tried some of these (well, I did last night), but the issue was that 
eventually rabbitmq actually died.  I was trying some of the eval commands to 
get at what was in the mgmt_db, but any get-status call eventually led to a 
timeout error.  Part of the problem is that we can go from a warning to zomg 
out of memory in under 2 minutes.  Last night it was taking only 2 hours to 
chew through 40GB of ram.  Messaging rates were in the 150-300/s range, which 
is not all that high (another cell is doing a constant 1k-2k).
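
A rough back-of-envelope from those numbers (assuming roughly linear growth 
and the midpoint of that rate range):

awk 'BEGIN { mbps = 40*1024/(2*3600); printf "%.1f MB/s growth, ~%.0f KB retained per msg at 225 msg/s\n", mbps, mbps*1024/225 }'
# -> 5.7 MB/s growth, ~26 KB retained per msg at 225 msg/s

So the mgmt db is hanging on to tens of KB per message, which would explain 
why even modest rates bury it.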

___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Matt Fischer <m...@mattfischer.com>
Date: Tuesday, July 5, 2016 at 11:25 AM
To: Joshua Harlow <harlo...@fastmail.com>
Cc: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>, OpenStack Operators <openstack-operat...@lists.openstack.org>
Subject: Re: [Openstack-operators] [nova] Rabbit-mq 3.4 crashing (anyone else seen this?)


Yes! This happens often, but I wouldn't call it a crash; the mgmt db just gets 
behind and then eats all the memory. We've started monitoring it and have 
runbooks on how to bounce just the mgmt db. Here are my notes on that:

restart the rabbitmq mgmt server - this seems to clear the memory usage:

rabbitmqctl eval 'application:stop(rabbitmq_management).'
rabbitmqctl eval 'application:start(rabbitmq_management).'

run GC on rabbit_mgmt_db:
rabbitmqctl eval '(erlang:garbage_collect(global:whereis_name(rabbit_mgmt_db))).'
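
To check whether that GC actually reclaims anything, compare the process heap 
before and after (erlang:process_info/2 is standard Erlang; the value is in 
bytes):

rabbitmqctl eval 'erlang:process_info(global:whereis_name(rabbit_mgmt_db), memory).'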

status of rabbit_mgmt_db:
rabbitmqctl eval 'sys:get_status(global:whereis_name(rabbit_mgmt_db)).'

how much memory the rabbitmq mgmt DB is using:
/usr/sbin/rabbitmqctl status | grep mgmt_db
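
Our monitoring check is basically a cron wrapper around that (a sketch; the 
8GB threshold and the alert action are our choices, not anything built into 
rabbit):

#!/bin/bash
mgmt_bytes=$(/usr/sbin/rabbitmqctl status | grep -o 'mgmt_db,[0-9]*' | cut -d, -f2)
if [ "${mgmt_bytes:-0}" -gt $((8 * 1024 * 1024 * 1024)) ]; then
    echo "rabbit_mgmt_db at ${mgmt_bytes} bytes - time to bounce rabbitmq_management"
fi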

Unfortunately I couldn't confirm that an upgrade would fix this for sure, and 
any settings changes to reduce the number of monitored events also require a 
restart of the cluster. The other issue with an upgrade for us is the ancient 
version of erlang shipped with trusty. When we upgrade to Xenial we'll upgrade 
erlang and rabbit and hope the problem goes away. I'll also probably tweak the 
settings on retention of events then; a sketch of what I mean is below.
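
These are the knobs in question, in /etc/rabbitmq/rabbitmq.config (erlang 
terms; the values below are illustrative, not something we've tested):

[
  {rabbit, [
    %% emit stats events less often, in ms (the default is 5000)
    {collect_statistics_interval, 30000}
  ]},
  {rabbitmq_management, [
    %% {max-age-in-seconds, sample-interval-in-seconds} per tier
    {sample_retention_policies, [
      {global,   [{605, 5}, {3660, 60}, {29400, 600}, {86400, 1800}]},
      {basic,    [{605, 5}, {3600, 60}]},
      {detailed, [{10, 5}]}
    ]}
  ]}
].

Both only take effect after bouncing the node, which is the cluster-restart 
problem mentioned above.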

Also for the record the GC doesn't seem to help at all.

On Jul 5, 2016 11:05 AM, "Joshua Harlow" <harlo...@fastmail.com> wrote:
Hi ops and dev-folks,

We over at godaddy (running rabbitmq with openstack) have been hitting an 
issue that causes the `rabbit_mgmt_db` to consume nearly all of the process's 
memory (after a given amount of time).
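
One way to watch it get into this state is the mgmt db process's Erlang 
mailbox, which is a decent proxy for how far behind it is (standard erlang 
process introspection, nothing rabbit-specific):

rabbitmqctl eval 'erlang:process_info(global:whereis_name(rabbit_mgmt_db), message_queue_len).'

A large and steadily growing message_queue_len means the db can't keep up 
with the stats events being sent to it.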

We've been thinking that this bug (or bugs?) may have existed for a while, and 
that our dual-version upgrade path (where we upgrade the control plane and 
then slowly/eventually upgrade the compute nodes to the same version) has 
somehow triggered this memory leak, since it has happened most prominently on 
our cloud that was running nova-compute at kilo and the other services at 
liberty (thus using the versioned objects code path more frequently due to 
needing translations of objects).

The rabbit we are running is 3.4.0 on CentOS Linux release 7.2.1511 with 
kernel 3.10.0-327.4.4.el7.x86_64 (do note that upgrading to 3.6.2 seems to 
make the issue go away):

# rpm -qa | grep rabbit

rabbitmq-server-3.4.0-1.noarch

The logs that seem relevant:

```
**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************

=INFO REPORT==== 1-Jul-2016::16:37:46 ===
accepting AMQP connection <0.23638.342> (127.0.0.1:51932 -> 127.0.0.1:5671)

=INFO REPORT==== 1-Jul-2016::16:37:47 ===
vm_memory_high_watermark clear. Memory used:29910180640 allowed:47126781542
```
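
For context, the "allowed" number in that log line comes from the 
vm_memory_high_watermark setting, the fraction of system RAM above which 
publishers get blocked (0.4 is the default):

```
[
  {rabbit, [{vm_memory_high_watermark, 0.4}]}
].
```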

This happens quite often, and the crashes have been affecting our cloud over 
the weekend (which made some dev/ops folks not so happy, especially given the 
July 4th mini-vacation).

Has anyone else seen anything similar?

For those interested, this is the upstream thread where I'm trying to get 
confirmation from the upstream users/devs (it also has erlang crash dumps 
attached/linked):

https://groups.google.com/forum/#!topic/rabbitmq-users/FeBK7iXUcLg

Thanks,

-Josh

_______________________________________________
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
