I just noticed that on serverB, ps aux | grep mem shows the following:

/usr/lib64/erlang/erts-5.8.5/bin/beam -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -noshell -noinput -sname rabbit@prowl -boot /var/lib/rabbitmq/mnesia/rabbit@serverB-plugins-expand/rabbit -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -kernel error_logger {file,"/var/log/rabbitmq/[email protected]"} -sasl sasl_error_logger {file,"/var/log/rabbitmq/[email protected]"} -os_mon start_cpu_sup true -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@serverB"

Shouldn't that be showing serverA instead? Also, should it be showing the FQDN (serverA.foobar.com) or not?
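
From what I understand, the -sname flag in that command line means the Erlang node uses the short hostname, not the FQDN. As a sanity check, this is a generic Python sketch (nothing Baruwa-specific) to compare what a box reports for each:

    import socket

    # Kernel hostname, usually the short name -- roughly what -sname style
    # node names are built from.
    print("short hostname:", socket.gethostname())
    # Fully qualified name, e.g. serverB.foobar.com.
    print("fqdn:", socket.getfqdn())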

I am getting memcached errors on Server B now as well:

WebApp Error: <class '_pylibmc.MemcachedError'>: error 47 from memcached_get(mq-adminuser-None-1_count(mailq.id): SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY, host: serverA.foobar.com:11211 -> libmemcached/get.cc:314
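
To rule out the network side, I may try a direct pylibmc test from Server B against Server A's memcached. A minimal sketch, using the host and port from the error above (everything else is illustrative):

    import pylibmc

    # Connect to Server A's memcached from Server B and round-trip one key.
    mc = pylibmc.Client(["serverA.foobar.com:11211"], binary=True)
    mc.set("connectivity-test", "ok", time=60)   # expires after 60 seconds
    print(mc.get("connectivity-test"))           # should print 'ok' if reachable

If that fails too, it could be memcached on Server A only listening on 127.0.0.1, or port 11211 being blocked between the two boxes.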

LOL, all this guessing is driving me mad. I've wasted way too much time on this already, but I hate to give up.



On 2013-02-22 13:14, Raymond Norton wrote:
I asked the question because I am concluding that the information is only
available on the localhost of each of my scanners, which would explain the
way node stats are working for me. I'm presuming the connection to whatever
produces those stats is where my misconfiguration is.
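
One way to test that theory is to ping the workers through the shared broker on Server A and see which ones actually answer. A rough sketch with Celery's inspect API; the broker URL, vhost and credentials are placeholders for whatever is in the Baruwa config:

    from celery import Celery

    # Placeholder broker URL -- substitute the real user/password/vhost.
    app = Celery(broker="amqp://baruwa:password@serverA.foobar.com:5672/baruwa")

    # Ping every worker reachable through that broker; ideally both scanners reply.
    print(app.control.inspect(timeout=5).ping())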




On 02/22/2013 01:08 PM, Raymond Norton wrote:
The celery log




On 02/22/2013 12:00 PM, Mark Chaney wrote:
Where are you seeing that entry?

On 2013-02-22 11:52, Raymond Norton wrote:
Where does celery pull this info from:

Task get-system-status[2dc1142e-b526-4f2c-816e-0cb798362d83]
succeeded in 0.169036865234s: {'load': (0.6, 0.51, 0.42), 'uptime':
'18...
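
I assume whatever runs on the scanner just reads these from the local OS, roughly like the sketch below (this is not Baruwa's actual code, just my guess at the general shape):

    import os

    def get_system_status():
        # 1, 5 and 15 minute load averages of the host the worker runs on.
        load = os.getloadavg()
        # Uptime in seconds, read from the local /proc.
        with open("/proc/uptime") as f:
            uptime = float(f.read().split()[0])
        return {"load": load, "uptime": uptime}

    print(get_system_status())

If that's roughly right, it would explain why the numbers only ever describe whichever host the celery worker is running on.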


On 02/22/2013 11:14 AM, Mark Chaney wrote:
Are you talking about /var/log/rabbitmq/[email protected] on Server A? If so, it's correctly showing connections from Server B, but it seems to be very basic logging.

On 2013-02-22 11:11, Raymond Norton wrote:
Does the rabbitmq log provide any hints?



On 02/22/2013 11:08 AM, Mark Chaney wrote:
Hmm, I must have missed a step when I reinstalled everything after switching to Baruwa Enterprise (I wanted upgrades/fixes to be easier to apply), because I am no longer getting status to work on Server B. Not only does Server A not show the correct status for Server B, but Server B does not show the correct status of itself either. Though I just realized that I get an error when I try to release a message stored on Server A while doing it from Server B. Need to look into that first, I guess.


On 2013-02-22 09:31, Raymond Norton wrote:
Does anyone know what I might be missing or have configured wrong here?


I set servers A and B to use the memcached, rabbitmq, and postgres instances on server A.

On Server A, I added it as a node to itself. Checking status, the celery
log shows the request was properly processed.

On Server B, I added it as a node to itself. Checking status, everything comes back fine in the Server A celery log, so we know rabbitmq is working.

I added Server B as a node on Server A, but it comes up faulty and nothing is logged in celery. Same the other way around (adding Server A as a node on B).


However, I can release messages from either server, if I am on the
local web interface that the message passed through.


It seems like it's the way rabbitmq is called from the local box versus a remote connection. Either that, or I am missing a config change on Server B.
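
A generic sketch of the pattern I mean, assuming per-host queues: the task name comes from the celery log, but the broker URL and queue name are placeholders, not necessarily what Baruwa really does:

    from celery import Celery

    # Placeholder broker URL pointing at Server A's rabbitmq; the per-host
    # queue name is also made up for illustration.
    app = Celery(broker="amqp://baruwa:password@serverA.foobar.com:5672/baruwa")

    # Server A's web interface asking Server B for its status, routed to a
    # queue that only Server B's worker is supposed to consume:
    app.send_task("get-system-status", queue="serverB.foobar.com")

    # If no worker on Server B is consuming that queue, the message just sits
    # on the broker and nothing is ever written to the celery log.

So running rabbitmqctl list_queues name messages consumers on Server A while a status check is pending might show whether the request is being published at all, and whether anything is consuming it.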

_______________________________________________
Keep Baruwa FREE - http://pledgie.com/campaigns/12056
