I'm sorry for spamming; it seems that in my db.properties file on the second MGMT server, I had
127.0.0.1 as the Cluster IP.
After this was changed to the real IP address, I no longer get those spam
log messages; everything has looked fine for some hours.
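For anyone hitting the same thing, the fix was a one-line change in db.properties. A sketch of the relevant entry, assuming the stock `cluster.node.IP` property name and the usual packaged path (the address below is a placeholder, not my real one):

```
# db.properties (typically /etc/cloudstack/management/db.properties)
# on the second MGMT server

# before (wrong): the node advertised itself as loopback
#cluster.node.IP=127.0.0.1

# after: the node's real, reachable address
cluster.node.IP=10.1.1.12
```

Each management server should advertise an IP that the other cluster members can actually reach, otherwise the nodes keep trying (and failing) to talk to themselves.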
On 5 June 2015 at 16:24, Andrija Panic wrote:
Hi,
any hint on how to proceed?
On haproxy I see roughly 50%/50% sessions across the 2 backend servers,
but inside the DB it all points to the one mgmt_server_ip...
Thanks,
Andrija
On 4 June 2015 at 19:27, Andrija Panic andrija.pa...@gmail.com wrote:
And if of any help, another hint:
while I'm having these lines sent to the logs in high volume, if I stop the second
mgmt server, the first one (the one producing all these lines) doesn't stop
producing them, so the log is still heavily written to; only when I also restart
mgmt on the 1st node (with the 2nd node down), then
Just checked: in the HOSTS table, all agents are connected (via haproxy) to
the first mgmt server. I just restarted haproxy, and the DB still says the
same mgmt_server_id for all agents - which is not really true.
Actually, on the haproxy itself (statistics page) I can see almost 50%-50%
And I could add: these lines (in this volume) only appear on the first mgmt
server (actually I have 2 separate but identical ACS installations, and I see
the same behaviour on both).
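The check above (which MS each agent is pinned to) boils down to grouping the `host` table by `mgmt_server_id`. A minimal runnable sketch, using Python's built-in sqlite3 as a stand-in for the real MySQL `cloud` database, with made-up host rows (the column name `mgmt_server_id` is from the real schema; ids and host names are invented):

```python
import sqlite3

# In-memory stand-in for the real MySQL `cloud` database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE host (id INTEGER, name TEXT, mgmt_server_id INTEGER)")

# Illustrative rows only: here every agent points at the same MS id,
# mirroring the symptom (haproxy balances 50/50, but ownership doesn't).
db.executemany(
    "INSERT INTO host VALUES (?, ?, ?)",
    [
        (1, "kvm-node-01", 345049103441),
        (2, "kvm-node-02", 345049103441),
        (3, "kvm-node-03", 345049103441),
    ],
)

# Roughly the same query against the real DB would be:
#   SELECT mgmt_server_id, COUNT(*) FROM cloud.host GROUP BY mgmt_server_id;
rows = db.execute(
    "SELECT mgmt_server_id, COUNT(*) FROM host GROUP BY mgmt_server_id"
).fetchall()
for ms_id, n in rows:
    print(ms_id, n)  # → 345049103441 3
```

If the distribution is healthy you should see roughly even counts across the two MS ids; all agents on one id means the second MS never took ownership of any host.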
On 4 June 2015 at 19:18, Andrija Panic andrija.pa...@gmail.com wrote:
Thanks Koushik,
I will check and let you know - but an 11GB log file for 10h? I don't expect
this is expected :)
I understand that the message is there because of the setup, it's just an
awful lot of lines.
Will check, thanks for the help!
Andrija
On 4 June 2015 at 18:53, Koushik Das wrote:
This is expected in a clustered MS setup. What is the distribution of HV hosts
across these MSs (check the host table in the DB for the MS id)? The MS owning
a HV host processes all commands for that host.
Grep for the sequence numbers (e.g. 73-7374644389819187201) in both MS logs
to correlate.
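The correlation step is just a grep for the same sequence number in both management-server logs. A minimal sketch in Python with two made-up log excerpts; the sequence-number format mirrors the one quoted above, but the timestamps, class names, and messages are invented, and in practice you would grep the real files, e.g. `grep '73-7374644389819187201' /var/log/cloudstack/management/management-server.log` on each node:

```python
# Two made-up management-server log excerpts (one per MS node).
ms1_log = """\
2015-06-04 18:01:02 DEBUG AgentManagerImpl - Seq 73-7374644389819187201: Sending cmd
2015-06-04 18:01:03 DEBUG AgentManagerImpl - Seq 73-7374644389819187202: Sending cmd
"""
ms2_log = """\
2015-06-04 18:01:02 DEBUG ClusteredAgentManagerImpl - Seq 73-7374644389819187201: Forwarding
"""

def grep(text, needle):
    """Return the lines of `text` containing `needle` (what grep would print)."""
    return [line for line in text.splitlines() if needle in line]

seq = "73-7374644389819187201"
matches = {"MS1": grep(ms1_log, seq), "MS2": grep(ms2_log, seq)}
for ms, lines in matches.items():
    for line in lines:
        print(ms, "|", line)
```

Seeing the same sequence number on both nodes tells you one MS forwarded the command to the peer that owns the host, which is exactly the clustered-MS behaviour described above.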
Hi,
I have 2 ACS MGMT servers, load-balanced properly (AFAIK), and sometimes it
happens that on the first node we get an extreme number of the following line
entries in the log file, which produces many GB of log in just a few hours or less:
(as you can see here they are not even that frequent, but sometimes,