I tried the failure-timeout.
But I noticed that when the failure-timeout resets the failcount the
resource becomes OK in the crm_mon view.
However the resource is still failing.
This shouldn't happen. Can this behaviour be changed with some setting?
Regards,
Johan
On 24-04-13 07:23, Andrew Beekhof wrote:
On Wednesday, 24 April 2013 at 08:35:29, Johan Huysmans wrote:
> I tried the failure-timeout.
> But I noticed that when the failure-timeout resets the failcount the
> resource becomes OK in the crm_mon view.
> However the resource is still failing.
> This shouldn't happen. Can this behaviour be changed with some setting?
I'm still investigating what happens in my situation.
So I have a cloned resource with on-fail set to block,
and I configured a failure-timeout of 30s.
Another resource group depends on the cloned resource (order and
colocation constraints configured).
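For reference, a minimal sketch of that setup in crm shell syntax. d_tomcat is the resource name from the status output below; the agent, clone/group/constraint names, the group member and its IP are placeholders I've assumed for illustration:

    primitive d_tomcat ocf:heartbeat:tomcat \
        op monitor interval=10s on-fail=block \
        meta failure-timeout=30s
    clone cl_tomcat d_tomcat
    primitive p_ip ocf:heartbeat:IPaddr2 params ip=192.0.2.10
    group g_app p_ip
    order o_app inf: cl_tomcat g_app
    colocation col_app inf: g_app cl_tomcat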
-- start situation
* scope=status name=fail-count-d_tomcat value=0
I have tried to reproduce this test, because I had the same problem.
Origin:
One node cluster, node int2node1 running with IP address 10.16.242.231, quorum ignore, DC int2node1
[root@int2node1 sysconfig]# crm_mon -1
Last updated: Wed Apr 24 09:49:32 2013
Last change: Wed Apr 24
On 24-04-13 13:24, Lars Marowsky-Bree wrote:
On 2013-04-24T10:37:24, Johan Huysmans johan.huysm...@inuits.be wrote:
-- start situation
* scope=status name=fail-count-d_tomcat value=0
* depending resource group running on node
* crm_mon shows everything ok
-- a failure occurs
* scope=status
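In case it helps with debugging: you can query and reset the failcount by hand, which makes it easier to see exactly when crm_mon flips back to OK. A sketch, reusing d_tomcat from above (both commands default to the local node):

    # query the current failcount for d_tomcat
    crm_failcount -G -r d_tomcat
    # the same value read as a status-section attribute
    crm_attribute -t status -n fail-count-d_tomcat -G
    # clear the failcount and failed-op history explicitly
    crm_resource -C -r d_tomcat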
Hi Angel,
two hints from my side. As you're working with Ubuntu,
ask on this list which setup is or will be the best
for corosync + pacemaker. I'm pretty sure
(but I really don't know) that you'll get the advice
to drop cman.
When you use cman + pacemaker, then stonithing works
as
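If you do end up on corosync without cman, the glue is a small plugin stanza in /etc/corosync/corosync.conf. A sketch; with ver: 1 the pacemaker daemons are started by their own init script rather than spawned by corosync:

    service {
        # load the pacemaker cluster plugin; with ver: 1,
        # pacemakerd is started separately via its init script
        name: pacemaker
        ver:  1
    }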
We are building a new web farm to replace our 7-year-old system. The old
system used ipvs/ldirectord/heartbeat to implement redundant load
balancers. All web server nodes were physical boxes.
The proposed new system will utilize approximately 24 virtual machines
as web servers. Load
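For comparison with the new design, the old ipvs/ldirectord approach boils down to a fragment like this in ldirectord.cf (all addresses and the health-check strings are placeholders, not your actual values):

    # VIP 192.0.2.10 balanced across two real web servers,
    # health-checked by fetching index.html over HTTP
    checktimeout=10
    checkinterval=5
    virtual=192.0.2.10:80
            real=192.0.2.21:80 gate
            real=192.0.2.22:80 gate
            service=http
            request="index.html"
            receive="OK"
            scheduler=wlc
            protocol=tcp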
On 13-04-24 01:16 AM, Andrew Beekhof wrote:
Almost certainly you are hitting:
https://bugzilla.redhat.com/show_bug.cgi?id=951340
Yup. The patch posted there fixed it.
I am doing my best to convince the people who make decisions that this is
worthy of an update before 6.5.
I've added
Hi,
from what I understand, you want to split vhosts onto separate virtual IP
addresses and join all nodes into one cluster?
I don't think it is a good idea in the case of a web farm; as you mentioned, it
won't scale so well. What if traffic on a certain vhost (virtual IP) grows
and you need to spread it across
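To make that concrete, one floating IP per vhost in crm shell syntax would look roughly like this (resource names and addresses are placeholders):

    # one floating IP per vhost; each can fail over, or later be
    # moved to its own cluster, independently of the others
    primitive vip_vhost1 ocf:heartbeat:IPaddr2 \
        params ip=192.0.2.101 cidr_netmask=24 \
        op monitor interval=10s
    primitive vip_vhost2 ocf:heartbeat:IPaddr2 \
        params ip=192.0.2.102 cidr_netmask=24 \
        op monitor interval=10s

Note that moving a VIP between nodes only redistributes the capacity you already have; it doesn't add any, which is exactly the scaling limit you're pointing at.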