Just answering myself quickly so people don't waste their time reading
long logs and config.
Actually I simply forgot to define a location constraint for my
fencing resource. I _have to_ do it as I am using an opt-in cluster.
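For reference, in an opt-in cluster (symmetric-cluster=false) nothing is
allowed to run without an explicit location constraint, the fencing resource
included. A minimal crmsh sketch of the missing piece, with illustrative
resource and node names:

    # opt-in cluster: resources run nowhere unless explicitly allowed
    property symmetric-cluster=false
    # allow the (illustrative) stonith resource to run on node1
    location loc_st_fencing st_fencing 100: node1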
Sorry for the noise.
___
Hi,
I am setting up a cluster on Debian wheezy.
I have installed pacemaker using the Debian-provided packages (so am
running 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff).
I have roughly 10 nodes, among which some nodes are acting as SAN
(exporting block devices using the AoE protocol) and others
again. In the meantime I'd be delighted to
hear what you guys think about that.
Regards, Alex.
2014-03-07 4:21 GMT+01:00 Andrew Beekhof and...@beekhof.net:
On 3 Mar 2014, at 3:56 am, Alexandre alxg...@gmail.com wrote:
Hi,
I am setting up a cluster on debian wheezy.
I have installed
this pretty old version of pacemaker?
2014-03-08 10:36 GMT+01:00 Alexandre alxg...@gmail.com:
Hi Andrew,
I have tried to stop and start the first resource of the ordering
constraint (cln_san), hoping it would trigger a start attempt of the
second resource of the ordering constraint (cln_mailstore
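For context, the constraint involved is a plain ordering between the two
clones, roughly of this shape in crmsh (resource names are the ones from the
message, the constraint itself is only a sketch):

    # start cln_san completely before cln_mailstore is started
    order ord_san_before_mailstore inf: cln_san cln_mailstore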
Andrew Beekhof and...@beekhof.net:
On 9 Mar 2014, at 10:36 pm, Alexandre alxg...@gmail.com wrote:
So...,
It appears the problem doesn't come from the primitive but from the
cloned resource. If I use the primitive instead of the clone in the
order constraint (thus deleting the clone and the group
Hi,
I am configuring a cluster on nodes that don't have pcs installed
(pacemaker 1.1.7 with crmsh).
I would like to configure colocated sets of resources (as shown
here: http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html-single/Pacemaker_Explained/#s-resource-sets-collocation)
in that
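As a hedged sketch of what the linked documentation describes: in crmsh a
colocation that lists more than two resources is turned into a resource set,
e.g. (illustrative resource names):

    # rsc_a, rsc_b and rsc_c must all end up on the same node
    colocation col_same_node inf: rsc_a rsc_b rsc_c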
Have you tried to patch the monitor action of your RA, so that it sets a
temporary location constraint on the node to avoid it becoming master?
Something like:
location loc_split_cluster -inf: MsRsc:Master $node
Not sure about the above crm syntax, but that's the idea.
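A hedged sketch of what such a constraint could look like when entered via
"crm configure", assuming a master/slave resource named MsRsc and a node
named node1 (both names illustrative):

    # forbid the Master role on node1 until the situation is resolved
    location loc_no_master_node1 MsRsc \
        rule $role=Master -inf: #uname eq node1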
On 8 Apr 2014 02:52,
On 10 Apr 2014 15:44, Christian Ciach derein...@gmail.com wrote:
I don't really like the idea of periodically polling crm_node -q for the
current quorum state. No matter how frequently the monitor function gets
called, there will always be a small time frame where both nodes will be in
the
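For reference, the polling in question boils down to something like this in
the RA's monitor action (a sketch only):

    # crm_node -q prints 1 if the local partition has quorum, 0 otherwise
    if [ "$(crm_node -q)" = "1" ]; then
        : # we have quorum
    fi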
Why did you hide the resource agent provider? Is it a custom one
On 30 Apr 2014 01:10, Andrew Beekhof and...@beekhof.net wrote:
On 29 Apr 2014, at 11:06 pm, Sékine Coulibaly scoulib...@gmail.com
wrote:
Hi,
Let me explain my use case. I'm using RHEL 6.3
fwiw, there are updates to
IIRC the xen RA uses 'xm'. However fixing the RA is trivial and worked
for me (if you're using the same RA).
On 2014-07-08 21:39, Tobias Reineck tobias.rein...@hotmail.de wrote:
Hello,
I am trying to build a Xen HA cluster with pacemaker/corosync.
Xen 4.3 works on all nodes and also the xen
Actually I did it for the stonith resource agent external:xen0.
xm and xl are supposed to be semantically very close and as far as I can
see the ocf:heartbeat:Xen agent doesn't seem to use any xm command that
shouldn't work with xl.
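For what it's worth, the change I made was essentially mechanical; a hedged
sketch of that kind of edit (the plugin path is illustrative and may differ
on your distribution):

    # switch the external/xen0 stonith plugin from xm to xl
    cp /usr/lib/stonith/plugins/external/xen0 /root/xen0.bak
    sed -i 's/\bxm /xl /g' /usr/lib/stonith/plugins/external/xen0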
What error do you have when using xl instead of xm?
Regards.
I have seen this behavior on several virtualised environments. When VM
backup starts, the VM actually freezes for a (short?) period of time. I
guess it then stops responding to the other cluster nodes, thus triggering
unexpected failover and/or fencing. I have this kind of behavior on VMware
env
I think you can use a single colocation with a set of resources. crmsh
allows you to create such a colocation with:
crm colocation vm_with_disks inf: vm_srv ( ms_disk_R:Master
ms_disk_S:Master )
This forces the cluster to place the master resources on the same host,
starting them without
You should use an opt-in cluster. Set the cluster option
symmetric-cluster=false. This will tell Pacemaker not to place a resource
anywhere on the cluster unless a location rule explicitly tells it where it
should run.
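A minimal sketch of that setup in crmsh, with illustrative resource and node
names:

    # resources run nowhere unless a location constraint allows them
    property symmetric-cluster=false
    # pin the sql resource to the sql host and the www resource to the www host
    location loc_sql rsc_sql 100: sql1
    location loc_www rsc_www 100: www1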
Pacemaker will still monitor sql resources on www hosts and return rc 5, but
On 13 Nov 2014 12:09, Arjun Pandey apandepub...@gmail.com wrote:
Hi
I am running a 2 node cluster with this config
Master/Slave Set: foo-master [foo]
Masters: [ bharat ]
Slaves: [ ram ]
AC_FLT (ocf::pw:IPaddr): Started bharat
CR_CP_FLT (ocf::pw:IPaddr): Started bharat
CR_UP_FLT
Hi list,
I am facing a very strange issue.
I have set up a postgresql cluster (with streaming replication).
The replication works OK when started manually but the RA never seems to
promote any host where the resource is started.
My config is below:
node pp-obm-sgbd.upond.fr
node pp-obm-sgbd2.upond.fr \
to be closed and all users are encouraged to
switch. :)
On 20/02/15 02:18 PM, Alexandre wrote:
Hi list,
I am facing a very strange issue. I have set up a postgresql cluster
(with streaming replication). The replication works OK when started
manually but the RA never seems to promote any host where
Hi,
I have a pacemaker / corosync / cman cluster running on Red Hat 6.6.
Although the cluster is working as expected, I have some traces of old
failures (from several months ago) that I can't get rid of.
Basically I have set cluster-recheck-interval=300 and
failure-timeout=600 (in rsc_defaults) as shown
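In crmsh that typically looks like the following (a sketch using the values
mentioned above):

    # re-run the policy engine every 300s so expired failures are re-evaluated
    property cluster-recheck-interval=300
    # consider a recorded failure expired after 600s
    rsc_defaults failure-timeout=600

Note that failure-timeout only stops expired failures from influencing
placement; the old failed actions shown by crm_mon usually stay visible until
an explicit "crm resource cleanup <rsc>".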
http://www.clusterlabs.org/wiki/Debian_Lenny_HowTo
Excuse me for the bad English.
Best Regards,
Alexandre
On Fri, Nov 20, 2009 at 2:53 PM, Matthew Palmer mpal...@hezmatt.org wrote:
On Fri, Nov 20, 2009 at 02:42:29PM -0200, Alexandre Biancalana wrote:
I'm building a 4 node cluster where 2 nodes will export drbd devices
via the ietd iSCSI target (storage nodes) and the other 2 nodes will run xen
vm (app
On Fri, Nov 20, 2009 at 4:35 PM, Andrew Beekhof and...@beekhof.net wrote:
On Fri, Nov 20, 2009 at 5:42 PM, Alexandre Biancalana
biancal...@gmail.com wrote:
Hi list,
I'm building a 4 node cluster where 2 nodes will export drbd devices
via ietd iscsi target (storage nodes) and other 2 nodes