On Thursday, 9 April 2015 at 10:27:51, you wrote:
(...)
why does pacemaker try to move VM to joining node?
...
(...)
<rsc_location id="cli-ban-lnx0106a-on-lnx0129a" node="lnx0129a" role="Started" rsc="lnx0106a" score="-INFINITY"/>
...
You ordered Pacemaker to do so, probably by a crm resource move or ban command; that is what creates cli-* constraints like the one above.
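For reference, a sketch of how such a constraint could be inspected and removed with the crm shell (constraint id and resource name taken from the snippet above):

$ crm configure show | grep cli-        # list any cli-prefer/cli-ban constraints
$ crm resource unmove lnx0106a          # clear the move/ban constraint for this resource
$ crm configure delete cli-ban-lnx0106a-on-lnx0129a   # or delete the constraint by id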
From: Michael Schwartzkopff m...@sys4.de
To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
Date: 08.04.2015 17:12
Subject: Re: [Pacemaker] update cib after fence
On Wednesday, 8 April 2015 at 15:03:48, philipp.achmuel...@arz.at wrote:
hi,
how to clean up the CIB on a node after an unexpected system halt?
The failed node still thinks it is running a VirtualDomain resource, which is
already running on another node in the cluster (successful takeover).
executing pcs cluster start -
Apr 8 13:41:10 lnx0083a daemon:info lnx0083a
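One way to clear such stale resource state is a cleanup of the resource on the affected node; a sketch using crm_resource (the resource and node names here are placeholders, not taken from the thread):

$ crm_resource --cleanup --resource <vm-resource> --node <failed-node>
# or, equivalently, with the crm shell:
$ crm resource cleanup <vm-resource> <failed-node>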
hi,
any recommendation/documentation for a reliable fencing implementation on
a multi-node cluster (4 or 6 nodes across 2 sites)?
I am thinking of implementing multiple node-fencing devices for each host to
fence (STONITH) the remaining nodes on the other site?
thank you!
Philipp
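As a hedged sketch of per-host fencing devices combined into a fencing topology in the crm shell (the device type, addresses, credentials and the second, site-level device are assumptions, not from the thread; the host name is borrowed from another snippet above):

$ crm configure primitive st-ipmi-lnx0083a stonith:external/ipmi \
    params hostname=lnx0083a ipaddr=<bmc-ip> userid=<user> passwd=<secret>
# repeat one such primitive per host, then chain devices per node as fencing levels
$ crm configure fencing_topology \
    lnx0083a: st-ipmi-lnx0083a st-site2-fallback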
hi,
From: Dejan Muhamedagic deja...@fastmail.fm
To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
Date: 28.10.2014 16:45
Subject: Re: [Pacemaker] fencing with multiple node cluster
Hi,
On Tue, Oct 28, 2014 at 09:51:02AM -0400, Digimer wrote:
On
hi,
is it possible to set up different move types for VMs?
- infinity colocation with a pingd clone - when it fails on one node,
live-migrate the VM(s) to the remaining nodes
- infinity colocation with an LVM clone - when it fails on one node,
cold-migrate (stop/restart) the VM(s) on the remaining nodes
thank you!
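A rough sketch of the two constraint styles in the crm shell (the VM, pingd-clone and LVM-clone resource names are assumptions; whether a forced move becomes a live migration is governed by the VM resource's allow-migrate meta attribute):

$ crm configure colocation vm-with-ping inf: lnx0101a cl-pingd
$ crm configure colocation vm-with-lvm inf: lnx0101a cl-lvm
# without meta allow-migrate=true the VM is stopped on the old node and
# started on the new one instead of being live-migrated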
From: Andrew Beekhof and...@beekhof.net
To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
Date: 30.07.2014 10:54
Subject: Re: [Pacemaker] Pacemaker 1.1.12 - crm_mon email notification
On 30 Jul 2014, at 6:08 pm, philipp.achmuel...@arz.at wrote:
hi,
any ideas on the unrunnable problem?
That's expected: one can't run operations on a node which is offline.
I would expect a failover of the resources to node lnx0047b. Since
lnx0047a is stonith'ed, the resources should start on the remaining node.
any ideas on the stonith problem?
We'd need full
I removed the clone and set the global cluster property stonith-timeout.
The nodes need about 3-5 minutes to start up after they get shot.
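For reference, the global property can be set like this (the 300s value is an assumption; it should comfortably exceed the time the fencing action needs to complete):

$ crm configure property stonith-timeout=300s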
I did some more tests and found out that if the node which runs the resource
sbd_fence gets shot, the remaining node sees the stonith resource online
on both
hi,
the following configuration:
node lnx0047a
node lnx0047b
primitive lnx0101a ocf:heartbeat:KVM \
params name=lnx0101a \
meta allow-migrate=1 target-role=Started \
op migrate_from interval=0 timeout=3600s \
op migrate_to interval=0 timeout=3600s \
op monitor
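With allow-migrate set as above, a move of the VM can be requested manually; a sketch (node name taken from the node list above; the move leaves behind a cli-prefer constraint that should be cleared afterwards):

$ crm resource migrate lnx0101a lnx0047b
$ crm resource unmigrate lnx0101a   # remove the location constraint once the move is done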
I will be out of the office starting 23.03.2010 and can be reached again on
25.03.2010.
I will answer your message after my return. In urgent cases, please contact
my colleague Sammer Bernhard (ext. 1443) or the UNIX hotline at ext. 1444.
hi,
configuration and behavior:
$ crm configure show
node lnx0012a \
attributes standby=off
node lnx0012b \
attributes standby=on
primitive pingd ocf:heartbeat:pingd \
params host_list=10.1.236.100 multiplier=100 \
op monitor interval=15s timeout=20s
primitive