Hi
Thank you for your answer.
OK, I understand, but this causes problems for me. Example: when the
node holding the resource (and the constraint) reboots, the resource
does not move to the other node (because of this constraint; I see in
the debug logs that no node can hold the resource). As soon as I
On 2013-12-06T09:00:32, Gaëtan Slongo gslo...@it-optics.com wrote:
OK, I understand, but this causes problems for me. Example: when the
node holding the resource (and the constraint) reboots, the resource
does not move to the other node (because of this constraint; I see in
the debug logs that no
On 2013-12-06T08:55:47, Vladislav Bogdanov bub...@hoster-ok.com wrote:
BTW, the pacemaker cib accepts any meta attributes (and that is a very
convenient way for me to store some 'meta' information), while crmsh
limits them to a pre-defined list. While that is probably fine for
novices, that limits
Hi !
I know this is caused by the -inf, but I didn't explicitly create this
constraint... Pacemaker did it itself... :-(
This constraint is also created when the resource moves automatically.
Then, after a successful (and automatic) move, the resource is
blocked on the current node until I
On 2013-12-06T09:54:19, Gaëtan Slongo gslo...@it-optics.com wrote:
I know this is caused by the -inf, but I didn't explicitly create this
constraint... Pacemaker did it itself... :-(
No, it did this because you *asked it to*.
This constraint is also created when the resource moves
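For context, the constraint being discussed here is the location constraint that crmsh asks Pacemaker to create on a manual move, typically with an id like `cli-prefer-<resource>` or `cli-standby-<resource>`; it pins the resource until it is removed. A minimal sketch of inspecting and clearing it (the resource name is borrowed from elsewhere in this thread and is only illustrative):

```shell
# The manual-move constraint shows up as an ordinary location constraint,
# usually with an id starting with "cli-":
crm configure show | grep cli-

# "unmove" (renamed "clear" in later crmsh releases) removes that
# constraint so the cluster may place the resource freely again:
crm resource unmove res_dummy_1
```

Until the constraint is cleared, the resource will refuse to leave (or return to) the affected node, which matches the behaviour described above.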
06.12.2013 11:41, Lars Marowsky-Bree wrote:
On 2013-12-06T08:55:47, Vladislav Bogdanov bub...@hoster-ok.com wrote:
BTW, the pacemaker cib accepts any meta attributes (and that is a very
convenient way for me to store some 'meta' information), while crmsh
limits them to a pre-defined list. While
Hi Vladislav,
I used the advisory colocation below, but it's not working.
On 3 node setup:
I have configured all 3 resources in clone mode to start only on node1 and
node2 with a fail-count of only 1.
+++
+ crm configure primitive res_dummy_1 lsb:dummy_1 meta
I installed crmsh and configured it via crm commands.
best regards,
m.
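A hedged sketch of the setup described above (node names, the clone, and the constraint line are assumptions, since the truncated snippet shows only the primitive): `migration-threshold=1` makes a single failure push the instance off a node, and a `-inf` location constraint keeps the clone off the third node so it may run only on node1 and node2.

```shell
# Sketch only: one failure moves the instance away (migration-threshold=1),
# and a -inf location ban keeps the clone off the third node.
crm configure primitive res_dummy_1 lsb:dummy_1 \
    meta migration-threshold=1 \
    op monitor interval=30s
crm configure clone cl_dummy_1 res_dummy_1
crm configure location loc_dummy_1_ban_node3 cl_dummy_1 -inf: node3
```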
On 12/06/2013 12:05 PM, Bauer, Stefan (IZLBW Extern) wrote:
Any news on this? I'm facing the same issue.
Stefan
-----Original Message-----
From: Chris Feist [mailto:cfe...@redhat.com]
Sent: Tuesday, 3 December
I have a resource which updates DNS records (Amazon's Route53). When it
performs its `monitor` action, it can sometimes fail because of issues
with Amazon's API. So I want failures to be ignored for the monitor
action, and so I set `op monitor on-fail=ignore`. However, now when the
monitor action
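What the poster describes might look like the following crmsh sketch; the resource name is invented, and the stock `ocf:heartbeat:Dummy` agent stands in for the custom Route53 agent, which is not shown in the thread:

```shell
# Sketch: ignore monitor failures for a resource whose monitor depends on
# a flaky external API. ocf:heartbeat:Dummy stands in for the real agent.
crm configure primitive res_route53 ocf:heartbeat:Dummy \
    op monitor interval=60s on-fail=ignore
```

The catch, and apparently the subject of this thread, is that `on-fail=ignore` ignores every monitor result, including "not running", so a genuinely stopped resource is not restarted either.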
[ Hopefully this doesn't cause a duplicate post but my first attempt
returned an error. ]
Using pacemaker 1.1.10 (but I think this issue is more general than that
release), I want to enforce a policy that once a node fails, no
resources can be started/run on it until the user permits it.
I have
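One way to approximate such a policy (a sketch, not necessarily what the poster settled on): treat any failure as fatal for placement by setting `migration-threshold=1` cluster-wide and leaving `failure-timeout` unset, so the fail-count persists until an administrator clears it.

```shell
# Sketch: ban a node after a single resource failure until an admin acts.
# With no failure-timeout set, the fail-count never expires on its own.
crm configure rsc_defaults migration-threshold=1

# Later, once the operator has vetted the node, clear the fail-count to
# allow the resource back (the resource name is a placeholder):
crm resource cleanup res_dummy_1
```

Note that this keys off resource failures; for a whole-node failure, the usual pattern is fencing plus keeping the node in standby (`crm node standby`) until the operator brings it back online.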
Greetings,
This is to announce version 0.6.2 of Hawk, a web-based GUI for managing
and monitoring Pacemaker High-Availability clusters.
Notable features include:
- View cluster status (summary and detailed views).
- Examine potential failure scenarios via simulator mode.
- History explorer for
Dear all
I would like to configure stonith and found example like this:
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list=sv2836 sv2837 ipaddr=10.0.0.1 login=testuser passwd=acd123 op
monitor interval=60s
pcs -f
On Friday, 6 December 2013 at 10:11:07, Patrick Hemmer wrote:
I have a resource which updates DNS records (Amazon's Route53). When it
performs its `monitor` action, it can sometimes fail because of issues
with Amazon's API. So I want failures to be ignored for the monitor
action, and so I
On Friday, 6 December 2013 at 16:49:32, Dvorak Andreas wrote:
Dear all
I would like to configure stonith and found example like this:
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list=sv2836 sv2837
make two resources
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list=sv2836 ipaddr=10.0.0.1 login=testuser passwd=acd123 op
monitor interval=60s
pcs -f stonith_cfg stonith create impi-fencing fence_ipmilan
pcmk_host_list=sv2837 ipaddr=10.0.0.2 login=testuser
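The truncated suggestion above can be sketched in full. The host names, BMC addresses, and credentials come from the quoted snippets; the distinct resource names, the `avoids` constraints, and the final push are my own assumptions (the quoted snippet reuses the id `impi-fencing` for both devices, which would collide as a duplicate id):

```shell
# Sketch of the two-device approach: one IPMI fencing resource per host,
# each pointing at that host's own BMC address, staged in a shadow CIB.
pcs -f stonith_cfg stonith create ipmi-fencing-sv2836 fence_ipmilan \
    pcmk_host_list=sv2836 ipaddr=10.0.0.1 login=testuser passwd=acd123 \
    op monitor interval=60s
pcs -f stonith_cfg stonith create ipmi-fencing-sv2837 fence_ipmilan \
    pcmk_host_list=sv2837 ipaddr=10.0.0.2 login=testuser passwd=acd123 \
    op monitor interval=60s

# Optionally keep each device off the node it is meant to fence:
pcs -f stonith_cfg constraint location ipmi-fencing-sv2836 avoids sv2836
pcs -f stonith_cfg constraint location ipmi-fencing-sv2837 avoids sv2837

# Push the staged CIB into the live cluster when done:
pcs cluster cib-push stonith_cfg
```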
On 06/12/13 10:57, Michael Schwartzkopff wrote:
On Friday, 6 December 2013 at 16:49:32, Dvorak Andreas wrote:
Dear all
I would like to configure stonith and found example like this:
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith
pcs -f stonith_cfg stonith create impi-fencing
*From: *Michael Schwartzkopff m...@sys4.de
*Sent: * 2013-12-06 11:16:17 E
*To: *pacemaker@oss.clusterlabs.org
*Subject: *Re: [Pacemaker] monitor on-fail=ignore not restarting when
resource reported as stopped
Am
On 2013-12-06T11:21:02, Patrick Hemmer pacema...@feystorm.net wrote:
So where is the problem? If the script returns ERROR, then pacemaker
has to act accordingly.
If the script returns ERROR, `on-fail=ignore` should make it do
nothing. Amazon's API failed; we just need to retry
*From: *Lars Marowsky-Bree l...@suse.com
*Sent: * 2013-12-06 13:44:53 E
*To: *The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
*Subject: *Re: [Pacemaker] monitor on-fail=ignore not restarting when
I seem to have another instance where pacemaker fails to exit at the end
of a shutdown. Here's the log from the start of `service pacemaker
stop`:
Dec 3 13:00:39 wtm-60vm8 crmd[14076]: notice: do_state_transition: State
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS