Re: [Linux-HA] remove resource WITHOUT moving the other resources

2011-09-22 Thread Andrew Beekhof
On Sun, Aug 14, 2011 at 11:04 PM, Julian D. Seifert
ala...@julian-seifert.de wrote:
 Hi,

 Thank you for your response; I have some follow-up questions.

 Now what I am looking for is a way to completely delete/remove
 openvzve_itv without affecting the other resources.

 is-managed-default=false and/or resource-stickiness=INFINITY

 When trying to use the first method, setting is-managed-default to
 false, how would I go about that? I think I'd have to run crm configure
 property is-managed-default=false

Right

 (How would I list all set properties?)

crm configure show

 After that I should be able to delete the stuff from the
 configuration without it causing Pacemaker to act on it, right?

Right
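
Roughly, something like this (untested sketch, adjust the names to your
config; I'd commit the property change first and do the delete as a
second step):

  # 1. tell pacemaker not to act on any resource
  crm configure property is-managed-default=false
  crm configure show                        # double-check what is set

  # 2. remove the primitive and its entry in the group, preview, commit
  crm configure
  crm(live)configure# edit
  crm(live)configure# ptest vvv nograph
  crm(live)configure# commit

  # 3. once openvzve_itv is gone, hand control back
  crm configure property is-managed-default=true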

 I noticed that merely deleting openvzve_itv's entry from the group
 causes the cluster to want to move all resources to the other node
 (even after setting said property).

That doesn't sound right.  Can you send logs if you still have them?

 I'm curious as to why the cluster would decide to do this in the first
 place, and how it decides that this would be a good idea.

 regards,

  Julian D. Seifert


___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] remove resource WITHOUT moving the other resources

2011-08-14 Thread Julian D. Seifert
Hi,

Thank you for your response; I have some follow-up questions.

 Now what I am looking for is a way to completely delete/remove
 openvzve_itv without affecting the other resources.
 
 is-managed-default=false and/or resource-stickiness=INFINITY
 
When trying to use the first method, setting is-managed-default to
false, how would I go about that? I think I'd have to run crm configure
property is-managed-default=false. (How would I list all set properties?)
After that I should be able to delete the stuff from the
configuration without it causing Pacemaker to act on it, right? I
noticed that merely deleting openvzve_itv's entry from the group
causes the cluster to want to move all resources to the other node
(even after setting said property).
I'm curious as to why the cluster would decide to do this in the first
place, and how it decides that this would be a good idea.
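
In case it helps with the "why" part, I was going to look at the
allocation scores, something like (if I remember the ptest flags
correctly):

  ptest -L -s | grep -i openvz

to see which scores push the resources towards the other node.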

regards,

  Julian D. Seifert


Re: [Linux-HA] remove resource WITHOUT moving the other resources

2011-08-11 Thread Andrew Beekhof
On Thu, Aug 11, 2011 at 5:28 AM,  ala...@julian-seifert.de wrote:
 Hi List,

 I have a little problem with my 2-node Pacemaker cluster (active/passive
 setup): vpsnode01-rz (currently active) and vpsnode01-nk (passive).
 It's a bunch of OpenVZ containers grouped together and colocated with
 the DRBD master resource. I have a problem with a newly created
 container, openvzve_itv:
 1. It's recognized as running on the passive node (which it is NOT, and
 that wouldn't be possible anyway, as the VEs are located on the DRBD
 storage backends).

Then the agent is broken.
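
Worth running the agent by hand on the passive node, e.g. with something
like (path and the veid parameter name are from memory, adjust to
whatever your primitive actually defines):

  ocf-tester -n openvzve_itv -o veid=<VEID> \
      /usr/lib/ocf/resource.d/heartbeat/ManageVE

If the probe (monitor) reports a container as running on a node where it
isn't, you end up with exactly this kind of bogus state.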

 2. I am NOT able to remove the openvzve_itv resource WITHOUT ptest
 suggesting that EVERY other resource will switch nodes. To clarify what I
 just described, I pasted the crm_mon output and the current configuration:
 http://pastebin.ubuntu.com/653023/

 Now what I am looking for is a way to completely delete/remove
 openvzve_itv without affecting the other resources.

is-managed-default=false and/or resource-stickiness=INFINITY
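
i.e. something like (from memory, pick either or both):

  # don't let pacemaker act on any resource while you edit
  crm configure property is-managed-default=false

  # and/or make every resource strongly prefer to stay put
  crm configure rsc_defaults resource-stickiness=INFINITY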

 I tried deleting
 everything in the configuration (crm configure edit) related to
 openvzve_itv (namely the resource itself and its entry in the openvz
 group), but as I mentioned earlier, when running ptest vvv nograph after
 that (and before committing, of course) it suggests that ALL resources
 will get moved.

 PTEST:

 crm(live)configure# ptest vvv nograph
 ptest[7623]: 2011/08/10_21:27:50 notice: unpack_config: On loss of CCM
 Quorum: Ignore
 ptest[7623]: 2011/08/10_21:27:50 notice: unpack_rsc_op: Hard error -
 openvzve_itv_start_0 failed with rc=5: Preventing openvzve_itv from
 re-starting on vpsnode01-nk
 ptest[7623]: 2011/08/10_21:27:50 WARN: unpack_rsc_op: Processing failed op
 openvzve_itv_start_0 on vpsnode01-nk: not installed (5)
 ptest[7623]: 2011/08/10_21:27:50 WARN: unpack_rsc_op: Processing failed op
 openvzve_itv_stop_0 on vpsnode01-nk: unknown error (1)
 ptest[7623]: 2011/08/10_21:27:50 notice: clone_print:  Clone Set: connected
 ptest[7623]: 2011/08/10_21:27:50 notice: short_print:      Started: [
 vpsnode01-nk vpsnode01-rz ]
 ptest[7623]: 2011/08/10_21:27:50 notice: group_print:  Resource Group: openvz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print:      fs_openvz
 (ocf::heartbeat:Filesystem):    Started vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print:      ip_openvz
 (ocf::heartbeat:IPaddr2):       Started vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print:
 openvzve_plesk953   (ocf::heartbeat:ManageVE):      Started vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print:
 openvzve_stream3    (ocf::heartbeat:ManageVE):      Started vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print:      openvzve_mail
     (ocf::heartbeat:ManageVE):      Started vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print:      openvzve_dns1
     (ocf::heartbeat:ManageVE):      Started vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 notice: native_print: openvzve_itv
 (ocf::heartbeat:ManageVE):      Started vpsnode01-nk (unmanaged) FAILED
 ptest[7623]: 2011/08/10_21:27:50 notice: clone_print:  Master/Slave Set:
 ms_drbd_openvz
 ptest[7623]: 2011/08/10_21:27:50 notice: short_print:      Masters: [
 vpsnode01-rz ]
 ptest[7623]: 2011/08/10_21:27:50 notice: short_print:      Slaves: [
 vpsnode01-nk ]
 ptest[7623]: 2011/08/10_21:27:50 notice: common_apply_stickiness:
 openvzve_plesk953 can fail 99 more times on vpsnode01-rz before being
 forced off
 ptest[7623]: 2011/08/10_21:27:50 WARN: common_apply_stickiness: Forcing
 openvzve_itv away from vpsnode01-nk after 100 failures (max=100)
 ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
 monitor (10s) for openvzve_plesk953 on vpsnode01-nk
 ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
 monitor (10s) for openvzve_stream3 on vpsnode01-nk
 ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
 monitor (10s) for openvzve_mail on vpsnode01-nk
 ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
 monitor (10s) for openvzve_dns1 on vpsnode01-nk
 ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
 monitor (20s) for drbd_openvz:1 on vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
 Creating boundaries for ms_drbd_openvz
 ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
 Creating boundaries for ms_drbd_openvz
 ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
 monitor (20s) for drbd_openvz:1 on vpsnode01-rz
 ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
 Creating boundaries for ms_drbd_openvz
 ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
 Creating boundaries for ms_drbd_openvz
 ptest[7623]: 2011/08/10_21:27:50 notice: LogActions: Leave resource ping:0
     (Started vpsnode01-nk)
 ptest[7623]: 2011/08/10_21:27:50 notice: LogActions: Leave 
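
One more thing: the cluster has recorded failed start/stop ops for
openvzve_itv on vpsnode01-nk, so once the agent is fixed it is probably
worth clearing those too, something along the lines of:

  crm resource cleanup openvzve_itv

(syntax from memory, it may differ slightly depending on your crm shell
version).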

[Linux-HA] remove resource WITHOUT moving the other resources

2011-08-10 Thread alamar
Hi List,

I have a little problem with my 2-node Pacemaker cluster (active/passive
setup): vpsnode01-rz (currently active) and vpsnode01-nk (passive).
It's a bunch of OpenVZ containers grouped together and colocated with
the DRBD master resource. I have a problem with a newly created
container, openvzve_itv:
1. It's recognized as running on the passive node (which it is NOT, and
that wouldn't be possible anyway, as the VEs are located on the DRBD
storage backends).
2. I am NOT able to remove the openvzve_itv resource WITHOUT ptest
suggesting that EVERY other resource will switch nodes. To clarify what I
just described, I pasted the crm_mon output and the current configuration:
http://pastebin.ubuntu.com/653023/

Now what I am looking for is a way to completely delete/remove
openvzve_itv without affecting the other resources. I tried deleting
everything in the configuration (crm configure edit) related to
openvzve_itv (namely the resource itself and its entry in the openvz
group), but as I mentioned earlier, when running ptest vvv nograph after
that (and before committing, of course) it suggests that ALL resources
will get moved.
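
Concretely, the sequence in the crm shell was roughly (reconstructed
from memory):

  crm configure
  crm(live)configure# edit               # removed primitive openvzve_itv
                                         # and its entry in group openvz
  crm(live)configure# ptest vvv nograph  # preview only, nothing committed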

PTEST:

crm(live)configure# ptest vvv nograph
ptest[7623]: 2011/08/10_21:27:50 notice: unpack_config: On loss of CCM
Quorum: Ignore
ptest[7623]: 2011/08/10_21:27:50 notice: unpack_rsc_op: Hard error -
openvzve_itv_start_0 failed with rc=5: Preventing openvzve_itv from
re-starting on vpsnode01-nk
ptest[7623]: 2011/08/10_21:27:50 WARN: unpack_rsc_op: Processing failed op
openvzve_itv_start_0 on vpsnode01-nk: not installed (5)
ptest[7623]: 2011/08/10_21:27:50 WARN: unpack_rsc_op: Processing failed op
openvzve_itv_stop_0 on vpsnode01-nk: unknown error (1)
ptest[7623]: 2011/08/10_21:27:50 notice: clone_print:  Clone Set: connected
ptest[7623]: 2011/08/10_21:27:50 notice: short_print:  Started: [
vpsnode01-nk vpsnode01-rz ]
ptest[7623]: 2011/08/10_21:27:50 notice: group_print:  Resource Group: openvz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print:  fs_openvz
(ocf::heartbeat:Filesystem):    Started vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print:  ip_openvz  
(ocf::heartbeat:IPaddr2):   Started vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print: 
openvzve_plesk953   (ocf::heartbeat:ManageVE):  Started vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print:
openvzve_stream3    (ocf::heartbeat:ManageVE):  Started vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print:  openvzve_mail 
 (ocf::heartbeat:ManageVE):  Started vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print:  openvzve_dns1 
 (ocf::heartbeat:ManageVE):  Started vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 notice: native_print: openvzve_itv
(ocf::heartbeat:ManageVE):  Started vpsnode01-nk (unmanaged) FAILED
ptest[7623]: 2011/08/10_21:27:50 notice: clone_print:  Master/Slave Set:
ms_drbd_openvz
ptest[7623]: 2011/08/10_21:27:50 notice: short_print:  Masters: [
vpsnode01-rz ]
ptest[7623]: 2011/08/10_21:27:50 notice: short_print:  Slaves: [
vpsnode01-nk ]
ptest[7623]: 2011/08/10_21:27:50 notice: common_apply_stickiness:
openvzve_plesk953 can fail 99 more times on vpsnode01-rz before being
forced off
ptest[7623]: 2011/08/10_21:27:50 WARN: common_apply_stickiness: Forcing
openvzve_itv away from vpsnode01-nk after 100 failures (max=100)
ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
monitor (10s) for openvzve_plesk953 on vpsnode01-nk
ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
monitor (10s) for openvzve_stream3 on vpsnode01-nk
ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
monitor (10s) for openvzve_mail on vpsnode01-nk
ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
monitor (10s) for openvzve_dns1 on vpsnode01-nk
ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
monitor (20s) for drbd_openvz:1 on vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
Creating boundaries for ms_drbd_openvz
ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
Creating boundaries for ms_drbd_openvz
ptest[7623]: 2011/08/10_21:27:50 notice: RecurringOp:  Start recurring
monitor (20s) for drbd_openvz:1 on vpsnode01-rz
ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
Creating boundaries for ms_drbd_openvz
ptest[7623]: 2011/08/10_21:27:50 ERROR: create_notification_boundaries:
Creating boundaries for ms_drbd_openvz
ptest[7623]: 2011/08/10_21:27:50 notice: LogActions: Leave resource ping:0
 (Started vpsnode01-nk)
ptest[7623]: 2011/08/10_21:27:50 notice: LogActions: Leave resource ping:1
 (Started vpsnode01-rz)
ptest[7623]: 2011/08/10_21:27:50 notice: LogActions: Move resource
fs_openvz    (Started vpsnode01-rz -> vpsnode01-nk)
ptest[7623]: 2011/08/10_21:27:50 notice: LogActions: Move