Re: [Linux-HA] Problem with kvm virtual machine and cluster
On Thu, Aug 11, 2011 at 04:04:36PM +1000, Andrew Beekhof wrote:
> On Wed, Aug 10, 2011 at 11:15 PM, Maloja01 <maloj...@arcor.de> wrote:
>> The order constraints do work as I assumed, but I guess you ran into a
>> pitfall: a clone is marked as up as soon as one instance anywhere in the
>> cluster has started successfully. The order constraint does not say that
>> the clone on the *same node* must be up. Use a colocation constraint for
>> that.
>>
>> Kind regards,
>> Fabian
>>
>> On 08/10/2011 01:43 PM, i...@umbertocarrara.it wrote:
>>> [full post with crm_mon output, configuration, and logs trimmed]
>>>
>>> colocation SambaOnBackEndClone inf: Samba BackEndClone
>>> order SambaBeforeBackEndClone inf: BackEndClone Samba

I think you want to reverse those to do what their ids imply:

colocation SambaOnBackEndClone inf: BackEndClone Samba
order SambaBeforeBackEndClone inf: Samba BackEndClone
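Putting Fabian's colocation advice together with constraint ids that match their meaning, the intended pair of constraints for this setup would look roughly like the fragment below. This is a sketch in crm shell syntax based on the config quoted in this thread; the order-constraint id is renamed here to match its direction, which is an editorial choice, not something from the thread:

```
# Keep the VM on a node where the clone (open-iscsi + libvirt) is running...
colocation SambaOnBackEndClone inf: Samba BackEndClone
# ...and start that clone instance before the VM (stop happens in reverse).
order BackEndCloneBeforeSamba inf: BackEndClone Samba
```

Note the operand conventions: in a colocation the dependent resource comes first (Samba is placed relative to BackEndClone), while an order constraint lists resources in start sequence (BackEndClone starts before Samba).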
Re: [Linux-HA] Problem with kvm virtual machine and cluster
On Wed, Aug 10, 2011 at 11:15 PM, Maloja01 <maloj...@arcor.de> wrote:
> The order constraints do work as I assumed, but I guess you ran into a
> pitfall: a clone is marked as up as soon as one instance anywhere in the
> cluster has started successfully. The order constraint does not say that
> the clone on the *same node* must be up. Use a colocation constraint for
> that.
>
> Kind regards,
> Fabian
>
> On 08/10/2011 01:43 PM, i...@umbertocarrara.it wrote:
>> [full post with crm_mon output, configuration, and logs trimmed]
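As context for the config quoted in this thread: a crm `group` is shorthand that both orders and colocates its members, so `group BackEnd Iscsi Virsh` behaves approximately like the explicit constraints below. This is an illustrative expansion only (the constraint ids are invented here), not something anyone in the thread ran:

```
# Within a group, each member starts after, and runs on the same node as,
# the previous member: open-iscsi first, then libvirt.
order Iscsi-before-Virsh inf: Iscsi Virsh
colocation Virsh-with-Iscsi inf: Virsh Iscsi
```

That is why the group alone already guarantees the iSCSI initiator is up before libvirt on each node; the remaining question in the thread is only how the Samba VM relates to the clone of that group.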
[Linux-HA] Problem with kvm virtual machine and cluster
Hi, excuse my poor English (I use Google to help with translation) and I am a
newbie at clustering :-).

I'm trying to set up a three-node cluster for virtualization. I used the
how-to at http://www.linbit.com/support/ha-kvm.pdf to configure the cluster.
The VM volumes are shared from an Openfiler cluster over iSCSI, and that part
works well: the VMs start fine on the hosts when I run them outside the
cluster. The problem is that the VMs start before libvirt and the open-iscsi
initiator. I have set an order rule, but it does not seem to work. Then, once
the services are started, the cluster cannot restart the machine.

The output of crm_mon -1 is:

Last updated: Wed Aug 10 12:40:20 2011
Stack: openais
Current DC: host1 - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
3 Nodes configured, 3 expected votes
2 Resources configured.

Online: [ host1 host2 host3 ]

Clone Set: BackEndClone
        Started: [ host1 host2 host3 ]
Samba   (ocf::heartbeat:VirtualDomain)  Started [ host1 host2 host3 ]

Failed actions:
    Samba_monitor_0 (node=host1, call=15, rc=1, status=complete): unknown error
    Samba_stop_0 (node=host1, call=16, rc=1, status=complete): unknown error
    Samba_monitor_0 (node=host2, call=12, rc=1, status=complete): unknown error
    Samba_stop_0 (node=host2, call=13, rc=1, status=complete): unknown error
    Samba_monitor_0 (node=host3, call=12, rc=1, status=complete): unknown error
    Samba_stop_0 (node=host3, call=13, rc=1, status=complete): unknown error

This is my cluster config:

root@host1:~# crm configure show
node host1 \
        attributes standby=on
node host2 \
        attributes standby=on
node host3 \
        attributes standby=on
primitive Iscsi lsb:open-iscsi \
        op monitor interval=30
primitive Samba ocf:heartbeat:VirtualDomain \
        params config=/etc/libvirt/qemu/samba.iso.xml \
        meta allow-migrate=true \
        op monitor interval=30
primitive Virsh lsb:libvirt-bin \
        op monitor interval=30
group BackEnd Iscsi Virsh
clone BackEndClone BackEnd \
        meta target-role=Started
colocation SambaOnBackEndClone inf: Samba BackEndClone
order SambaBeforeBackEndClone inf: BackEndClone Samba
property $id=cib-bootstrap-options \
        dc-version=1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b \
        cluster-infrastructure=openais \
        expected-quorum-votes=3 \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        default-action-timeout=100 \
        last-lrm-refresh=1312970592
rsc_defaults $id=rsc-options \
        resource-stickiness=200

My log is:

Aug 10 13:36:34 host1 pengine: [1923]: info: get_failcount: Samba has failed INFINITY times on host1
Aug 10 13:36:34 host1 pengine: [1923]: WARN: common_apply_stickiness: Forcing Samba away from host1 after 100 failures (max=100)
Aug 10 13:36:34 host1 pengine: [1923]: info: get_failcount: Samba has failed INFINITY times on host2
Aug 10 13:36:34 host1 pengine: [1923]: WARN: common_apply_stickiness: Forcing Samba away from host2 after 100 failures (max=100)
Aug 10 13:36:34 host1 pengine: [1923]: info: get_failcount: Samba has failed INFINITY times on host3
Aug 10 13:36:34 host1 pengine: [1923]: WARN: common_apply_stickiness: Forcing Samba away from host3 after 100 failures (max=100)
Aug 10 13:36:34 host1 pengine: [1923]: info: native_merge_weights: BackEndClone: Rolling back scores from Samba
Aug 10 13:36:34 host1 pengine: [1923]: info: native_color: Unmanaged resource Samba allocated to 'nowhere': failed
Aug 10 13:36:34 host1 pengine: [1923]: WARN: native_create_actions: Attempting recovery of resource Samba
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Iscsi:0 (Started host1)
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Virsh:0 (Started host1)
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Iscsi:1 (Started host2)
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Virsh:1 (Started host2)
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Iscsi:2 (Started host3)
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Virsh:2 (Started host3)
Aug 10 13:36:34 host1 pengine: [1923]: notice: LogActions: Leave resource Samba (Started unmanaged)

--
mediaus around the web
Via Romana, 2143 - 55100 Antraccoli - LUCCA
Web: http://www.mediaus.it
Tel.: +39 (0) 583 493745/464501  Mob.: +39 349.5422881  Fax.: +39 (0) 583 471420
Mailto: i...@mediaus.it

___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
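Once the underlying constraint problem is fixed, the INFINITY failcounts shown in these logs still have to be cleared before Pacemaker will place Samba anywhere again. A sketch of how that is typically done with the crm shell, assuming the crmsh that ships with a Pacemaker 1.0 cluster like this one (resource and node names taken from the post; run on any cluster node):

```
# Clear Samba's failed-action history and failcounts on every node
crm resource cleanup Samba

# Or inspect and reset the failcount on one node explicitly
crm resource failcount Samba show host1
crm resource failcount Samba delete host1
```

Until the failcounts are cleared, `common_apply_stickiness` will keep forcing Samba away from all three nodes, which is why the resource ends up "allocated to 'nowhere'" in the pengine log above.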
Re: [Linux-HA] Problem with kvm virtual machine and cluster
The order constraints do work as I assumed, but I guess you ran into a
pitfall: a clone is marked as up as soon as one instance anywhere in the
cluster has started successfully. The order constraint does not say that the
clone on the *same node* must be up.

Kind regards,
Fabian

On 08/10/2011 01:43 PM, i...@umbertocarrara.it wrote:
> [full post with crm_mon output, configuration, and logs trimmed]