Re: [ClusterLabs] group resources without order behavior / monitor timeout smaller than interval?

2016-09-20, Dejan Muhamedagic
Hi,

On Wed, Sep 14, 2016 at 02:41:10PM -0500, Ken Gaillot wrote:
> On 09/14/2016 03:01 AM, Stefan Bauer wrote:
> > Hi,
> > 
> > I'm trying to understand some cluster internals and would be happy to
> > get some best practice recommendations:
> > 
> > monitor interval and timeout: shouldn't the timeout value always be
> > smaller than the interval, to avoid starting another check while the
> > first one is still running?
> 
> The cluster handles it intelligently. If the previous monitor is still
> in progress when the interval expires, it won't run another one.

The lrmd which got replaced would schedule the next monitor
operation only once the current monitor operation finished, hence
the timeout value was essentially irrelevant. Is that still the
case with the new lrmd?

> It certainly makes sense that the timeout will generally be smaller than
> the interval, but there may be cases where a monitor on rare occasions
> takes a long time, and the user wants the high timeout for those
> occasions, but a shorter interval that will be used most of the time.

Just to add that there's a tendency to make monitor intervals
quite short, often without taking a good look at the nature of
the resource.
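
For illustration, a monitor with a moderate interval and a deliberately
generous timeout could be declared like this in crm shell syntax (a
hypothetical sketch; the resource name and config path are invented and
not taken from the configuration below):

primitive r_vm_example ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/example.xml" \
    op monitor interval=60s timeout=120s

Here the check runs once a minute, and the 120s timeout only matters on
the rare occasions when a check is slow, as Ken describes above.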

Thanks,

Dejan

> > Additionally I would like to use the group function to put all my VMs
> > (ocf:heartbeat:VirtualDomain) in one group and colocate the group with
> > the VIP and my LVM volume. Unfortunately, a group starts its resources
> > in the listed order, so if I stop one VM, the following VMs are also
> > stopped.
> > 
> > Right now I have the following configuration and want to make it
> > less redundant:
> 
> You can use one ordering constraint and one colocation constraint, each
> with two resource sets, one containing the IP and volume with
> sequential=true, and the other containing the VMs with sequential=false.
> See:
> 
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-sets
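
Applied to the configuration quoted below, that pair of constraints
might look like the following in crm shell syntax (a sketch only: the
constraint names are invented, the parentheses are assumed to denote a
sequential=false resource set, and dependent resources are assumed to
come first in a colocation; verify the generated XML with
"crm configure show xml" before relying on it):

# storage first, then the VIP, then all VMs in any order
order ord_storage_vip_vms inf: r_lvm_vg-storage r_Failover_IP \
    ( r_vm_ado01 r_vm_bar01 r_vm_cmt01 r_vm_con01 r_vm_con02 \
      r_vm_dsm01 r_vm_jir01 r_vm_jir02 r_vm_prx02 r_vm_src01 )

# each VM is placed with the VIP and storage independently of the
# other VMs; the VIP in turn stays with the storage
colocation col_vms_vip_storage inf: \
    ( r_vm_ado01 r_vm_bar01 r_vm_cmt01 r_vm_con01 r_vm_con02 \
      r_vm_dsm01 r_vm_jir01 r_vm_jir02 r_vm_prx02 r_vm_src01 ) \
    r_Failover_IP r_lvm_vg-storage

With sequential=false, stopping one VM no longer forces the VMs listed
after it to stop, which was the original complaint about groups.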
> 
> 
> > 
> > # never let a stonith resource run on the node it is meant to fence
> > 
> > location l_st_srv20 st_ipmi_srv20 -inf: srv20
> > location l_st_srv21 st_ipmi_srv21 -inf: srv21
> > 
> > 
> > # do not run resources on the quorum-only node
> > location loc_r_lvm_vg-storage_quorum_only_node r_lvm_vg-storage -inf:
> > quorum_only_node
> > location loc_r_vm_ado01_quorum_only_node r_vm_ado01 -inf: quorum_only_node
> > location loc_r_vm_bar01_quorum_only_node r_vm_bar01 -inf: quorum_only_node
> > location loc_r_vm_cmt01_quorum_only_node r_vm_cmt01 -inf: quorum_only_node
> > location loc_r_vm_con01_quorum_only_node r_vm_con01 -inf: quorum_only_node
> > location loc_r_vm_con02_quorum_only_node r_vm_con02 -inf: quorum_only_node
> > location loc_r_vm_dsm01_quorum_only_node r_vm_dsm01 -inf: quorum_only_node
> > location loc_r_vm_jir01_quorum_only_node r_vm_jir01 -inf: quorum_only_node
> > location loc_r_vm_jir02_quorum_only_node r_vm_jir02 -inf: quorum_only_node
> > location loc_r_vm_prx02_quorum_only_node r_vm_prx02 -inf: quorum_only_node
> > location loc_r_vm_src01_quorum_only_node r_vm_src01 -inf: quorum_only_node
> > 
> > 
> > # colocate ip with lvm storage
> > colocation col_r_Failover_IP_r_lvm_vg-storage inf: r_Failover_IP
> > r_lvm_vg-storage
> > 
> > 
> > # colocate each VM with lvm storage
> > colocation col_r_vm_ado01_r_lvm_vg-storage inf: r_vm_ado01 r_lvm_vg-storage
> > colocation col_r_vm_bar01_r_lvm_vg-storage inf: r_vm_bar01 r_lvm_vg-storage
> > colocation col_r_vm_cmt01_r_lvm_vg-storage inf: r_vm_cmt01 r_lvm_vg-storage
> > colocation col_r_vm_con01_r_lvm_vg-storage inf: r_vm_con01 r_lvm_vg-storage
> > colocation col_r_vm_con02_r_lvm_vg-storage inf: r_vm_con02 r_lvm_vg-storage
> > colocation col_r_vm_dsm01_r_lvm_vg-storage inf: r_vm_dsm01 r_lvm_vg-storage
> > colocation col_r_vm_jir01_r_lvm_vg-storage inf: r_vm_jir01 r_lvm_vg-storage
> > colocation col_r_vm_jir02_r_lvm_vg-storage inf: r_vm_jir02 r_lvm_vg-storage
> > colocation col_r_vm_prx02_r_lvm_vg-storage inf: r_vm_prx02 r_lvm_vg-storage
> > colocation col_r_vm_src01_r_lvm_vg-storage inf: r_vm_src01 r_lvm_vg-storage
> > 
> > # start lvm storage before VIP
> > 
> > order ord_r_lvm_vg-storage_r_Failover_IP inf: r_lvm_vg-storage r_Failover_IP
> > 
> > 
> > # start lvm storage before each VM
> > order ord_r_lvm_vg-storage_r_vm_ado01 inf: r_lvm_vg-storage r_vm_ado01
> > order ord_r_lvm_vg-storage_r_vm_bar01 inf: r_lvm_vg-storage r_vm_bar01
> > order ord_r_lvm_vg-storage_r_vm_cmt01 inf: r_lvm_vg-storage r_vm_cmt01
> > order ord_r_lvm_vg-storage_r_vm_con01 inf: r_lvm_vg-storage r_vm_con01
> > order ord_r_lvm_vg-storage_r_vm_con02 inf: r_lvm_vg-storage r_vm_con02
> > order ord_r_lvm_vg-storage_r_vm_dsm01 inf: r_lvm_vg-storage r_vm_dsm01
> > order ord_r_lvm_vg-storage_r_vm_jir01 inf: r_lvm_vg-storage r_vm_jir01
> > order ord_r_lvm_vg-storage_r_vm_jir02 inf: r_lvm_vg-storage r_vm_jir02
> > order ord_r_lvm_vg-storage_r_vm_prx02 inf: r_lvm_vg-storage r_vm_prx02
> > order ord_r_lvm_vg-storage_r_vm_src01 inf: r_lvm_vg-storage r_vm_src01
> > 
> > 

Re: [ClusterLabs] group resources without order behavior / monitor timeout smaller than interval?

2016-09-15, Stefan Bauer
Hi Ken,

thank you very much. I will test it!

Cheers,

Stefan

[ClusterLabs] group resources without order behavior / monitor timeout smaller than interval?

2016-09-14, Stefan Bauer
Hi,

I'm trying to understand some cluster internals and would be happy to get some 
best practice recommendations:


monitor interval and timeout: shouldn't the timeout value always be smaller
than the interval, to avoid starting another check while the first one is
still running?

Additionally I would like to use the group function to put all my VMs
(ocf:heartbeat:VirtualDomain) in one group and colocate the group with the VIP
and my LVM volume. Unfortunately, a group starts its resources in the listed
order, so if I stop one VM, the following VMs are also stopped.

Right now I have the following configuration and want to make it less
redundant:


# never let a stonith resource run on the node it is meant to fence

location l_st_srv20 st_ipmi_srv20 -inf: srv20
location l_st_srv21 st_ipmi_srv21 -inf: srv21


# do not run resources on the quorum-only node
location loc_r_lvm_vg-storage_quorum_only_node r_lvm_vg-storage -inf: 
quorum_only_node
location loc_r_vm_ado01_quorum_only_node r_vm_ado01 -inf: quorum_only_node
location loc_r_vm_bar01_quorum_only_node r_vm_bar01 -inf: quorum_only_node
location loc_r_vm_cmt01_quorum_only_node r_vm_cmt01 -inf: quorum_only_node
location loc_r_vm_con01_quorum_only_node r_vm_con01 -inf: quorum_only_node
location loc_r_vm_con02_quorum_only_node r_vm_con02 -inf: quorum_only_node
location loc_r_vm_dsm01_quorum_only_node r_vm_dsm01 -inf: quorum_only_node
location loc_r_vm_jir01_quorum_only_node r_vm_jir01 -inf: quorum_only_node
location loc_r_vm_jir02_quorum_only_node r_vm_jir02 -inf: quorum_only_node
location loc_r_vm_prx02_quorum_only_node r_vm_prx02 -inf: quorum_only_node
location loc_r_vm_src01_quorum_only_node r_vm_src01 -inf: quorum_only_node


# colocate ip with lvm storage
colocation col_r_Failover_IP_r_lvm_vg-storage inf: r_Failover_IP 
r_lvm_vg-storage


# colocate each VM with lvm storage
colocation col_r_vm_ado01_r_lvm_vg-storage inf: r_vm_ado01 r_lvm_vg-storage
colocation col_r_vm_bar01_r_lvm_vg-storage inf: r_vm_bar01 r_lvm_vg-storage
colocation col_r_vm_cmt01_r_lvm_vg-storage inf: r_vm_cmt01 r_lvm_vg-storage
colocation col_r_vm_con01_r_lvm_vg-storage inf: r_vm_con01 r_lvm_vg-storage
colocation col_r_vm_con02_r_lvm_vg-storage inf: r_vm_con02 r_lvm_vg-storage
colocation col_r_vm_dsm01_r_lvm_vg-storage inf: r_vm_dsm01 r_lvm_vg-storage
colocation col_r_vm_jir01_r_lvm_vg-storage inf: r_vm_jir01 r_lvm_vg-storage
colocation col_r_vm_jir02_r_lvm_vg-storage inf: r_vm_jir02 r_lvm_vg-storage
colocation col_r_vm_prx02_r_lvm_vg-storage inf: r_vm_prx02 r_lvm_vg-storage
colocation col_r_vm_src01_r_lvm_vg-storage inf: r_vm_src01 r_lvm_vg-storage


# start lvm storage before VIP

order ord_r_lvm_vg-storage_r_Failover_IP inf: r_lvm_vg-storage r_Failover_IP


# start lvm storage before each VM
order ord_r_lvm_vg-storage_r_vm_ado01 inf: r_lvm_vg-storage r_vm_ado01
order ord_r_lvm_vg-storage_r_vm_bar01 inf: r_lvm_vg-storage r_vm_bar01
order ord_r_lvm_vg-storage_r_vm_cmt01 inf: r_lvm_vg-storage r_vm_cmt01
order ord_r_lvm_vg-storage_r_vm_con01 inf: r_lvm_vg-storage r_vm_con01
order ord_r_lvm_vg-storage_r_vm_con02 inf: r_lvm_vg-storage r_vm_con02
order ord_r_lvm_vg-storage_r_vm_dsm01 inf: r_lvm_vg-storage r_vm_dsm01
order ord_r_lvm_vg-storage_r_vm_jir01 inf: r_lvm_vg-storage r_vm_jir01
order ord_r_lvm_vg-storage_r_vm_jir02 inf: r_lvm_vg-storage r_vm_jir02
order ord_r_lvm_vg-storage_r_vm_prx02 inf: r_lvm_vg-storage r_vm_prx02
order ord_r_lvm_vg-storage_r_vm_src01 inf: r_lvm_vg-storage r_vm_src01

Any help is greatly appreciated.

Thank you.

Stefan

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org