Re: [ClusterLabs] IPaddr2 cluster-ip restarts on all nodes after failover
Thank you! That did the trick.
/Jocke

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Re: [ClusterLabs] IPaddr2 cluster-ip restarts on all nodes after failover
On 01/06/2016 02:40 PM, Joakim Hansson wrote:
> Hi list!
> I'm running a 3-node vm-cluster in which all the nodes run Tomcat (Solr)
> from the same disk using GFS2. On top of this I use an IPaddr2 clone for
> the cluster IP and load balancing between all the nodes.
>
> Everything works fine, except when I perform a failover on one node.
> When node01 shuts down, node02 takes over its ipaddr-clone. So far so good.
> The thing is, when I fire up node01 again, all the ipaddr-clones on all
> nodes restart and thereby mess up Tomcat.

You want interleave=true on your clones.

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_clone_options

> [full configuration snipped; see the original message below]
>
> Any help is greatly appreciated.
>
> Thanks in advance
> /Jocke
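The suggested fix can be applied to the existing clones with pcs. A sketch, assuming the clone names from the configuration below and a pcs version in the 0.9.x line (the exact subcommand syntax may differ in other releases):

```sh
# Set interleave=true on each clone. With interleave=true, ordering
# constraints between clones are satisfied per-node: a clone instance
# only depends on the peer instance on its own node, so a node
# rejoining the cluster no longer forces dependent clone instances
# on the *other* nodes to restart.
pcs resource update dlm-clone meta interleave=true
pcs resource update GFS2-clone meta interleave=true
pcs resource update Tomcat-clone meta interleave=true
pcs resource update ClusterIP-clone meta interleave=true
```

After updating, `pcs resource show <clone-id>` should list interleave=true among the clone's meta attributes, and a test failover/rejoin should leave the instances on the surviving nodes untouched.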
[ClusterLabs] IPaddr2 cluster-ip restarts on all nodes after failover
Hi list!
I'm running a 3-node vm-cluster in which all the nodes run Tomcat (Solr) from the same disk using GFS2. On top of this I use an IPaddr2 clone for the cluster IP and load balancing between all the nodes.

Everything works fine, except when I perform a failover on one node. When node01 shuts down, node02 takes over its ipaddr-clone. So far so good. The thing is, when I fire up node01 again, all the ipaddr-clones on all nodes restart and thereby mess up Tomcat.

Here is my configuration:

Cluster Name: GFS2-cluster
Corosync Nodes:
 node01 node02 node03
Pacemaker Nodes:
 node01 node02 node03

Resources:
 Clone: dlm-clone
  Meta Attrs: clone-max=3 clone-node-max=1
  Resource: dlm (class=ocf provider=pacemaker type=controld)
   Operations: start interval=0s timeout=90 (dlm-start-timeout-90)
               stop interval=0s timeout=100 (dlm-stop-timeout-100)
               monitor interval=60s (dlm-monitor-interval-60s)
 Clone: GFS2-clone
  Meta Attrs: clone-max=3 clone-node-max=1 globally-unique=true
  Resource: GFS2 (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=/dev/sdb directory=/home/solr fstype=gfs2
   Operations: start interval=0s timeout=60 (GFS2-start-timeout-60)
               stop interval=0s timeout=60 (GFS2-stop-timeout-60)
               monitor interval=20 timeout=40 (GFS2-monitor-interval-20)
 Clone: ClusterIP-clone
  Meta Attrs: clone-max=3 clone-node-max=3 globally-unique=true
  Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=192.168.100.200 cidr_netmask=32 clusterip_hash=sourceip
   Meta Attrs: resource-stickiness=0
   Operations: start interval=0s timeout=20s (ClusterIP-start-timeout-20s)
               stop interval=0s timeout=20s (ClusterIP-stop-timeout-20s)
               monitor interval=30s (ClusterIP-monitor-interval-30s)
 Clone: Tomcat-clone
  Meta Attrs: clone-max=3 clone-node-max=1
  Resource: Tomcat (class=systemd type=tomcat)
   Operations: monitor interval=60s (Tomcat-monitor-interval-60s)

Stonith Devices:
 Resource: fence-vmware (class=stonith type=fence_vmware_soap)
  Attributes: pcmk_host_map=node01:4212a559-8e66-2882-e7fe-96e2bd86bfdb;node02:4212150e-2d2d-dc3e-ee16-2eb280db2ec7;node03:42126708-bd46-adc5-75cb-678cdbcc06be
              pcmk_host_check=static-list login=USERNAME passwd=PASSWORD action=reboot
              ssl_insecure=true ipaddr=IP-ADDRESS
  Operations: monitor interval=60s (fence-vmware-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
 start dlm-clone then start GFS2-clone (kind:Mandatory) (id:order-dlm-clone-GFS2-clone-mandatory)
 start GFS2-clone then start Tomcat-clone (kind:Mandatory) (id:order-GFS2-clone-Tomcat-clone-mandatory)
 start Tomcat-clone then start ClusterIP-clone (kind:Mandatory) (id:order-Tomcat-clone-ClusterIP-clone-mandatory)
 stop ClusterIP-clone then stop Tomcat-clone (kind:Mandatory) (id:order-ClusterIP-clone-Tomcat-clone-mandatory)
 stop Tomcat-clone then stop GFS2-clone (kind:Mandatory) (id:order-Tomcat-clone-GFS2-clone-mandatory)
Colocation Constraints:
 GFS2-clone with dlm-clone (score:INFINITY) (id:colocation-GFS2-clone-dlm-clone-INFINITY)
 GFS2-clone with Tomcat-clone (score:INFINITY) (id:colocation-GFS2-clone-Tomcat-clone-INFINITY)

Resources Defaults:
 resource-stickiness: 100
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: GFS2-cluster
 dc-version: 1.1.13-10.el7-44eb2dd
 enabled: false
 have-watchdog: false
 last-lrm-refresh: 1450177886
 stonith-enabled: true

Any help is greatly appreciated.

Thanks in advance
/Jocke