See http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch08.html and https://github.com/ClusterLabs/pacemaker/blob/master/doc/pcs-crmsh-quick-ref.md. In any case, you can use `pcs config` if you are using Red Hat.
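A quick way to answer the stonith question from the shell (a sketch; the exact subcommand names vary a little between pcs versions, so check `pcs stonith --help` on your build):

```sh
# Is fencing enabled at the cluster level? (defaults to true)
pcs property show stonith-enabled

# List any configured stonith/fence resources.
# Empty output here means no fence devices are defined.
pcs stonith show
```

If `stonith-enabled` is true but no fence devices are defined, Pacemaker will refuse to start resources after a node failure, which matches the "everything Stopped" state in the `pcs status` output below.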
2015-11-16 15:01 GMT+01:00 Richard Korsten <[email protected]>:
> Hi Emmanuel,
>
> I'm not sure. How can I check it?
>
> Greetings,
> Richard
>
> On Mon, 16 Nov 2015 at 14:58, emmanuel segura <[email protected]> wrote:
>>
>> Have you configured stonith?
>>
>> 2015-11-16 14:43 GMT+01:00 Richard Korsten <[email protected]>:
>> > Hello cluster gurus,
>> >
>> > I'm having a bit of trouble with one of our clusters. After an outage of
>> > one node it went into a split-brain situation where the nodes are no
>> > longer talking to each other: each one reports the other as offline.
>> > I've tried to get them both up and running again by stopping and
>> > starting the cluster services on both nodes, one at a time, without luck.
>> >
>> > I've been trying to reproduce the problem on a set of test servers, but
>> > I can't get them into the same state.
>> >
>> > Because of this I'm looking for some help, since I'm not that familiar
>> > with Pacemaker/Corosync.
>> >
>> > This is the output of `pcs status`:
>> >
>> >   Cluster name: MXloadbalancer
>> >   Last updated: Mon Nov 16 10:18:44 2015
>> >   Last change: Fri Nov  6 15:35:22 2015
>> >   Stack: corosync
>> >   Current DC: bckilb01 (1) - partition WITHOUT quorum
>> >   Version: 1.1.12-a14efad
>> >   2 Nodes configured
>> >   3 Resources configured
>> >
>> >   Online: [ bckilb01 ]
>> >   OFFLINE: [ bckilb02 ]
>> >
>> >   Full list of resources:
>> >    haproxy  (systemd:haproxy):      Stopped
>> >    Resource Group: MXVIP
>> >        ip-192.168.250.200  (ocf::heartbeat:IPaddr2):       Stopped
>> >        ip-192.168.250.201  (ocf::heartbeat:IPaddr2):       Stopped
>> >
>> >   PCSD Status:
>> >     bckilb01: Online
>> >     bckilb02: Online
>> >
>> >   Daemon Status:
>> >     corosync: active/enabled
>> >     pacemaker: active/enabled
>> >     pcsd: active/enabled
>> >
>> > And the corosync.conf:
>> >
>> >   totem {
>> >       version: 2
>> >       secauth: off
>> >       cluster_name: MXloadbalancer
>> >       transport: udpu
>> >   }
>> >
>> >   nodelist {
>> >       node {
>> >           ring0_addr: bckilb01
>> >           nodeid: 1
>> >       }
>> >       node {
>> >           ring0_addr: bckilb02
>> >           nodeid: 2
>> >       }
>> >   }
>> >
>> >   quorum {
>> >       provider: corosync_votequorum
>> >       two_node: 1
>> >   }
>> >
>> >   logging {
>> >       to_syslog: yes
>> >   }
>> >
>> > If anyone has an idea about how to get them working together again,
>> > please let me know.
>> >
>> > Greetings,
>> > Richard

-- 
.~.
/V\
// \\
/( )\
^`~'^

_______________________________________________
Users mailing list: [email protected]
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
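Worth noting for the config above: with `two_node: 1`, votequorum lets each node keep quorum when it loses sight of its peer ("partition WITHOUT quorum" in the status suggests that isn't taking effect here), so a broken corosync membership can persist indefinitely with both nodes running alone. A rough diagnostic/recovery sketch, assuming pcs and corosync 2.x (run the checks on both nodes before restarting anything):

```sh
# Does corosync see both members? Expect 2 under "Total votes"
# and both node IDs in the membership list.
corosync-quorumtool -s

# Dump the current totem membership from the runtime cmap
# (corosync 2.x key path; may differ on other versions).
corosync-cmapctl | grep members

# udpu transport: make sure the totem port (5405/udp by default)
# is open between the nodes; a firewall reloaded during the outage
# is a common cause of exactly this one-sided-offline state.

# If membership is broken on both sides, restart the full stack
# on both nodes rather than one at a time:
pcs cluster stop --all
pcs cluster start --all
```

The `--all` variants are shorthand for running the stop/start on every configured node via pcsd; restarting one node at a time can leave each side forming its own single-node membership again, which matches the symptom described.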
