On 02/08/2018 02:05 PM, Andrei Borzenkov wrote:
> On Thu, Feb 8, 2018 at 5:51 AM, 范国腾 <fanguot...@highgo.com> wrote:
>> Hello,
>>
>> I have set up a Pacemaker cluster using VirtualBox. There are three
>> nodes, the OS is CentOS 7, and /dev/sdb is the shared storage (all
>> three nodes use the same disk file).
>>
>> (1) At first I created the stonith resource with this command:
>>
>> pcs stonith create scsi-stonith-device fence_scsi \
>>     devices=/dev/mapper/fence \
>>     pcmk_monitor_action=metadata pcmk_reboot_action=off \
>>     pcmk_host_list="db7-1 db7-2 db7-3" \
>>     meta provides=unfencing
>>
>> I know the VM does not have /dev/mapper/fence, but sometimes the
>> stonith resource is able to start and sometimes not. I don't know
>> why; it is not stable.
>>
> It probably tries to check the resource and fails. The state of the
> stonith resource is irrelevant for the actual fencing operation (this
> resource is only used for a periodic check, not for fencing itself).
>
>> (2) Then I used the following command to set up stonith on the shared
>> disk /dev/sdb:
>>
>> pcs stonith create scsi-shooter fence_scsi \
>>     devices=/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936 \
>>     meta provides=unfencing
>>
>> But the stonith resource always stays stopped, and the log shows:
>>
>> Feb 7 15:45:53 db7-1 stonith-ng[8166]: warning: fence_scsi[8197] stderr: [ Failed: nodename or key is required ]
>>
> Well, you need to provide what is missing - your command did not
> specify any host.
>
>> Could anyone tell me the correct command to set up stonith in a VM
>> on CentOS? Is there a document that introduces this, so that I could
>> study it?
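To make Andrei's point concrete: your second command dropped the pcmk_host_list that the first one had, so the agent is never told which node it is dealing with - hence "nodename or key is required". Untested, but simply carrying the host and monitor settings over from your first command should look something like this:

pcs stonith create scsi-shooter fence_scsi \
    devices=/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936 \
    pcmk_host_list="db7-1 db7-2 db7-3" \
    pcmk_monitor_action=metadata pcmk_reboot_action=off \
    meta provides=unfencing

Whether fence_scsi then actually works on top of the disk vbox emulates is a separate question - see below.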
I personally don't have any experience setting up a Pacemaker cluster in vbox, so I'm limited to giving rather general advice.

What you might have to verify when using fence_scsi is whether the SCSI emulation vbox offers lives up to fence_scsi's requirements (SCSI persistent reservations). I've read about trouble with that in a posting back from 2015; the poster then went for serving the shared disk via iSCSI instead.

Otherwise you could look for alternatives to fence_scsi. One might be fence_vbox. It doesn't come with CentOS so far, iirc, but the upstream repo on GitHub has it. Fencing via the hypervisor is in general not a bad idea for clusters running in VMs, if you can live with the boundary conditions, like giving the VMs credentials that allow them to talk to the hypervisor. There was some discussion about fence_vbox on the clusterlabs list a couple of months ago; iirc there had been issues with using Windows as a host for vbox, but I guess they were fixed in the course of that discussion.

Another way of doing fencing via a shared disk is fence_sbd (available in CentOS), although sbd uses the disk quite differently from fence_scsi. One difference that might be helpful here is that it places fewer requirements on the emulated disk infrastructure. On the other hand, sbd in general is strongly advised to have a good watchdog device - one that brings down your machine, virtual or physical, in a very reliable manner. And afaik the only watchdog device available inside a vbox VM is softdog, which doesn't meet this requirement too well, as it relies on the kernel running in the VM being at least partially functional. Rough, untested sketches for both alternatives are at the very bottom of this mail.

Sorry for not being able to help in a more specific way, but I myself would be interested in which ways of fencing people use for clusters based on vbox VMs ;-)

Regards,
Klaus

>>
>> Thanks
>>
>> Here is the cluster status:
>>
>> [root@db7-1 ~]# pcs status
>> Cluster name: cluster_pgsql
>> Stack: corosync
>> Current DC: db7-2 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
>> Last updated: Wed Feb 7 16:27:13 2018
>> Last change: Wed Feb 7 15:42:38 2018 by root via cibadmin on db7-1
>>
>> 3 nodes configured
>> 1 resource configured
>>
>> Online: [ db7-1 db7-2 db7-3 ]
>>
>> Full list of resources:
>>
>>  scsi-shooter   (stonith:fence_scsi):   Stopped
>>
>> Daemon Status:
>>   corosync: active/disabled
>>   pacemaker: active/disabled
>>   pcsd: active/enabled
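P.S. The promised sketches. Both are untested, and the parameter names are assumptions on my part - check "pcs stonith describe fence_vbox" resp. "pcs stonith describe fence_sbd" and the sbd man page before relying on them.

For fence_vbox, assuming the cluster nodes can reach the vbox host via ssh with key authentication (the address 192.168.56.1 and user vboxadmin are made-up examples) and that the VM names equal the cluster node names (use pcmk_host_map instead if they differ):

pcs stonith create vbox-fence fence_vbox \
    ipaddr=192.168.56.1 login=vboxadmin \
    identity_file=/root/.ssh/id_rsa \
    pcmk_host_list="db7-1 db7-2 db7-3"

For fence_sbd with a shared-disk poison pill, roughly:

# once, from any node: write the sbd metadata to the shared disk
sbd -d /dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936 create

# on every node, in /etc/sysconfig/sbd:
SBD_DEVICE="/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936"
SBD_WATCHDOG_DEV=/dev/watchdog

# on every node: softdog as the watchdog (with the caveat above),
# and sbd started together with the cluster stack
modprobe softdog
systemctl enable sbd

# the fencing resource itself
pcs stonith create sbd-fence fence_sbd \
    devices=/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936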