>>> "Stephen Carville (HA List)" <62d2a...@opayq.com> schrieb am 31.07.2017 um
20:17 in Nachricht :
> I am experimenting with pacemaker for high availability for some load
> balancers. I was able to sucessfully get two CentOS (6.9) machines
> (scahadev01da and scahadev01db) to form a cluster and t
>>> Octavian Ciobanu schrieb am 31.07.2017 um 20:16 in
Nachricht
:
> Hello,
>
> Before I implement the cluster I'm testing the fence agents and I got stuck
> at the rebooting the VMware based VMs.
>
> I have installed VMware ESXi 6.5 Hypervisor with 5 VMs.
>
> If I call :
> # fence_vmware_s
>>> "Lentes, Bernd" schrieb am 31.07.2017
um
18:51 in Nachricht
<641329685.12981098.1501519915026.javamail.zim...@helmholtz-muenchen.de>:
> Hi,
>
> i'm currently a bit confused. I have several resources running as
> VirtualDomains, the vm reside on plain logical volumes without fs, these
lv's
>
I am experimenting with pacemaker for high availability for some load
balancers. I was able to successfully get two CentOS (6.9) machines
(scahadev01da and scahadev01db) to form a cluster, and the shared IP was
assigned to scahadev01da. I simulated a failure by halting the primary
and the secondary
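A shared IP like that is typically defined as an ocf:heartbeat:IPaddr2
resource; a minimal sketch with pcs, where the address, netmask and
resource name are placeholders:

# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s

With a monitor operation in place, pacemaker should move the IP to the
surviving node when the primary is halted.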
Hello,
Before I implement the cluster I'm testing the fence agents, and I got stuck
at rebooting the VMware-based VMs.
I have installed the VMware ESXi 6.5 hypervisor with 5 VMs.
If I call:
# fence_vmware_soap --ssl --ip esxi_ip --username root --password pass --action list
I get the list with all the VMs.
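Assuming the list action returns the VM names, a reboot of a single VM is
requested with the same agent by naming its plug; a sketch, where esxi_ip,
pass and vm_name are placeholders:

# fence_vmware_soap --ssl --ssl-insecure --ip esxi_ip --username root \
    --password pass --action reboot --plug vm_name

(--ssl-insecure is usually needed while ESXi still runs its self-signed
certificate.)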
Hi,
I'm currently a bit confused. I have several resources running as
VirtualDomains; the VMs reside on plain logical volumes without a filesystem,
and these LVs themselves reside on an FC SAN.
In that scenario I need cLVM to distribute the LVM metadata between the nodes.
For playing around a bit and getting us
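For reference, cLVM under pacemaker is normally run as two interleaved
clones, DLM first and clvmd on top of it; a rough sketch with pcs, with
arbitrary resource names:

# pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# pcs constraint order start dlm-clone then clvmd-clone
# pcs constraint colocation add clvmd-clone with dlm-clone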
Please ignore my re-reply to the original message; I'm in the middle of
a move and am getting by on little sleep at the moment :-)
On Mon, 2017-07-24 at 11:51, Tomer Azran wrote:
> Hello,
>
> We built a pacemaker cluster with 2 physical servers.
>
> We configured DRBD in a Master/Slave setup, a floating IP and a file
> system mount in Active/Passive mode.
>
> We configured two STONITH devices (fence_ipmilan), one for each node.
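A common shape for that is one fence_ipmilan device per node, each mapped
to the one host it can fence; a sketch with invented addresses and
credentials:

# pcs stonith create fence-node1 fence_ipmilan \
    ipaddr=10.0.0.11 login=admin passwd=secret pcmk_host_list=node1
# pcs stonith create fence-node2 fence_ipmilan \
    ipaddr=10.0.0.12 login=admin passwd=secret pcmk_host_list=node2
# pcs constraint location fence-node1 avoids node1
# pcs constraint location fence-node2 avoids node2

The location constraints keep each device off the host it is meant to
fence; a common practice, though the fencer can execute the agent from
either node.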
> The "plug" should match the name used by the hypervisor, not the actual
host name (if they differ).
I understand the difference between plug and hostname. I don't clearly
understand which fence config is correct (I reffer to pcs stonith describe
fence_...):
the same entry on every node:
pmck_hos
Hi, I'm confused about how to properly set stonith when a hostname is
different from the port/plug name. I have 2 VMs on vbox/vmware with
hostnames node1 and node2. The ports' names are Centos1 and Centos2.
According to my understanding, the stonith device must know which VM to
control (each other), so I set:
pmck
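The usual way to bridge differing hostnames and plug names is the
pcmk_host_map parameter on the stonith device, with entries of the form
hostname:plug; a sketch against the names above (device name, agent
choice and credentials are placeholders):

# pcs stonith create my-fence fence_vmware_soap \
    ip=esxi_ip username=root password=pass ssl=1 \
    pcmk_host_map="node1:Centos1;node2:Centos2"

With that map in place, a request to fence node1 is translated into an
action on plug Centos1.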