>
>
> > for the
> > moment. I have done a drbdmanage primary drbdctrl, but the drbd
> > storage is still not available. How can I resolve the split brain
> > manually, so that the drbd storage continues to work even if pve1
> > (the primary) is down?
>
> I guess that this should be left to drbdmanaged
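When drbdmanage cannot recover on its own, the classic manual DRBD split-brain recovery can be applied to the control volume. A minimal sketch, assuming the resource is named .drbdctrl and that pve2 is the node whose local changes get discarded; the commands are only collected and printed here for review, not executed:

```shell
# Manual DRBD split-brain recovery, sketched for the .drbdctrl resource.
# Assumption: pve2 is the split-brain "victim" whose data will be discarded.
# The commands are printed, not run, so they can be checked first.
RES=".drbdctrl"

VICTIM_CMDS="drbdadm disconnect $RES
drbdadm secondary $RES
drbdadm connect --discard-my-data $RES"

SURVIVOR_CMDS="drbdadm connect $RES"   # only needed if pve1 is StandAlone

printf '%s\n' "# on pve2 (discard local changes):" "$VICTIM_CMDS"
printf '%s\n' "# on pve1 (reconnect if StandAlone):" "$SURVIVOR_CMDS"
```

After the victim reconnects with --discard-my-data, it resynchronises from the survivor and the split brain is resolved.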
On 14/03/2017 10:47, Shafeek Sumser wrote:
> Hi Yannis,
>
> Thanks for the information you provided.
>
> On pve1, I have initiated the cluster and added the node pve2. When
> drbdctrl is primary on pve1 (secondary on pve2) and I shut down pve2,
> the drbd storage is available. I can
Hi Yannis,
Thanks for the information you provided.
On pve1, I have initiated the cluster and added the node pve2. When
drbdctrl is primary on pve1 (secondary on pve2) and I shut down pve2,
the drbd storage is available. I can do any manipulation and even the VM
is working. But on the
> the drbd storage becomes unavailable
> and the drbd quorum is lost.
From my experience, using only 2 nodes on drbd9 does not work well, meaning
that the cluster loses quorum and you have to manually troubleshoot the
split brain.
If you really need a stable system, then use 3 drbd nodes. You
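The three-node advice follows directly from majority voting: after one node fails, the survivors must still hold a strict majority of all votes. A quick sketch of the arithmetic:

```shell
# Quorum arithmetic: after one node fails, do the survivors still hold
# a strict majority? With 2 nodes they do not; with 3 nodes they do.
RESULT=""
for total in 2 3; do
  surviving=$((total - 1))        # one node has failed
  needed=$((total / 2 + 1))       # strict majority of all votes
  if [ "$surviving" -ge "$needed" ]; then
    RESULT="${RESULT}${total}:kept "
  else
    RESULT="${RESULT}${total}:lost "
  fi
done
echo "$RESULT"
```

With 2 nodes a single failure always drops quorum, which is why the two-node setup in this thread needs manual intervention.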
Hello
By "manually" here, I am referring to Proxmox. Since I have not activated HA
on Proxmox, I need to manually move the VM config file
(/etc/pve/nodes/pve2/lxc/100.conf) from pve2 to pve1.
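The move described here is just a rename inside /etc/pve (the pmxcfs mount); Proxmox then shows the guest on the other node. A sketch using a scratch directory instead of the real /etc/pve, with an empty stand-in for 100.conf:

```shell
# Moving a container's config from pve2 to pve1, as described above.
# PVE_ROOT points at a scratch directory so the steps are safe to try;
# on a real cluster the root would be /etc/pve (the pmxcfs mount).
PVE_ROOT="${PVE_ROOT:-/tmp/pve-demo}"
mkdir -p "$PVE_ROOT/nodes/pve2/lxc" "$PVE_ROOT/nodes/pve1/lxc"
: > "$PVE_ROOT/nodes/pve2/lxc/100.conf"    # empty stand-in for the real config

# The actual step: after this, CT 100 appears under pve1.
mv "$PVE_ROOT/nodes/pve2/lxc/100.conf" "$PVE_ROOT/nodes/pve1/lxc/100.conf"
```

On a real cluster this should only be done once the other node is confirmed down, since the container must never run on both nodes at once.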
Shafeek
On Mon, Mar 13, 2017 at 4:45 PM, Roberto Resoli
wrote:
> On 13/03/2017
Hello drbd community,
I am currently installing drbd9 with Proxmox 4.4.
The architecture is as follows:
- 2 servers (pve1 & pve2) in a cluster for Proxmox, and a PC (pve3) to
establish the quorum for the Proxmox cluster.
- On the 2 servers (pve1 & pve2), drbd9 is installed
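For a two-node drbdmanage setup like the one above, the bootstrap is an init on the first node followed by an add-node. A sketch with placeholder IP addresses (the addresses and any join details are assumptions); the commands are printed for review rather than executed:

```shell
# drbdmanage two-node bootstrap, sketched with placeholder addresses.
INIT_CMD="drbdmanage init 10.0.0.1"           # run on pve1 (its storage IP)
ADD_CMD="drbdmanage add-node pve2 10.0.0.2"   # run on pve1; complete the join on pve2
printf '%s\n' "$INIT_CMD" "$ADD_CMD"
```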
On pve1, I initialised the