Hi,
We have dedicated links for storage and for cluster communication, so
if only the storage links fail, corosync keeps working. Maybe I need
to create some watchdog myself for that specific case, but let's wait
and see whether there really is nothing in Proxmox to handle that
scenario.
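Roughly what I have in mind, as an untested sketch only (the multipath
map name, the timings, and the "active ready" substring from the
multipath -ll output are assumptions to adapt):

    #!/usr/bin/env python3
    # Hypothetical storage watchdog: force-reboot this node once the
    # multipath device has had no healthy path for GRACE seconds, so
    # the HA stack can recover the VMs on another node.
    import subprocess
    import time

    MPATH_DEVICE = "mpatha"   # assumed multipath map name
    CHECK_INTERVAL = 10       # seconds between checks
    GRACE = 60                # seconds without a healthy path

    def has_healthy_path() -> bool:
        # Look for at least one "active ready" path in multipath -ll.
        try:
            out = subprocess.run(
                ["multipath", "-ll", MPATH_DEVICE],
                capture_output=True, text=True, timeout=30,
            ).stdout
        except (OSError, subprocess.TimeoutExpired):
            return False
        return "active ready" in out

    def main() -> None:
        down_since = None
        while True:
            if has_healthy_path():
                down_since = None
            elif down_since is None:
                down_since = time.monotonic()
            elif time.monotonic() - down_since > GRACE:
                # Self-fence: the hard reboot frees the HA resources.
                subprocess.run(["systemctl", "reboot", "--force"])
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        main()

A real deployment would also arm a hardware watchdog, so that a hung
check script cannot prevent the reboot.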
Best,
Martin
On Wed, 2018-10-17 at 13:05 +0200, Martin Holub wrote:
> my test VM, Proxmox did not seem to recognize the storage outage and
> therefore did not migrate the VM to a different blade or remove that
> node from the cluster (either by resetting it or fencing it some
> other way). Any hints on how to get
What interface is your cluster communication (corosync) running over?
AFAIK this is the link that needs to become unavailable to initiate a
VM start on another node.
Basically, the other nodes in the cluster need to see a problem with
the node. If it is still communicating over whichever link corosync
uses, it will still look healthy to the cluster.
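To check which links corosync is actually using and whether they are
healthy, corosync-cfgtool -s prints the ring status per node. A small
sketch that turns that into an exit code (the "no faults" wording is
from memory for corosync 2.x output, so verify it on your nodes):

    #!/usr/bin/env python3
    # Exit 0 if all corosync rings report no faults, 1 otherwise.
    import subprocess
    import sys

    def rings_healthy() -> bool:
        try:
            out = subprocess.run(
                ["corosync-cfgtool", "-s"],
                capture_output=True, text=True,
            ).stdout
        except OSError:
            return False
        statuses = [l for l in out.splitlines() if "status" in l]
        return bool(statuses) and all("no faults" in l for l in statuses)

    if __name__ == "__main__":
        sys.exit(0 if rings_healthy() else 1)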
Hi,
In my specific test case I was simulating that only one out of 6 nodes
loses connectivity to the shared storage, so the other 5 can still
access the data. In my opinion Proxmox should somehow be able to
detect that and fence the node, causing a migration (depending on the
HA configuration).
On 10/17/18 1:11 PM, Gilberto Nunes wrote:
> Hi
>
> How about Node priority?
> Look at section 14.5.2 in this doc:
>
> https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_configuration_10
Hi
How about node priority?
Look at section 14.5.2 in this doc:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_configuration_10
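For example, roughly like this (please double-check the exact syntax
in the ha-manager man page for your version; the group, node, and VM
names are placeholders):

    ha-manager groupadd prefer-blade1 -nodes "blade1:2,blade2:1"
    ha-manager add vm:100 -group prefer-blade1

The node with the higher priority number is preferred for the
resource.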
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Wed, 17 Oct 2018 at 08:05, Martin Holub wrote:
Hi,
I am currently testing the HA features on a 6-node cluster with NetApp
storage, with iSCSI and multipath configured on all nodes. I tried
what happens if, for any reason, both links fail (by shutting down the
interfaces on one blade). Unfortunately, although I had configured HA
for my test VM, Proxmox did not recognize the storage outage and
neither migrated the VM to a different blade nor removed that node
from the cluster.
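The links were taken down on the blade itself, along these lines
(eth2/eth3 stand in for the two iSCSI interfaces; adjust to your
naming):

    ip link set eth2 down
    ip link set eth3 down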