Hi!
It seems your systems are running with non-operative fencing while the cluster
wants to fence a node. Maybe bring the cluster to a clean state first, then
repeat the test.
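For example (a rough sketch assuming a pcs-managed Pacemaker cluster; the
resource and node names below are placeholders), something along these lines
shows whether the fence devices are healthy, clears old failures, and re-runs
the fence test:

    # show overall cluster state, including the stonith resources
    pcs status --full

    # clear any recorded failures on the fence device so the test starts clean
    pcs resource cleanup my-stonith-device

    # once the cluster is clean, verify that fencing actually works
    pcs stonith fence node-to-be-fenced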
Regards,
Ulrich
>>> "Lentes, Bernd" schrieb am 03.12.2018
um
16:40 in Nachricht
ClusterLabs is happy to announce fence-agents v4.3.3, which is a
bugfix release for v4.3.2.
The source code is available at:
https://github.com/ClusterLabs/fence-agents/releases/tag/v4.3.3
The most significant enhancements in this release are:
- bugfixes and enhancements:
- build: fix issues
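If you want to build the release from source rather than wait for distribution
packages, the usual autotools flow should work (a rough sketch; the tarball URL
and install steps are assumptions, see the README in the repository for the
authoritative instructions):

    # fetch the tagged release and build it with the standard autotools steps
    wget https://github.com/ClusterLabs/fence-agents/archive/v4.3.3.tar.gz
    tar xzf v4.3.3.tar.gz
    cd fence-agents-4.3.3
    ./autogen.sh && ./configure && make
    sudo make install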
On 28/11/2018 08:34, Jan Friesse wrote:
Anyway, the problem is solved, and if it appears again, please check that
corosync.conf is identical on all nodes.
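One quick way to verify that from a single host (node names are placeholders
for your own) is to compare checksums:

    # identical hashes on every node mean the config files match
    for n in node1 node2; do
        ssh "$n" sha256sum /etc/corosync/corosync.conf
    done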
I'd propose (if devel wizards read here) that some checks should be
implemented in pcs to account for ruby (variants/versions).
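Until something like that exists, a manual check on each node can catch
mismatches (the package names below assume an RPM-based distribution):

    # compare the pcs and ruby versions across nodes
    pcs --version
    ruby --version
    rpm -q pcs ruby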
On Mon, 2018-12-03 at 00:05 -0700, Casey & Gina wrote:
> So I've been using the fence_vmware_rest fence agent for a long while
> now. It seems to work great, except that after a few days or weeks,
> a given cluster will end up showing it as failed and stopped.
>
> For whatever reason, fencing
Hi,
I have a two-node cluster with several VirtualDomains as resources. Normally
live migration is no problem, but rarely it fails without giving any reasonable
message in the logs. I tried to migrate several VirtualDomains concurrently from
ha-idg-2 to ha-idg-1. One VirtualDomain failed, the
"Lentes, Bernd" writes:
> 2018-12-03T16:03:02.836145+01:00 ha-idg-2 libvirtd[3117]: 2018-12-03
> 15:03:02.835+: 4515: error : qemuMigrationCheckJobStatus:1456 : operation
> failed: migration job: unexpectedly failed
The above message is a hint at the real problem. It comes from
libvirtd,