Well, now that I’ve gone and read through that bug again in detail, I’m not
sure I’ve worked around it after all. I do seem to recall additional discussion
on the original bug for HA engine libgfapi, with a mention that RR-DNS would
work to resolve the issue, but I can’t remember the bug ID at the moment.
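For reference, a round-robin entry of that sort is just one DNS name carrying several A records; a minimal sketch in BIND zone-file syntax (the name and addresses are illustrative, not taken from this thread):

    ; one name, three A records -- resolvers rotate through them
    gluster.example.com.  300  IN  A  10.0.0.11
    gluster.example.com.  300  IN  A  10.0.0.12
    gluster.example.com.  300  IN  A  10.0.0.13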
On February 13, 2020 11:51:41 PM GMT+02:00, Stephen Panicho wrote:
Darrell, would you care to elaborate on your HA workaround?
As far as I understand, only the primary Gluster host is visible to libvirt
when using gfapi, so if that host goes down, all VMs break. I imagine
you're using a round-robin DNS entry for the primary Gluster host, but I'd
like to be sure.
I really wish these bugs would get more attention. I struggle to understand
why this isn't a priority, given the performance increases people are
reporting when switching to libgfapi. The lack of snapshots is a deal breaker
for me, unfortunately.
On Wed, Feb 12, 2020 at 12:01 PM Darrell Budic wrote:
Yes. I’m using libgfapi access on Gluster 6.7 with oVirt 4.3.8 just fine, but I
don’t use snapshots. You can work around the HA issue with DNS and backup
server entries on the storage domain as well. Worth it to me for the
performance, YMMV.
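A rough sketch of what that storage-domain setup can look like (the hostnames and volume name below are placeholders, not taken from this thread): the backup servers go into the domain's mount options, so the hosts can fetch the volfile from another server if the first one is unreachable.

    Path:          gluster1.example.com:/data
    VFS Type:      glusterfs
    Mount Options: backup-volfile-servers=gluster2.example.com:gluster3.example.com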
On Feb 12, 2020, at 8:04 AM, Jayme wrote:
From my understanding it's not a default option, but many users are still
using libgfapi successfully. I'm not sure about its status in the latest
4.3.8 release, but I know it is/was working for people in previous versions.
The libgfapi bugs affect HA and snapshots (on 3-way replica HCI), but it
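For anyone wondering how it gets turned on: libgfapi is normally enabled from the engine with engine-config, followed by an engine restart. A minimal sketch (the cluster compatibility level shown is an assumption; use your own):

    # run on the engine host
    engine-config -s LibgfApiSupported=true --cver=4.3
    systemctl restart ovirt-engine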
Libgfapi is not supported because of an old bug in qemu. That qemu bug is
slowly getting fixed, but the bugs about libgfapi support in oVirt have
since been closed as WONTFIX and DEFERRED.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1465810
https://bugzilla.redhat.com/show_bug.cgi?id=1484660
I used the cockpit-based hc setup and "option rpc-auth-allow-insecure" is
absent from /etc/glusterfs/glusterd.vol.
I'm going to redo the cluster this week and report back. Thanks for the tip!
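A quick way to check for that setting on each host is simply (path as given above):

    grep rpc-auth-allow-insecure /etc/glusterfs/glusterd.vol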
On Mon, Feb 10, 2020 at 6:01 PM Darrell Budic wrote:
The hosts will still mount the volume via FUSE, but you might double check you
set the storage up as Gluster and not NFS.
Then gluster used to need some config in glusterd.vol to set
option rpc-auth-allow-insecure on
I’m not sure if that got added to a hyperconverged setup or not, but
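For anyone checking their own setup, the settings being described here generally look like this (a sketch; "data" is a placeholder volume name, and glusterd needs a restart after the file is edited):

    # /etc/glusterfs/glusterd.vol -- add inside the existing "volume management" block
    option rpc-auth-allow-insecure on

    # matching per-volume setting
    gluster volume set data server.allow-insecure on

    # apply the glusterd.vol change on each host
    systemctl restart glusterd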
No, this was a relatively new cluster -- only a couple of days old. Just a
handful of VMs including the engine.
On Mon, Feb 10, 2020 at 5:26 PM Jayme wrote:
Curious, do the VMs have active snapshots?
On Mon, Feb 10, 2020 at 5:59 PM wrote:
> Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on
> CentOS 7.7 hosts. I was investigating poor Gluster performance and heard
> about libgfapi, so I thought I'd give it a shot. Looking