From: Sandro Bonazzola
Sent: Thursday, February 14, 2019 9:16:05 AM
To: Jayme
Cc: Darryl Scott; users
Subject: Re: [ovirt-users] Re: Ovirt Cluster completely unstable
On Thu, Feb 14, 2019 at 07:54 Jayme <jay...@gmail.com> wrote:
I have a three node HCI gluster which was previously running 4.2 with zero problems.
Just to add: after we updated to 4.3 our gluster just went south.
Thankfully gluster is only secondary storage for us, and our primary
storage is an iSCSI SAN. We migrated everything we could over to the SAN,
but a few VMs got corrupted by gluster (data was gone). Right now
we just have glus
I do believe something went wrong after fully updating everything last Friday.
I updated all the oVirt compute nodes on Friday and gluster/engine on Saturday.
I have been experiencing these issues ever since. I have pored over
engine.log and it seems to be a storage connection issue.
Is it this bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1651246
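Filtering engine.log for storage-related errors can help narrow this down. A minimal sketch, run here against made-up sample lines (the real log is typically /var/log/ovirt-engine/engine.log on the engine VM; the sample content below is illustrative, not from an actual log):

```shell
# Count storage-related error lines in an engine.log.
# The sample file below stands in for /var/log/ovirt-engine/engine.log;
# its lines are invented for illustration.
cat > /tmp/engine.log.sample <<'EOF'
2019-02-14 02:20:29 ERROR [org.ovirt...] StorageDomainDoesNotExist
2019-02-14 02:21:00 INFO  [org.ovirt...] heartbeat ok
2019-02-14 02:22:11 WARN  [org.ovirt...] Connection to storage server failed
EOF
# Case-insensitive count of lines that look like storage trouble.
grep -ciE 'error|storage.*failed' /tmp/engine.log.sample   # → 2
```

On a real engine, dropping `-c` and adding `| tail -n 50` shows the most recent matches instead of just a count.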
On Thu, Feb 14, 2019 at 11:50 AM Jayme wrote:
[2019-02-14 02:20:29.611099] I [login.c:110:gf_auth] 0-auth/login: allowed
user names: 7b741fe4-72ca-41ba-8efb-7add1e4fe6f3
[2019-02-14 02:20:29.611131] I [MSGID: 115029]
[server-handshake.c:537:server_setvolume] 0-non_prod_b-server: accepted
client from
CTX_ID:ee716e24-e187-4b57-a371-cab544f41162-
On Thu, Feb 14, 2019 at 8:24 PM Jayme wrote:
> https://bugzilla.redhat.com/show_bug.cgi?id=1677160 doesn't seem relevant
> to me? Is that the correct link?
>
> Like I mentioned in a previous email I'm also having problems with Gluster
> bricks going offline since upgrading to oVirt 4.3 yesterday
Hi Jayme,
BTW, in the past there was a long hunt for gluster problems on this list.
The resolution turned out to be a single failed disk drive on one gluster host.
The drive was directly connected, without a controller or SMART checks,
so no alert was generated, only gluster problems over days.
Please
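The silent-disk-failure scenario described above can be ruled out with smartctl. A hedged sketch that parses illustrative output (the device names and sample lines are assumptions; on a real host you would run `smartctl -H /dev/sdX` for each brick disk):

```shell
# Flag disks whose SMART overall-health is not PASSED.
# The sample lines are invented; generate real ones with something like:
#   for d in /dev/sd?; do printf '%s: ' "$d"; smartctl -H "$d" | grep -i overall; done
cat > /tmp/smart.sample <<'EOF'
/dev/sda: SMART overall-health self-assessment test result: PASSED
/dev/sdb: SMART overall-health self-assessment test result: FAILED!
EOF
# Keep only non-PASSED lines and print the device name before the first colon.
grep -v 'PASSED' /tmp/smart.sample | cut -d: -f1   # → /dev/sdb
```

Note that disks behind a RAID controller may need `smartctl -d` with a controller-specific type to expose SMART data at all.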
Oliver,
Thanks for the input. I do recall reading that thread before, and I'm 99.9%
sure it's not the problem here, but I will double-check, if anything to rule
it out. These bricks are new enterprise SSDs that are less than 3 months
old with almost 0 wear on them, and the issues I'm experiencing only
https://bugzilla.redhat.com/show_bug.cgi?id=1677160 doesn't seem relevant
to me? Is that the correct link?
Like I mentioned in a previous email I'm also having problems with Gluster
bricks going offline since upgrading to oVirt 4.3 yesterday (previously
I've never had a single issue with gluster
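A quick way to see which bricks have dropped is the Online column of `gluster volume status`. A sketch against illustrative output (hostnames and brick paths are made up; `non_prod_b` is the volume name visible in the log excerpt earlier in the thread):

```shell
# Print bricks whose Online column is "N" in a `gluster volume status` listing.
# The sample is illustrative; on a real node run:
#   gluster volume status non_prod_b
cat > /tmp/gluster-status.sample <<'EOF'
Brick host1:/gluster/brick1   49152   Y   12345
Brick host2:/gluster/brick1   N/A     N   -
Brick host3:/gluster/brick1   49152   Y   12347
EOF
# Field 2 is the brick, field 4 the Online flag in this simplified layout.
awk '$1 == "Brick" && $4 == "N" {print $2}' /tmp/gluster-status.sample   # → host2:/gluster/brick1
```

Real `gluster volume status` output has header lines and extra columns, so the field positions would need adjusting; the filtering idea is the same.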
On Thu, Feb 14, 2019 at 07:54 Jayme wrote:
> I have a three node HCI gluster which was previously running 4.2 with zero
> problems. I just upgraded it yesterday. I ran into a few bugs right away
> with the upgrade process, but aside from that I also discovered other users
> with
On Thu, Feb 14, 2019 at 4:56 AM wrote:
>
> I'm abandoning my production ovirt cluster due to instability. I have a 7
> host cluster running about 300 vms and have been for over a year. It has
> become unstable over the past three days. I have random hosts, both compute
> and storage, disconn
Hello,
my problems with gluster started with 4.2.6 or 4.2.7, around the end of
September. I still have VMs paused the one or other day, and they are
reactivated either by HA or manually. So I can confirm your
experiences. Even though I'm using bonded network connections there are
communication probl
I have a three node HCI gluster which was previously running 4.2 with zero
problems. I just upgraded it yesterday. I ran into a few bugs right away
with the upgrade process, but aside from that I also discovered other users
with severe GlusterFS problems since the upgrade to the new GlusterFS version.
Hi,
I would have a look at engine.log; it might provide useful information.
Also, I would test a different storage type (maybe a quick NFS data domain)
and see if the problem persists with that one too.
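Before adding a test NFS data domain, it's worth confirming the export is actually visible from a host. A sketch parsing illustrative `showmount -e` output (the server name and export paths are placeholders, not from this thread):

```shell
# List export paths from a `showmount -e` listing.
# The sample is invented; on a real host run: showmount -e <nfs-server>
cat > /tmp/showmount.sample <<'EOF'
Export list for nfs-server:
/export/data    10.0.0.0/24
/export/iso     *
EOF
# Skip the header line and print only the export path column.
awk 'NR > 1 {print $1}' /tmp/showmount.sample
```

If the export shows up, a manual `mount -t nfs` plus a test write from the host catches permission problems before oVirt tries to create the domain.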
On Thu, Feb 14, 2019, 01:26 wrote:
> I'm abandoning my production ovirt cluster due to instability. I