Sorry, it appears the messages about: Get Host Statistics failed: Internal
JSON-RPC error: {'reason': '[Errno 19] veth18ae509 is not present in the
system'} aren't gone, they're just happening much less frequently.
Best, Olaf
Dear Mohit,
I've upgraded to gluster 5.6; however, the starting of multiple glusterfsd
processes per brick doesn't seem to be fully resolved yet, although it does
seem to happen less often than before. Also, in some cases glusterd did seem to
detect a glusterfsd was already running, but decided it was not
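A quick way to spot such duplicates (a sketch; the brick path below is only a placeholder):

# list every running glusterfsd with its full command line; a brick that has
# been started twice shows up as two PIDs sharing the same brick path
pgrep -af glusterfsd
# or narrow the output down to a single brick directory (placeholder path)
ps ax | grep '[g]lusterfsd' | grep '/gluster/brick1'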
Hi,
Thanks Olaf for sharing the relevant logs.
@Atin,
You are right, the patch https://review.gluster.org/#/c/glusterfs/+/22344/ will
resolve the issue of running multiple brick instances for the same brick.
As we can see in the logs below, glusterd is trying to start the same brick
instance twice at the same
Dear Mohit,
Thanks for backporting this fix. Hopefully we can address the other issues as
well; if I can do anything, let me know.
On my side I've tested with: gluster volume reset
cluster.choose-local, but haven't really noticed a change in performance.
On the good side, the brick processes didn't
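For completeness, a sketch of the commands involved (VOLNAME is a placeholder; reset returns the option to its default):

# show the current value of the option
gluster volume get VOLNAME cluster.choose-local
# reset it back to the default
gluster volume reset VOLNAME cluster.choose-local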
Hi Olaf,
As per the currently attached "multi-glusterfsd-vol3.txt |
multi-glusterfsd-vol4.txt", it is showing multiple processes running
for the "ovirt-core ovirt-engine" brick names, but there are no logs available
in bricklogs.zip specific to these bricks; bricklogs.zip
has a dump of ovirt-kube
Adding back gluster-users
Comments inline ...
On Fri, Mar 29, 2019 at 8:11 PM Olaf Buitelaar
wrote:
> Dear Krutika,
>
>
>
> 1. I’ve made 2 profile runs of around 10 minutes (see files
> profile_data.txt and profile_data2.txt). Looking at it, most time seems to
> be spent on the fops fsync and
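Profile data like the above is typically collected along these lines (a sketch with VOLNAME as a placeholder, not necessarily the exact commands used):

# start collecting per-fop statistics on the volume
gluster volume profile VOLNAME start
# ... let the workload run for ~10 minutes ...
# dump the accumulated statistics (fsync, write, etc.) to a file
gluster volume profile VOLNAME info > profile_data.txt
# stop profiling when done
gluster volume profile VOLNAME stop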
I've also encountered multiple brick processes (glusterfsd) being spawned per
brick directory on gluster 5.5 while upgrading from 3.12.15. In my case, it's
on a standalone server cluster that doesn't have ovirt installed, so it seems
to be gluster itself.
Haven't had the chance to follow up on
On Thu, Mar 28, 2019 at 5:48 PM wrote:
> Dear All,
>
> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
> previous upgrades from 4.1 to 4.2 etc. went rather smooth, this one was a
> different experience. After first trying a test upgrade on a 3 node setup,
>
Questions/comments inline ...
On Thu, Mar 28, 2019 at 10:18 PM wrote:
> Dear All,
>
> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
> previous upgrades from 4.1 to 4.2 etc. went rather smooth, this one was a
> different experience. After first trying a test upgrade on a 3
Olaf, thank you very much for this feedback, I was just about to upgrade my
12 node 4.2.8 production cluster, and it seems you have spared me a lot of
trouble.
Though, I thought that 4.3.1 comes with gluster 5.5, which has solved the
issues, and that the upgrade procedure works seamlessly.
Not
Forgot one more issue with ovirt: on some hypervisor nodes we also run docker,
and it appears vdsm tries to get hold of the interfaces docker creates/removes,
which is spamming the vdsm and engine logs with:
Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19]
Dear All,
I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While previous
upgrades from 4.1 to 4.2 etc. went rather smooth, this one was a different
experience. After first trying a test upgrade on a 3 node setup, which went
fine, I headed to upgrade the 9 node production
Following up on this, my test/dev cluster is now completely upgraded to ovirt
4.3.2-1 and gluster 5.5, and I've bumped the op-version on the gluster volumes.
It’s behaving normally and gluster is happy, no excessive healing or crashing
bricks.
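For reference, a sketch of the op-version bump (the value shown is only an example; check max-op-version on your own cluster first):

# highest op-version the current cluster can support
gluster volume get all cluster.max-op-version
# raise the cluster-wide op-version (50400 is only an example value)
gluster volume set all cluster.op-version 50400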
I did encounter
I’m not quite done with my test upgrade to ovirt 4.3.x with gluster 5.5, but so
far it’s looking good. I have NOT encountered the upgrade bugs listed as
resolved in the 5.5 release notes. Strahil, I didn’t encounter the brick death
issue and don’t have a bug ID handy for it, but so far I