Hi Strahil,
Ok, done: https://bugzilla.redhat.com/show_bug.cgi?id=1868393; only it didn't
allow me to select the most recent 4.3 version.
Thanks Olaf
On Wed, 12 Aug 2020 at 15:58, Strahil Nikolov wrote:
> Hi Olaf,
>
> yes but mark it as '[RFE]' in the name of the bug.
>
> Best Regards,
> Strahil Nikolov
Hi Strahil,
It's not really clear to me how I can submit pull requests to the oVirt repo.
I've found this bugzilla issue for going from v5 to v6:
https://bugzilla.redhat.com/show_bug.cgi?id=1718162 with this corresponding
commit: https://gerrit.ovirt.org/#/c/100701/
Would the correct route be to issue a bugz
Hi Strahil,
Thanks for confirming v7 is working fine with oVirt 4.3; coming from you,
that gives quite some confidence.
If that's generally the case, it would be nice if the yum repo
ovirt-4.3-dependencies.repo
could be updated to gluster v7 in the official repository, e.g.:
[ovirt-4.3-centos-gluster7
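For reference, this is roughly what I'd imagine such a stanza could look like, modeled on the existing gluster6 entry; the baseurl and gpgkey paths follow the usual CentOS Storage SIG layout and are assumptions on my part:
[ovirt-4.3-centos-gluster7]
name=CentOS-7 - Gluster 7
# assumed mirror path, analogous to the gluster-6 repo currently shipped
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-7/
gpgcheck=1
enabled=1
# assumed key location, as installed by the CentOS Storage SIG release package
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage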
Dear oVirt users,
any news on the gluster support side for oVirt 4.3? With 6.10 possibly being the
latest release, it would be nice if there were a known stable upgrade path to
gluster 7, and possibly 8, for the oVirt 4.3 branch.
Thanks Olaf
Dear oVirt users,
With the release of 4.4 having quite a difficult upgrade path (reinstalling
the engine and moving all machines to RHEL/CentOS 8), I was wondering:
are there any plans to update the gluster dependencies to version 7 in the
ovirt-4.3-dependencies.repo? Or will oVirt 4.3 a
Hi Robert,
there were several issues with ownership in oVirt, for example see:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
Maybe you're encountering these issues during the upgrade process. Also, if
you're using gluster as backend storage, there might be some permission
issues in the 6.7 o
Dear Strahil,
Thanks, that was it; I didn't know about the mnt_options, will add those as well.
Best Olaf
Never mind again;
for those looking for the same thing, it's done via:
hosted-engine --set-shared-config storage 10.201.0.1:/ovirt-engine
--type=he_shared
hosted-engine --set-shared-config storage 10.201.0.1:/ovirt-engine
--type=he_local
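(If I recall correctly, the value can be verified afterwards with the matching get command; the storage path above is of course specific to our setup:)
hosted-engine --get-shared-config storage --type=he_shared
hosted-engine --get-shared-config storage --type=he_local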
Dear oVirt users,
one thing I still cannot find out is where the engine gathers the storage=
value from in /etc/ovirt-hosted-engine/hosted-engine.conf.
I suppose it's somewhere in an answers file, but I cannot find it.
Any pointers are appreciated. Hopefully this is the last place where the old
Dear oVirt users,
Sorry for having bothered you; it appeared the transaction in the database
somehow wasn't committed correctly.
After ensuring that, the mountpoints were updated.
Best Olaf
Dear oVirt users,
I'm currently migrating our gluster setup, so I've done a gluster replace-brick
to the new machines.
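For completeness, the replacement itself was done with the usual command, roughly along these lines (volume name, hosts and brick paths are just examples):
gluster volume replace-brick ovirt-data old-host:/bricks/ovirt-data/brick new-host:/bricks/ovirt-data/brick commit force
# afterwards, watch the self-heal until the new brick is fully in sync
gluster volume heal ovirt-data info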
Now I'm trying to update the connection strings of the related storage domains,
including the one hosting the ovirt-engine (which I believe cannot be brought
down for maintenanc
Hi Dimitry,
Sorry for not being clearer; I missed the part where the ls was from the
underlying brick. Then I clearly have a different issue.
Best Olaf
This listing is from a gluster mount, not from the underlying brick, which
should combine all parts from the underlying .glusterfs folder. I believe when
you make use of features.shard the files should be broken up into pieces
according to the shard size.
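Whether files get sharded at all depends on the volume options; something along these lines shows them (the volume name is just an example, and if I'm not mistaken oVirt's hyperconverged defaults use a 64MB shard size):
gluster volume get ovirt-data features.shard             # sharding on/off
gluster volume get ovirt-data features.shard-block-size  # size of each piece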
Olaf
It looks like I've got the exact same issue:
drwxr-xr-x. 2 vdsm kvm 4.0K Mar 29 16:01 .
drwxr-xr-x. 22 vdsm kvm 4.0K Mar 29 18:34 ..
-rw-rw. 1 vdsm kvm 64M Feb 4 01:32 44781cef-173a-4d84-88c5-18f7310037b4
-rw-rw. 1 vdsm kvm 1.0M Oct 16 2018
44781cef-173a-4d84-88c5-18f7310037b4.lease
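(In case you want to check which pieces actually exist on the brick side, something like this should work; the brick path and file name are placeholders:)
# shards live on the brick under .shard, named <gfid-of-base-file>.<n>
getfattr -n trusted.gfid -e hex /bricks/ovirt-data/brick/<path-to-image-file>
# the hex gfid, written as a dashed UUID, prefixes the shard file names
ls /bricks/ovirt-data/brick/.shard | grep <gfid-as-uuid>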
Sorry, it appears the messages about "Get Host Statistics failed: Internal
JSON-RPC error: {'reason': '[Errno 19] veth18ae509 is not present in the
system'}" aren't gone; they're just happening much less frequently.
Best Olaf
Dear Mohit,
I've upgraded to gluster 5.6; however, the starting of multiple glusterfsd
processes per brick doesn't seem to be fully resolved yet, though it does
seem to happen less than before. Also, in some cases glusterd did seem to detect
a glusterfsd was running, but decided it was not vali
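For anyone checking for the same symptom: duplicate brick processes are easy to spot in the process list, e.g.:
pgrep -af glusterfsd | sort
# each brick path should appear exactly once; the same path listed twice means a stale/duplicate glusterfsd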
:21.748492] and [2019-04-01
> 10:23:21.752432]
>
> I will backport the same.
> Thanks,
> Mohit Agrawal
>
> On Wed, Apr 3, 2019 at 3:58 PM Olaf Buitelaar
> wrote:
>
>> Dear Mohit,
>>
>> Sorry, I thought Krutika was referring to the ovirt-kube brick logs.
Forgot one more issue with oVirt: on some hypervisor nodes we also run docker,
and it appears vdsm tries to get a hold of the interfaces docker creates/removes,
which is spamming the vdsm and engine logs with:
Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19]
veth7611c53
Dear All,
I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While previous
upgrades from 4.1 to 4.2 etc. went rather smoothly, this one was a different
experience. After first trying a test upgrade on a 3-node setup, which went
fine, I headed to upgrade the 9-node production platform
Hi Marco,
It looks like I'm suffering from the same issue, see:
https://lists.gluster.org/pipermail/gluster-users/2019-January/035602.html
I've included a simple GitHub gist there, which you can run on the machines
with the stale shards.
However, I haven't tested the full purge; it works well on