[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-12 Thread Olaf Buitelaar
Hi Strahil, OK, done: https://bugzilla.redhat.com/show_bug.cgi?id=1868393 ; only it didn't allow me to select the most recent 4.3. Thanks, Olaf. On Wed 12 Aug 2020 at 15:58, Strahil Nikolov wrote: > Hi Olaf, > > yes, but mark it as '[RFE]' in the name of the bug. > > Best Regards, > Strahil

[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-12 Thread olaf . buitelaar
Hi Strahil, It's not really clear to me how I can submit a change to the oVirt repo. I've found this Bugzilla issue for going from v5 to v6: https://bugzilla.redhat.com/show_bug.cgi?id=1718162 with this corresponding commit: https://gerrit.ovirt.org/#/c/100701/ Would the correct route be to issue a
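For reference, oVirt changes go through Gerrit rather than GitHub pull requests. A minimal sketch of that workflow follows, assuming the dependency repo file lives in the ovirt-release project; the project name and target branch are assumptions for illustration, not a verified recipe:

    # clone the project from gerrit.ovirt.org (project name assumed)
    git clone https://gerrit.ovirt.org/ovirt-release
    cd ovirt-release
    # fetch Gerrit's commit-msg hook so a Change-Id footer is added automatically
    curl -Lo .git/hooks/commit-msg https://gerrit.ovirt.org/tools/hooks/commit-msg
    chmod +x .git/hooks/commit-msg
    # edit the ovirt-4.3 dependency repo definition, then commit and push for review
    git commit -a -s
    git push origin HEAD:refs/for/master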

[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-11 Thread Olaf Buitelaar
Hi Strahil, Thanks for confirming v7 is working fine with oVirt 4.3; coming from you, that gives quite some confidence. If that's generally the case, it would be nice if the yum repo ovirt-4.3-dependencies.repo could be updated to gluster v7 in the official repository, e.g.:
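To make the idea concrete, an updated section of ovirt-4.3-dependencies.repo could look roughly like the sketch below, modeled on the existing gluster-6 entry; the section name and baseurl are assumptions based on the CentOS Storage SIG layout, not a tested configuration:

    [ovirt-4.3-centos-gluster7]
    name=CentOS-7 - Gluster 7
    baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-7/
    gpgcheck=1
    enabled=1
    # reuse the Storage SIG gpgkey reference from the existing gluster-6 section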

[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-11 Thread olaf . buitelaar
Dear oVirt users, any news on the gluster support side of oVirt 4.3? With 6.10 possibly being the latest release, it would be nice if there were a known stable upgrade path to gluster 7, and possibly 8, for the oVirt 4.3 branch. Thanks, Olaf

[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-06-19 Thread olaf . buitelaar
Dear oVirt users, I was wondering: with the release of 4.4, which has quite a difficult upgrade path (reinstalling the engine and moving all machines to RHEL/CentOS 8), are there any plans to update the gluster dependencies to version 7 in the ovirt-4.3-dependencies.repo? Or will oVirt 4.3

[ovirt-users] Re: [Gluster-users] Image File Owner change Situation. (root:root)

2020-03-13 Thread Olaf Buitelaar
Hi Robert, there were several issues with ownership in oVirt; for example see: https://bugzilla.redhat.com/show_bug.cgi?id=1666795 Maybe you're encountering these issues during the upgrade process. Also, if you're using gluster as backend storage, there might be some permission issues in the 6.7
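For anyone hitting the root:root ownership symptom, a hedged sketch of a manual repair follows; the mount path is a placeholder for your gluster storage domain, and oVirt expects image files to be owned by vdsm:kvm (uid/gid 36):

    # placeholder path; adjust to your storage domain mount and verify before running
    find /rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/DOMAIN_UUID/images \
        ! -user vdsm -exec chown vdsm:kvm {} \;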

[ovirt-users] Re: change connection string in db

2019-10-14 Thread olaf . buitelaar
Dear Strahil, Thanks, that was it; I didn't know about the mnt_options, will add those as well. Best, Olaf

[ovirt-users] Re: change connection string in db

2019-10-01 Thread olaf . buitelaar
Never mind again. For those looking for the same thing, it's done via:
hosted-engine --set-shared-config storage 10.201.0.1:/ovirt-engine --type=he_shared
hosted-engine --set-shared-config storage 10.201.0.1:/ovirt-engine --type=he_local
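Assuming the matching query option is available in your hosted-engine version, the change can be verified afterwards with:

    hosted-engine --get-shared-config storage --type=he_shared
    hosted-engine --get-shared-config storage --type=he_local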

[ovirt-users] Re: change connection string in db

2019-09-30 Thread olaf . buitelaar
Dear oVirt users, one thing I still cannot find out is where the engine gathers the storage= value from in /etc/ovirt-hosted-engine/hosted-engine.conf. I suppose it's somewhere in an answers file, but I cannot find it. Any pointers are appreciated. Hopefully this is the last place where the old

[ovirt-users] Re: change connection string in db

2019-09-29 Thread olaf . buitelaar
Dear oVirt users, Sorry for having bothered you; it appeared the transaction in the database somehow wasn't committed correctly. After ensuring that, the mount points updated. Best, Olaf

[ovirt-users] change connection string in db

2019-09-28 Thread olaf . buitelaar
Dear oVirt users, I'm currently migrating our gluster setup, so I've done a gluster replace-brick to the new machines. Now I'm trying to update the connection strings of the related storage domains, including the one hosting the ovirt-engine (which I believe cannot be brought down for
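For context, the storage domain connection strings live in the engine database; a hedged sketch of the kind of update involved is shown below. Table and column names are assumptions based on the engine schema, the host/path values are placeholders, and, as the follow-up above notes, the transaction must actually be committed:

    sudo -u postgres psql engine
    engine=# BEGIN;
    engine=# UPDATE storage_server_connections
    engine-#   SET connection = 'NEW_HOST:/ovirt-data'
    engine-#   WHERE connection = 'OLD_HOST:/ovirt-data';
    engine=# COMMIT;  -- without this the change is rolled back when the session ends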

[ovirt-users] Re: HostedEngine cleaned up

2019-05-10 Thread olaf . buitelaar
Hi Dimitry, Sorry for not being clearer; I missed the part that the ls was from the underlying brick. Then I clearly have a different issue. Best, Olaf

[ovirt-users] Re: HostedEngine cleaned up

2019-05-09 Thread olaf . buitelaar
This listing is from a gluster mount, not from the underlying brick, so it should combine all parts from the underlying .glusterfs folder. I believe that when you make use of features.shard, the files should be broken up into pieces according to the shard-size. Olaf
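A quick way to see this on a sharded volume, with VOLNAME, the brick path and the image file name as placeholders:

    gluster volume get VOLNAME features.shard-size
    # on the brick, only the first chunk carries the file name; the remaining
    # chunks live under the brick's .shard/ directory, named <gfid>.1, <gfid>.2, ...
    getfattr --absolute-names -d -m trusted.gfid -e hex /bricks/brick1/PATH/TO/IMAGE
    ls /bricks/brick1/.shard/ | grep GFID_FROM_ABOVE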

[ovirt-users] Re: HostedEngine cleaned up

2019-05-09 Thread olaf . buitelaar
It looks like I've got the exact same issue:
drwxr-xr-x.  2 vdsm kvm 4.0K Mar 29 16:01 .
drwxr-xr-x. 22 vdsm kvm 4.0K Mar 29 18:34 ..
-rw-rw. 1 vdsm kvm  64M Feb  4 01:32 44781cef-173a-4d84-88c5-18f7310037b4
-rw-rw. 1 vdsm kvm 1.0M Oct 16  2018 44781cef-173a-4d84-88c5-18f7310037b4.lease

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-05-02 Thread olaf . buitelaar
Sorry, it appears the messages about Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19] veth18ae509 is not present in the system'} aren't gone, they just happen much less frequently. Best, Olaf

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-04-29 Thread olaf . buitelaar
Dear Mohit, I've upgraded to gluster 5.6, but the starting of multiple glusterfsd processes per brick doesn't seem to be fully resolved yet. However, it does seem to happen less often than before. Also, in some cases glusterd did seem to detect a glusterfsd was running, but decided it was not
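A simple way to spot the duplicate brick processes, with VOLNAME as a placeholder, is to compare what is actually running against what glusterd reports:

    pgrep -af glusterfsd
    gluster volume status VOLNAME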

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-04-03 Thread Olaf Buitelaar
9-04-01 > 10:23:21.752432] > > I will backport the same. > Thanks, > Mohit Agrawal > > On Wed, Apr 3, 2019 at 3:58 PM Olaf Buitelaar > wrote: > >> Dear Mohit, >> >> Sorry, I thought Krutika was referring to the ovirt-kube brick logs, due >> the la

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-28 Thread olaf . buitelaar
Forgot one more issue with oVirt: on some hypervisor nodes we also run docker, and it appears vdsm tries to get hold of the interfaces docker creates/removes, and this is spamming the vdsm and engine logs with: Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19]
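To see which docker-created interfaces vdsm is tripping over, here is a small diagnostic sketch; it only lists the transient veth devices and running containers, it does not change vdsm's behaviour:

    ip -o link show type veth
    docker ps --format '{{.ID}} {{.Names}}'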

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-28 Thread olaf . buitelaar
Dear All, I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While previous upgrades, from 4.1 to 4.2 etc., went rather smoothly, this one was a different experience. After first trying a test upgrade on a 3-node setup, which went fine, I headed to upgrade the 9-node production

[ovirt-users] Re: [Gluster-users] VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding

2019-01-17 Thread olaf . buitelaar
Hi Marco, It looks like I'm suffering from the same issue, see: https://lists.gluster.org/pipermail/gluster-users/2019-January/035602.html I've included a simple GitHub gist there, which you can run on the machines with the stale shards. However, I haven't tested the full purge; it works well