[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-12 Thread Olaf Buitelaar
Hi Strahil,

OK, done: https://bugzilla.redhat.com/show_bug.cgi?id=1868393. Only it didn't
allow me to select the most recent 4.3 version.

Thanks Olaf

Op wo 12 aug. 2020 om 15:58 schreef Strahil Nikolov :

> Hi Olaf,
>
> yes but mark it as  '[RFE]' in the name of the bug.
>
> Best Regards,
> Strahil Nikolov
>
> На 12 август 2020 г. 12:41:55 GMT+03:00, olaf.buitel...@gmail.com написа:
> >Hi Strahil,
> >
> >It's not really clear how i can pull requests to the oVirt repo.
> >I've found this bugzilla issue for going from v5 to v6;
> >https://bugzilla.redhat.com/show_bug.cgi?id=1718162 with this
> >corresponding commit; https://gerrit.ovirt.org/#/c/100701/
> >Would the correct route be to issue a bugzilla request for this?
> >
> >Thanks Olaf
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDGSHGWMDKC5OLYXNBU6HCK56XMCKT2R/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AECOZINFZHHFJHJLLWGJ6GYOFIMTVVZO/


[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-12 Thread olaf . buitelaar
Hi Strahil,

It's not really clear to me how I can submit pull requests to the oVirt repo.
I've found this Bugzilla issue for going from v5 to v6: 
https://bugzilla.redhat.com/show_bug.cgi?id=1718162, with this corresponding 
commit: https://gerrit.ovirt.org/#/c/100701/
Would the correct route be to file a Bugzilla request for this?

Thanks Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDGSHGWMDKC5OLYXNBU6HCK56XMCKT2R/


[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-11 Thread Olaf Buitelaar
Hi Strahil,

Thanks for confirming v7 is working fine with oVirt 4.3; coming from you, that
gives quite some confidence.
If that's generally the case, it would be nice if the yum repo
ovirt-4.3-dependencies.repo
could be updated to Gluster v7 in the official repository, e.g.:
[ovirt-4.3-centos-gluster7]
name=CentOS-$releasever - Gluster 7
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-7/
gpgcheck=1
enabled=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage

Keeping the Gluster support up to date gives at least some users the time
to plan the upgrade path to oVirt 4.4 without having to run on EOL
Gluster releases.
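For anyone who wants to move ahead before the dependency repo is updated, a minimal
sketch of applying the snippet above by hand on a CentOS 7 host (an assumption on my
side that the existing repo id is ovirt-4.3-centos-gluster6, and the host should be
emptied of VMs first):

# write the repo definition from the example above
cat > /etc/yum.repos.d/ovirt-4.3-centos-gluster7.repo <<'EOF'
[ovirt-4.3-centos-gluster7]
name=CentOS-$releasever - Gluster 7
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-7/
gpgcheck=1
enabled=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
EOF

# disable the Gluster 6 repo shipped in ovirt-4.3-dependencies.repo, then update
yum-config-manager --disable ovirt-4.3-centos-gluster6
yum clean metadata && yum update 'glusterfs*'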

Thanks Olaf


Op di 11 aug. 2020 om 19:35 schreef Strahil Nikolov :

> I have been using v7 for quite some time.
>
>
> Best Regards,
> Strahil Nikolov
>
> На 11 август 2020 г. 15:26:51 GMT+03:00, olaf.buitel...@gmail.com написа:
> >Dear oVirt users,
> >
> >any news on the gluster support side on oVirt 4.3. With 6.10 being
> >possibly the latest release, it would be nice if there is an known
> >stable upgrade path to either gluster 7 and possibly 8 for the oVirt
> >4.3 branch.
> >
> >Thanks Olaf
> >___
> >Users mailing list -- users@ovirt.org
> >To unsubscribe send an email to users-le...@ovirt.org
> >Privacy Statement: https://www.ovirt.org/privacy-policy.html
> >oVirt Code of Conduct:
> >https://www.ovirt.org/community/about/community-guidelines/
> >List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CPHD2NBWBSAL2UQ7JNR5A266OWZ4XU2T/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7Y5CN5N7GTZFNUWEUP5XHTBDOQCFJER/


[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-08-11 Thread olaf . buitelaar
Dear oVirt users,

Any news on the Gluster support side for oVirt 4.3? With 6.10 possibly being the 
latest release, it would be nice if there were a known stable upgrade path to 
Gluster 7, and possibly 8, for the oVirt 4.3 branch.

Thanks Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CPHD2NBWBSAL2UQ7JNR5A266OWZ4XU2T/


[ovirt-users] Re: oVirt 4.4.0 Release is now generally available

2020-06-19 Thread olaf . buitelaar
Dear oVirt users,

With the release of 4.4, which has quite a difficult upgrade path (reinstalling the 
engine and moving all machines to RHEL/CentOS 8), I was wondering: are there any 
plans to update the Gluster dependencies to version 7 in the 
ovirt-4.3-dependencies.repo? Or will oVirt 4.3 always be stuck at Gluster 
version 6?
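For context, a quick way to check which Gluster version and repo a 4.3 host is
currently tracking (a sketch; package and repo file names may differ per installation):

# installed gluster server version on a host
rpm -q glusterfs-server
# which gluster repos are currently enabled / wired up by the dependencies repo
yum repolist enabled | grep -i gluster
grep -B1 -A5 gluster /etc/yum.repos.d/ovirt-4.3-dependencies.repo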

Thanks Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PBCMNR7RDGGANVAJM3KF7V6Y3G4NV27L/


[ovirt-users] Re: [Gluster-users] Image File Owner change Situation. (root:root)

2020-03-13 Thread Olaf Buitelaar
Hi Robert,

There were several issues with ownership in oVirt; for example, see
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
Maybe you're encountering these issues during the upgrade process. Also, if
you're using Gluster as backend storage, there might be some permission
issues in the 6.7-or-below branches; I'm not sure about the newest versions.
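As a quick check, something like the following could help (a sketch; it assumes the
default /rhev mount root, that vdsm:kvm maps to uid/gid 36:36, and the angle-bracket
parts are placeholders):

# list files under the storage domain mounts that are owned by root
find /rhev/data-center/mnt -type f -user root 2>/dev/null
# with the VM stopped, restore ownership of an affected image (36:36 = vdsm:kvm)
chown 36:36 /rhev/data-center/mnt/<domain>/<sd-uuid>/images/<img-uuid>/<vol-uuid>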

Best Olaf

Op vr 13 mrt. 2020 om 16:55 schreef Robert O'Kane :

> Hello @All,
>
> This has happened to us for the first time and only on One VM.
>
> I believe it happened with the switch from Fuse to "LibgfApi" in Ovirt.
>
> I was using LibgfApiSupported=True on 4.2.8 . I upgraded to 4.3.8 and did
> NOT restart all of my VMs (30+)
>
> but only some VMs, no problem. Eventually I noticed that
> LibgfApiSupported=False  and reset it to True.
>
> The VM was Running WindowsServer2016 and we did a Cold-Reboot (not VM
> Restart). It did not come back online
> due to "Invalid Volume" which was eventually due to the IMAGEs (Boot and
> Data) being  user:group=root but
> not the meta,lease or directory. Nor any other VM have/had this problem
> but they were (if at all) completely
> stopped and restarted.
>
> I am looking for another VM that has not yet been restarted to test this
> theory. I thought this would be
> interesting for others looking into this problem.
>
> (I will ask my Colleague next week what he means with Cold-Reboot vs
> Warm-reboot)
>
> Stay Healthy.
>
> Cheers,
>
> Robert O'Kane
>
>
>
> --
> Robert O'Kane
> Systems Administrator
> Kunsthochschule für Medien Köln
> Peter-Welter-Platz 2
> 50676 Köln
>
> fon: +49(221)20189-223
> fax: +49(221)20189-49223
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> gluster-us...@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KA5VG2TB2ENZ53BBLZGDJQTZWYIBRMBI/


[ovirt-users] Re: change connection string in db

2019-10-14 Thread olaf . buitelaar
Dear Strahil,

Thanks that was it, din't know about the mnt_options, will add those as well.

Best Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJGVJZABNEBNHYCFSJZ5HMASXEPBILBY/


[ovirt-users] Re: change connection string in db

2019-10-01 Thread olaf . buitelaar
Never mind again; for those looking for the same thing, it's done via:
hosted-engine --set-shared-config storage 10.201.0.1:/ovirt-engine --type=he_shared
hosted-engine --set-shared-config storage 10.201.0.1:/ovirt-engine --type=he_local
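To verify the change took effect, the matching getter can be used (a sketch; I'm
assuming the same key and type names as the setters above):

hosted-engine --get-shared-config storage --type=he_shared
hosted-engine --get-shared-config storage --type=he_local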
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSMQWNZ4UGCWY2FVBMSSNXTNLR5KP5P2/


[ovirt-users] Re: change connection string in db

2019-09-30 Thread olaf . buitelaar
Dear oVirt users,

One thing I still cannot find out is where the engine gathers the storage= 
value in /etc/ovirt-hosted-engine/hosted-engine.conf from.
I suppose it's somewhere in an answers file, but I cannot find it.
Any pointers are appreciated; hopefully this is the last place where the old 
connection string lives.
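For anyone searching along, a crude way to hunt for remaining copies of the old
connection string would be something like this (a sketch; <old-server> is a
placeholder, and the paths are just the first places I'd look, not an exhaustive list):

grep -r '<old-server>' /etc/ovirt-hosted-engine* /var/lib/ovirt-hosted-engine* 2>/dev/null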

Thanks Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GPPKRN5YLUMEJR5JV2IXFFXN73VZFGO3/


[ovirt-users] Re: change connection string in db

2019-09-29 Thread olaf . buitelaar
Dear oVirt users,

Sorry for having bothered you; it appeared the transaction in the database 
somehow wasn't committed correctly.
After ensuring it was, the mount points updated.

Best Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NN4I4YMX6GNCCOJ43ATWZEVTHAEPRPJT/


[ovirt-users] change connection string in db

2019-09-28 Thread olaf . buitelaar
Dear oVirt users,

I'm currently migrating our Gluster setup, so I've done a gluster replace-brick 
to the new machines.
Now I'm trying to update the connection strings of the related storage domains, 
including the one hosting the ovirt-engine (which I believe cannot be brought 
down for maintenance). At the same time I'm trying to disable the "Use managed 
gluster volume" feature.
I had tested this in a lab setup, but somehow I'm running into issues on the 
actual setup.

On the lab setup it was enough to run a query like this:
UPDATE public.storage_server_connections
SET "connection"='10.201.0.6:/ovirt-kube',
    gluster_volume_id=NULL,
    mount_options='backup-volfile-servers=10.201.0.1:10.201.0.2:10.201.0.3:10.201.0.5:10.201.0.4:10.201.0.7:10.201.0.8:10.201.0.9'
WHERE id='29aae3ce-61e4-4fcd-a8f2-ab0a0c07fa48';
On the live setup I also seem to need a query like this:
UPDATE public.gluster_volumes
SET task_id=NULL
WHERE id='9a552d7a-8a0d-4bae-b5a2-1cb8a7edf5c9';
I couldn't really find what this task_id relates to, but it does cause the 
checkbox for "Use managed gluster volume" to be unchecked in the web interface.

In the lab setup it was enough to run, within the hosted engine:
- service ovirt-engine restart
and then bring an oVirt host machine into maintenance and activate it again, 
after which the changed connection string was mounted in the 
/rhev/data-center/mnt/glusterSD/ directory.
Also, the VMs, after being shut down and brought up again, started using the new 
connection string.

But now, on the production instance, when I restart the engine the connection 
string is restored to the original values in the storage_server_connections 
table. I don't really understand where the engine gathers this information from.
Any advice on how to actually change the connection strings would be highly 
appreciated.
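For reference, this is roughly how the current values can be checked on the engine
VM (a sketch, assuming the default database name 'engine'; on recent versions psql
may need to be run from a software collection, e.g. scl enable rh-postgresql10 -- psql):

sudo -u postgres psql engine -c \
  "SELECT id, connection, mount_options FROM storage_server_connections;"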

Thanks Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QX7RFHBYWRFSDTUPMHN5RZCNH6A4RPX6/


[ovirt-users] Re: HostedEngine cleaned up

2019-05-10 Thread olaf . buitelaar
Hi Dimitry,

Sorry for not being clearer; I missed the part where the ls was from the 
underlying brick. Then I clearly have a different issue.
Best Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TEOTYIREXKDLU63CGYSCDWLCRECWJBGO/


[ovirt-users] Re: HostedEngine cleaned up

2019-05-09 Thread olaf . buitelaar
This listing is from a Gluster mount, not from the underlying brick, so it 
should combine all parts from the underlying .glusterfs folder. I believe that 
when you use features.shard, the files should be broken up into pieces 
according to the shard size.
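For completeness, the configured shard size can be read from the volume options, and
the individual pieces live under the hidden .shard directory on the bricks, named
<gfid>.1, <gfid>.2, and so on (a sketch; the volume name and brick path are placeholders):

# shard size configured on the volume
gluster volume get <volname> features.shard-block-size
# on a brick, the shards of a file show up under .shard rather than next to the file
ls -lh /data/gfs/bricks/brick1/<volname>/.shard | head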

Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JIEAGLEN67EGQTKGPHA6MKV23EUIABBG/


[ovirt-users] Re: HostedEngine cleaned up

2019-05-09 Thread olaf . buitelaar
It looks like I've got the exact same issue:
drwxr-xr-x.  2 vdsm kvm 4.0K Mar 29 16:01 .
drwxr-xr-x. 22 vdsm kvm 4.0K Mar 29 18:34 ..
-rw-rw.  1 vdsm kvm  64M Feb  4 01:32 44781cef-173a-4d84-88c5-18f7310037b4
-rw-rw.  1 vdsm kvm 1.0M Oct 16  2018 44781cef-173a-4d84-88c5-18f7310037b4.lease
-rw-r--r--.  1 vdsm kvm  311 Mar 29 16:00 44781cef-173a-4d84-88c5-18f7310037b4.meta
Within the meta file the image is marked legal and reports a size of 
SIZE=41943040; interestingly, the format is marked RAW, while it was a thinly 
provisioned volume.
My suspicion is that something went wrong while the volume was being 
live-migrated, and somehow the merging of the images broke the volume.
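What can help to judge the damage (a sketch, run against the volume file on the
Gluster mount while the VM is down, assuming qemu-img is installed on the host):

# report the format/size qemu actually sees, versus what the .meta file claims
qemu-img info 44781cef-173a-4d84-88c5-18f7310037b4
# consistency check (only meaningful for qcow2; raw images have no metadata to check)
qemu-img check 44781cef-173a-4d84-88c5-18f7310037b4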
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDPJ2R5XUFNZR4NO625HZCKXD2V3HE6N/


[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-05-02 Thread olaf . buitelaar
Sorry, it appears the messages about "Get Host Statistics failed: Internal 
JSON-RPC error: {'reason': '[Errno 19] veth18ae509 is not present in the 
system'}" aren't gone; they just happen much less frequently.

Best Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2EIX6GDT5DXUTARXYXYUH2OV6N55XUJ7/


[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-04-29 Thread olaf . buitelaar
Dear Mohit,

I've upgraded to Gluster 5.6; however, the starting of multiple glusterfsd 
processes per brick doesn't seem to be fully resolved yet, although it does 
seem to happen less than before. Also, in some cases glusterd did seem to detect 
that a glusterfsd was running, but decided it was not valid. It was reproducible on 
all my machines after a reboot, but only a few bricks seemed to be affected. 
I'm running about 14 bricks per machine, and only 1-3 were affected. The ones 
with 3 full bricks seemed to suffer the most. Also, in some cases a restart of the 
glusterd service spawned multiple glusterfsd processes for the same bricks 
configured on the node.

See for example logs;
[2019-04-19 17:49:50.853099] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 17:50:33.302239] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 17:56:11.287692] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 17:57:12.699967] I [glusterd-utils.c:6184:glusterd_brick_start] 
0-management: Either pid 14884 is not running or brick path 
/data/gfs/bricks/brick1/ovirt-core is not consumed so cleanup pidfile
[2019-04-19 17:57:12.700150] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 18:02:58.420870] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 18:03:29.420891] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 18:48:14.046029] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-19 18:55:04.508606] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core

or

[2019-04-18 17:00:00.665476] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:00:32.799529] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:02:38.271880] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:08:32.867046] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:09:00.440336] I [glusterd-utils.c:6184:glusterd_brick_start] 
0-management: Either pid 9278 is not running or brick path 
/data/gfs/bricks/brick1/ovirt-core is not consumed so cleanup pidfile
[2019-04-18 17:09:00.440476] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:09:07.644070] I [glusterd-utils.c:6184:glusterd_brick_start] 
0-management: Either pid 24126 is not running or brick path 
/data/gfs/bricks/brick1/ovirt-core is not consumed so cleanup pidfile
[2019-04-18 17:09:07.644184] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:09:13.785798] I [glusterd-utils.c:6184:glusterd_brick_start] 
0-management: Either pid 27197 is not running or brick path 
/data/gfs/bricks/brick1/ovirt-core is not consumed so cleanup pidfile
[2019-04-18 17:09:13.785918] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:09:24.344561] I [glusterd-utils.c:6184:glusterd_brick_start] 
0-management: Either pid 28468 is not running or brick path 
/data/gfs/bricks/brick1/ovirt-core is not consumed so cleanup pidfile
[2019-04-18 17:09:24.344675] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 17:37:07.150799] I [glusterd-utils.c:6214:glusterd_brick_start] 
0-management: discovered already-running brick 
/data/gfs/bricks/brick1/ovirt-core
[2019-04-18 18:17:23.203719] I [glusterd-utils.c:6301:glusterd_brick_start] 
0-management: starting a fresh brick process for brick 
/data/gfs/bricks/brick1/ovirt-core

Again, the procedure to resolve this was to kill all the glusterfsd processes 
for the brick and do a gluster v <volname> start force, which resulted in only one 
process being started.
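In shell terms the workaround looks roughly like this (a sketch; the volume name and
brick path are placeholders):

# find the duplicate glusterfsd processes serving the same brick
pgrep -af glusterfsd | grep /data/gfs/bricks/brick1/<volname>
# kill them all, then let glusterd start a single clean instance
pkill -f 'glusterfsd.*/data/gfs/bricks/brick1/<volname>'
gluster volume start <volname> force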

After the upgrade to 5.6 I do notice a small performance improvement of around 
15%, but it's still far from 3.12.15. I 

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-04-03 Thread Olaf Buitelaar
Dear  Mohit,

Thanks for backporting this issue. Hopefully we can address the others as
well; if I can do anything, let me know.
On my side I've tested with "gluster volume reset <volname> cluster.choose-local",
but haven't really noticed a change in performance.
On the good side, the brick processes didn't crash while updating this
config.
I'll experiment with the other changes as well, and see how the
combinations affect performance.
I also saw this commit: https://review.gluster.org/#/c/glusterfs/+/21333/
which looks very useful; will this be a recommended option for VM/block
workloads?
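For what it's worth, this is roughly how I compare the settings between runs
(a sketch; the volume name is a placeholder):

# current values of the options under discussion
gluster volume get <volname> network.remote-dio
gluster volume get <volname> cluster.choose-local
# capture a profile for a workload run
gluster volume profile <volname> start
# ... run the workload ...
gluster volume profile <volname> info > profile_run.txt
gluster volume profile <volname> stop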

Thanks Olaf


Op wo 3 apr. 2019 om 17:56 schreef Mohit Agrawal :

>
> Hi,
>
> Thanks Olaf for sharing the relevant logs.
>
> @Atin,
> You are right patch https://review.gluster.org/#/c/glusterfs/+/22344/
> will resolve the issue running multiple brick instance for same brick.
>
> As we can see in below logs glusterd is trying to start the same brick
> instance twice at the same time
>
> [2019-04-01 10:23:21.752401] I
> [glusterd-utils.c:6301:glusterd_brick_start] 0-management: starting a fresh
> brick process for brick /data/gfs/bricks/brick1/ovirt-engine
> [2019-04-01 10:23:30.348091] I
> [glusterd-utils.c:6301:glusterd_brick_start] 0-management: starting a fresh
> brick process for brick /data/gfs/bricks/brick1/ovirt-engine
> [2019-04-01 10:24:13.353396] I
> [glusterd-utils.c:6301:glusterd_brick_start] 0-management: starting a fresh
> brick process for brick /data/gfs/bricks/brick1/ovirt-engine
> [2019-04-01 10:24:24.253764] I
> [glusterd-utils.c:6301:glusterd_brick_start] 0-management: starting a fresh
> brick process for brick /data/gfs/bricks/brick1/ovirt-engine
>
> We are seeing below message between starting of two instances
> The message "E [MSGID: 101012] [common-utils.c:4075:gf_is_service_running]
> 0-: Unable to read pidfile:
> /var/run/gluster/vols/ovirt-engine/10.32.9.5-data-gfs-bricks-brick1-ovirt-engine.pid"
> repeated 2 times between [2019-04-01 10:23:21.748492] and [2019-04-01
> 10:23:21.752432]
>
> I will backport the same.
> Thanks,
> Mohit Agrawal
>
> On Wed, Apr 3, 2019 at 3:58 PM Olaf Buitelaar 
> wrote:
>
>> Dear Mohit,
>>
>> Sorry i thought Krutika was referring to the ovirt-kube brick logs. due
>> the large size (18MB compressed), i've placed the files here;
>> https://edgecastcdn.net/0004FA/files/bricklogs.tar.bz2
>> Also i see i've attached the wrong files, i intended to
>> attach profile_data4.txt | profile_data3.txt
>> Sorry for the confusion.
>>
>> Thanks Olaf
>>
>> Op wo 3 apr. 2019 om 04:56 schreef Mohit Agrawal :
>>
>>> Hi Olaf,
>>>
>>>   As per current attached "multi-glusterfsd-vol3.txt |
>>> multi-glusterfsd-vol4.txt" it is showing multiple processes are running
>>>   for "ovirt-core ovirt-engine" brick names but there are no logs
>>> available in bricklogs.zip specific to this bricks, bricklogs.zip
>>>   has a dump of ovirt-kube logs only
>>>
>>>   Kindly share brick logs specific to the bricks "ovirt-core
>>> ovirt-engine" and share glusterd logs also.
>>>
>>> Regards
>>> Mohit Agrawal
>>>
>>> On Tue, Apr 2, 2019 at 9:18 PM Olaf Buitelaar 
>>> wrote:
>>>
>>>> Dear Krutika,
>>>>
>>>> 1.
>>>> I've changed the volume settings, write performance seems to increased
>>>> somewhat, however the profile doesn't really support that since latencies
>>>> increased. However read performance has diminished, which does seem to be
>>>> supported by the profile runs (attached).
>>>> Also the IO does seem to behave more consistent than before.
>>>> I don't really understand the idea behind them, maybe you can explain
>>>> why these suggestions are good?
>>>> These settings seems to avoid as much local caching and access as
>>>> possible and push everything to the gluster processes. While i would expect
>>>> local access and local caches are a good thing, since it would lead to
>>>> having less network access or disk access.
>>>> I tried to investigate these settings a bit more, and this is what i
>>>> understood of them;
>>>> - network.remote-dio; when on it seems to ignore the O_DIRECT flag in
>>>> the client, thus causing the files to be cached and buffered in the page
>>>> cache on the client, i would expect this to be a good thing especially if
>>>> the server process would access the same page cache?
>>>> At least that is what grasp from this commit;
>>>> htt

[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-28 Thread olaf . buitelaar
Forgot one more issue with oVirt: on some hypervisor nodes we also run Docker, 
and it appears vdsm tries to get hold of the interfaces Docker creates/removes, 
which is spamming the vdsm and engine logs with:
Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19] 
veth7611c53 is not present in the system'}
I couldn't really find a way to let vdsm ignore those interfaces.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KTAVSJKLVHF7EVPKAJFXPRAJPL6Z5KYZ/


[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-28 Thread olaf . buitelaar
Dear All,

I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While previous 
upgrades from 4.1 to 4.2 etc. went rather smoothly, this one was a different 
experience. After first trying a test upgrade on a 3-node setup, which went fine, 
I headed into upgrading the 9-node production platform, unaware of the backward 
compatibility issues between gluster 3.12.15 -> 5.3.

After upgrading 2 nodes, the HA engine stopped and wouldn't start. Vdsm wasn't 
able to mount the engine storage domain, since /dom_md/metadata was missing or 
couldn't be accessed. I restored this file by getting a good copy from the 
underlying bricks, removing the file from the underlying bricks where it was 
0 bytes and marked with the sticky bit (along with the corresponding gfid entries), 
removing the file from the mount point, and copying the good file back onto the 
mount point. After manually mounting the engine domain, manually creating the 
corresponding symbolic links in /rhev/data-center and /var/run/vdsm/storage, and 
fixing the ownership back to vdsm.kvm (which was root.root), I was able to start 
the HA engine again.

Since the engine was up again, and things seemed rather unstable, I decided to 
continue the upgrade on the other nodes: suspecting an incompatibility in gluster 
versions, I thought it would be best to have them all on the same version rather 
soonish. However, things went from bad to worse; the engine stopped again, and all 
VMs stopped working as well. So on a machine outside the setup I restored a backup 
of the engine taken from version 4.2.8 just before the upgrade. With this engine I 
was at least able to start some VMs again and finalize the upgrade. Once upgraded, 
things didn't stabilize, and we also lost 2 VMs during the process due to image 
corruption.

After figuring out gluster 5.3 had quite some issues, I was lucky to see gluster 
5.5 was about to be released; the moment the RPMs were available I installed them. 
This helped a lot in terms of stability, for which I'm very grateful! However, the 
performance is unfortunately terrible: it's about 15% of what it was running 
gluster 3.12.15. It's strange, since a simple dd shows OK performance, but our 
actual workload doesn't, while I would expect the performance to be better due to 
all the improvements made since gluster version 3.12. Does anybody share the same 
experience?

I really hope gluster 6 will soon be tested with oVirt and released, and things 
start to perform and stabilize again... like the good old days. Of course, if I 
can do anything, I'm happy to help.
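Coming back to the dd remark above: a streaming dd is not very representative of VM 
I/O; something like small fsync'd random writes may come closer (a sketch; the 
target path is a placeholder and fio is assumed to be installed):

# streaming write, which looks fine on our setup
dd if=/dev/zero of=/mnt/<volname>/ddtest bs=1M count=1024 oflag=direct
# small fsync'd random writes, likely closer to what the VMs actually do
fio --name=synctest --directory=/mnt/<volname> --rw=randwrite --bs=4k \
    --size=512M --ioengine=libaio --direct=1 --fsync=1 --runtime=60 --time_based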

I think this is the short list of issues we have after the migration:
Gluster 5.5:
-   Poor performance for our workload (mostly write-dependent)
-   VMs randomly pause on unknown storage errors, which are "stale 
file handles". Corresponding log: Lookup on shard 797 failed. Base file gfid = 
8a27b91a-ff02-42dc-bd4c-caa019424de8 [Stale file handle]
-   Some files are listed twice in a directory (probably related to the stale 
file issue?)
Example:
ls -la /rhev/data-center/59cd53a9-0003-02d7-00eb-01e3/313f5d25-76af-4ecd-9a20-82a2fe815a3c/images/4add6751-3731-4bbd-ae94-aaeed12ea450/
total 3081
drwxr-x---.  2 vdsm kvm    4096 Mar 18 11:34 .
drwxr-xr-x. 13 vdsm kvm    4096 Mar 19 09:42 ..
-rw-rw.  1 vdsm kvm 1048576 Mar 28 12:55 1a7cf259-6b29-421d-9688-b25dfaafb13c
-rw-rw.  1 vdsm kvm 1048576 Mar 28 12:55 1a7cf259-6b29-421d-9688-b25dfaafb13c
-rw-rw.  1 vdsm kvm 1048576 Jan 27  2018 1a7cf259-6b29-421d-9688-b25dfaafb13c.lease
-rw-r--r--.  1 vdsm kvm     290 Jan 27  2018 1a7cf259-6b29-421d-9688-b25dfaafb13c.meta
-rw-r--r--.  1 vdsm kvm     290 Jan 27  2018 1a7cf259-6b29-421d-9688-b25dfaafb13c.meta

-   Brick processes sometimes start multiple times. Sometimes I have 5 brick 
processes for a single volume. Killing all glusterfsd's for the volume on the 
machine and running gluster v <volname> start force usually just starts one after 
the event; from then on things look all right.

oVirt 4.3.2.1-1.el7:
-   All VM image ownerships are changed to root.root after the VM is 
shut down, probably related to 
https://bugzilla.redhat.com/show_bug.cgi?id=1666795 but not only scoped to the 
HA engine. I'm still in compatibility mode 4.2 for the cluster and for the 
VMs, but upgraded to oVirt version 4.3.2.
-   The network provider is set to OVN, which is fine... actually cool, only 
the "ovs-vswitchd" is a CPU hog and utilizes 100%.
-   It seems on all nodes vdsm tries to get the stats for the HA 
engine, which is filling the logs with (not sure if this is new):
[api.virt] FINISH getStats return={'status': {'message': "Virtual machine does 
not exist: {'vmId': u'20d69acd-edfd-4aeb-a2ae-49e9c121b7e9'}", 'code': 1}} 
from=::1,59290, vmId=20d69acd-edfd-4aeb-a2ae-49e9c121b7e9 (api:54)
-   It seems the package os-brick is missing: [root] managedvolume not supported: 
Managed Volume Not Supported. Missing package os-brick.: ('Cannot import 
os_brick',) (caps:149)  which 
[ovirt-users] Re: [Gluster-users] VMs paused - unknown storage error - Stale file handle - distribute 2 - replica 3 volume with sharding

2019-01-17 Thread olaf . buitelaar
Hi Marco,

It looks like I'm suffering from the same issue; see 
https://lists.gluster.org/pipermail/gluster-users/2019-January/035602.html
I've included a simple GitHub gist there, which you can run on the machines 
with the stale shards.
However, I haven't tested the full purge; it works well on individual 
files/shards.

Best Olaf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HIRA3BJQPWLEPCX2YL4POVZA4FJLWQGO/