[ovirt-users] list-view instead of tiled-view in oVirt VM Portal?

2022-05-12 Thread Goorkate, B.J.
Hi all,

Is it possible to have a list view of VMs in the oVirt VM Portal instead of the
tiled view?
(I can't find it, but I may have overlooked it.)

Thanks!

Regards,

Bertjan




[ovirt-users] Re: Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3

2020-02-20 Thread Goorkate, B.J.
Hi,

Sorry for the delayed response...

After we migrated the gluster bricks to the already-upgraded nodes, everything was
fine and the cluster seems healthy again.
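
(For anyone finding this thread in the archives later: the brick migration was
done with gluster's replace-brick, roughly as sketched below. The hostnames and
brick paths are placeholders, not our real ones - check 'gluster volume info'
for the actual bricks first.)

  # move a brick from the old node to a spare disk on an already-upgraded node
  gluster volume replace-brick vmstore1 \
      oldnode:/gluster/brick1/vmstore1 \
      newnode:/gluster/brick1/vmstore1 \
      commit force

  # then watch the resulting heal catch up
  gluster volume heal vmstore1 info summary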

Thanks for the help!

Regards,

Bertjan


On Fri, Feb 07, 2020 at 03:56:35PM +0200, Strahil Nikolov wrote:
> On February 7, 2020 11:28:17 AM GMT+02:00, "Goorkate, B.J." 
>  wrote:
> >Hi,
> >
> >There is just one unsynced entry which comes back every time: the
> >dom_md/ids file.
> >
> >When I add/delete a VM it's gone for a short while, but then it
> >reappears.
> >
> >I did a brick replace to a node which is already upgraded. After that
> >I'll do the same with the brick on the remaining not-yet-upgraded node,
> >just to be sure. Luckily we have the hardware for it.
> >
> >We're migrating from 4.2.8 (I think) to 4.3.7.2.
> >
> >Regards,
> >
> >Bertjan
> >
> >
> >On Thu, Feb 06, 2020 at 12:43:48PM +0200, Strahil Nikolov wrote:
> >> On February 6, 2020 9:42:13 AM GMT+02:00, "Goorkate, B.J."
> > wrote:
> >> >Hi Strahil,
> >> >
> >> >The 0-xlator message still occurs, but not frequently. Yesterday a couple
> >> >of times, but on the 4th there were no entries at all.
> >> >
> >> >What I did find out, was that the unsynced entries belonged to
> >> >VM-images which were on specific hosts. 
> >> >
> >> >Yesterday I migrated them all to other hosts and the unsynced
> >entries
> >> >were gone except for 3. After a 'stat' of those files/directories,
> >they
> >> >were gone too.
> >> >
> >> >I think I can migrate the remaining hosts now. An option would be to
> >> >move the bricks of the not-yet-upgraded hosts to upgraded hosts. I
> >have
> >> >spare disks. What do you think?
> >> >
> >> >Regards,
> >> >
> >> >Bertjan
> >> >
> >> >
> >> >
> >> >
> >> >On Tue, Feb 04, 2020 at 06:40:21PM +, Strahil Nikolov wrote:
> >> >>Did you manage to fix the issue with the 0-xlator ? If yes,
> >most
> >> >probably
> >> >>the issue will be OK. Yet 'probably' doesn't mean that they
> >> >>'will' be OK.
> >> >>If it was my lab - I would go ahead only if the 0-xlator issue
> >is
> >> >>over.Yet, a lab is a different thing than prod - so it is your
> >> >sole
> >> >>decision. 
> >> >>Did you test the upgrade prior to moving to Prod?
> >> >>About the hooks - I had such issue before and I had to
> >reinstall
> >> >the
> >> >>gluster rpms to solve it.
> >> >>Best Regards,
> >> >>Strahil Nikolov
> >> >>В вторник, 4 февруари 2020 г., 16:35:27 ч. Гринуич+2, Goorkate,
> >> >B.J.
> >> >> написа:
> >> >>Hi Strahil,
> >> >> 
> >> >>Thanks for your time so far!
> >> >> 
> >> >>The packages seem fine on all of the 3 nodes.
> >> >>Only /var/lib/glusterd/glusterd.info is modified and on the not
> >> >yet
> >> >>upgraded nodes these files are missing:
> >> >> 
> >> >>missing    /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
> >> >>missing    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
> >> >>missing    /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
> >> >>missing    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
> >> >>missing    /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
> >> >> 
> >> >>But that doesn't seem relevant...
> >> >> 
> >> >>I stopped a couple of virtual machines with image files with unsynced
> >> >>entries. When they were turned off, I couldn't find them anymore in the
> >> >>unsynced entries list. When I turned them on again, some re-appeared,
> >> >>some didn't.
> >> >> 
> >> >>I really don't know where to look next.
> >> >> 
> >> >>The big question is: will the problems be resolved when I upgrade
> >> >>the two remaining nodes, or will it get worse?

[ovirt-users] Re: Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3

2020-02-07 Thread Goorkate, B.J.
Hi,

There is just one unsynced entry which comes back every time: the dom_md/ids 
file.

When I add/delete a VM it's gone for a short while, but then it reappears.
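
(For the archives, this is roughly how I have been checking that entry - the
brick path and domain UUID below are placeholders:)

  # list the entries that are pending heal
  gluster volume heal vmstore1 info

  # on each node, compare the AFR changelog xattrs of the file on the brick itself
  getfattr -d -m . -e hex /gluster/brick1/vmstore1/<domain-uuid>/dom_md/ids

As far as I understand, dom_md/ids is the sanlock lease file and is written to
constantly, so it showing up transiently in the heal info output may be harmless
as long as the changelog xattrs end up clean on all bricks.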

I did a brick replace to a node which is already upgraded. After that I'll do 
the same with the brick on the remaining not-yet-upgraded node, just to be 
sure. Luckily we have the hardware for it.

We're migrating from 4.2.8 (I think) to 4.3.7.2.

Regards,

Bertjan


On Thu, Feb 06, 2020 at 12:43:48PM +0200, Strahil Nikolov wrote:
> On February 6, 2020 9:42:13 AM GMT+02:00, "Goorkate, B.J." 
>  wrote:
> >Hi Strahil,
> >
> >The 0-xlator message still occurs, but not frequently. Yesterday a couple
> >of times, but on the 4th there were no entries at all.
> >
> >What I did find out, was that the unsynced entries belonged to
> >VM-images which were on specific hosts. 
> >
> >Yesterday I migrated them all to other hosts and the unsynced entries
> >were gone except for 3. After a 'stat' of those files/directories, they
> >were gone too.
> >
> >I think I can migrate the remaining hosts now. An option would be to
> >move the bricks of the not-yet-upgraded hosts to upgraded hosts. I have
> >spare disks. What do you think?
> >
> >Regards,
> >
> >Bertjan
> >
> >
> >
> >
> >On Tue, Feb 04, 2020 at 06:40:21PM +, Strahil Nikolov wrote:
> >>Did you manage to fix the issue with the 0-xlator ? If yes, most
> >probably
> >>the issue will be OK. Yet 'probably' doesn't mean that they
> >>'will' be OK.
> >>If it was my lab - I would go ahead only if the 0-xlator issue is
> >>over.Yet, a lab is a different thing than prod - so it is your
> >sole
> >>decision. 
> >>Did you test the upgrade prior to moving to Prod?
> >>About the hooks - I had such issue before and I had to reinstall
> >the
> >>gluster rpms to solve it.
> >>Best Regards,
> >>Strahil Nikolov
> >>В вторник, 4 февруари 2020 г., 16:35:27 ч. Гринуич+2, Goorkate,
> >B.J.
> >> написа:
> >>Hi Strahil,
> >> 
> >>Thanks for your time so far!
> >> 
> >>The packages seem fine on all of the 3 nodes.
> >>Only /var/lib/glusterd/glusterd.info is modified and on the not
> >yet
> >>upgraded nodes these files are missing:
> >> 
> >>missing    /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
> >>missing    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
> >>missing    /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
> >>missing    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
> >>missing    /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
> >> 
> >>But that doesn't seem relevant...
> >> 
> >>I stopped a couple of virtual machines with image files with unsynced
> >>entries. When they were turned off, I couldn't find them anymore in the
> >>unsynced entries list. When I turned them on again, some re-appeared,
> >>some didn't.
> >> 
> >>I really don't know where to look next.
> >> 
> >>The big question is: will the problems be resolved when I upgrade
> >the two
> >>remaining nodes, or will it get worse?
> >> 
> >>Regards,
> >> 
> >>Bertjan
> >> 
> >>On Sun, Feb 02, 2020 at 08:06:58PM +, Strahil Nikolov wrote:
> >>>It seems that something went "off".
> >>>That '0-xlator: dlsym(xlator_api) missing:
> >>>/usr/lib64/glusterfs/6.6/rpc-transport/socket.so: undefined
> >symbol:
> >>>xlator_api' is really worrisome.
> >>>I think that you might have a bad package and based on the
> >info it
> >>could
> >>>be glusterfs-libs which should provide
> >/usr/lib64/libgfrpc.so.0 . I'm
> >>>currently on gluster v7.0 and I can't check it on my
> >installation.
> >>>Run a for loop to check the rpms:
> >>>for i in $(rpm -qa | grep gluster) ; do echo "$i :" ; rpm -V $i;
> >>>echo; echo; done
> >>>Most probably you can safely redo (on the upgraded node) the
> >last yum
> >>>transaction:
> >>>yum history
> >>>yum history info <ID>  -> verify the gluster packages were installed here

[ovirt-users] Re: Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3

2020-02-05 Thread Goorkate, B.J.
Hi Strahil,

The 0-xlator message still occurs, but not frequently. Yesterday a couple of
times, but on the 4th there were no entries at all.

What I did find out, was that the unsynced entries belonged to VM-images which 
were on specific hosts. 

Yesterday I migrated them all to other hosts and the unsynced entries were gone 
except for 3. After a 'stat' of those files/directories, they were gone too.
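
(For clarity: Strahil's suggestion in the earlier mail was to run the stat
against the fuse mount that oVirt uses, i.e. under
/rhev/data-center/mnt/glusterSD, not against the bricks. A rough sketch with
placeholder paths:)

  # path as mounted by oVirt on the hosts (placeholder server/volume name)
  MNT=/rhev/data-center/mnt/glusterSD/node1:_vmstore1

  # stat a single image that shows up in 'gluster volume heal vmstore1 info'
  stat $MNT/<domain-uuid>/images/<image-uuid>/<disk-uuid>

  # or sweep the whole mount to trigger lookups everywhere
  find $MNT -exec stat {} \; > /dev/null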

I think I can migrate the remaining hosts now. An option would be to move the 
bricks of the not-yet-upgraded hosts to upgraded hosts. I have spare disks. 
What do you think?

Regards,

Bertjan




On Tue, Feb 04, 2020 at 06:40:21PM +, Strahil Nikolov wrote:
>Did you manage to fix the issue with the 0-xlator ? If yes, most probably
>the issue will be OK. Yet 'probably' doesn't mean that they 'will' be OK.
>If it was my lab - I would go ahead only if the 0-xlator issue is
>over.Yet, a lab is a different thing than prod - so it is your sole
>decision. 
>Did you test the upgrade prior to moving to Prod?
>About the hooks - I had such issue before and I had to reinstall the
>gluster rpms to solve it.
>Best Regards,
>Strahil Nikolov
>    В вторник, 4 февруари 2020 г., 16:35:27 ч. Гринуич+2, Goorkate, B.J.
> написа:
>Hi Strahil,
> 
>Thanks for your time so far!
> 
>The packages seem fine on all of the 3 nodes.
>Only /var/lib/glusterd/glusterd.info is modified and on the not yet
>upgraded nodes these files are missing:
> 
>missing    /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
>missing    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
>missing    /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
>missing    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
>missing    /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh
> 
>But that doesn't seem relevant...
> 
>I stopped a couple of virtual machines with image files with unsynced
>entries. When they were turned off, I couldn't find them anymore in the
>unsynced entries list. When I turned them on again, some re-appeared, some
>didn't.
> 
>I really don't know where to look next.
> 
>The big question is: will the problems be resolved when I upgrade the two
>remaining nodes, or will it get worse?
> 
>Regards,
> 
>Bertjan
> 
>On Sun, Feb 02, 2020 at 08:06:58PM +, Strahil Nikolov wrote:
>>It seems that something went "off".
>>That '0-xlator: dlsym(xlator_api) missing:
>>/usr/lib64/glusterfs/6.6/rpc-transport/socket.so: undefined symbol:
>>xlator_api' is really worrisome.
>>I think that you might have a bad package and based on the info it
>could
>>be glusterfs-libs which should provide /usr/lib64/libgfrpc.so.0 . I'm
>>currently on gluster v7.0 and I can't check it on my installation.
>>Run a for loop to check the rpms:
>>for i in $(rpm -qa | grep gluster) ; do echo "$i :" ; rpm -V $i;
>>echo; echo; done
>>Most probably you can safely redo (on the upgraded node) the last yum
>>transaction:
>>yum history
>>yum history info <ID>  -> verify the gluster packages were installed here
>>yum history redo <ID>
>>If kernel, glibc, systemd were not in this transaction , you can stop
>>gluster and start it again:
>>Node in maintenance in oVirt
>>systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd
>>systemctl stop sanlock
>>systemctl stop glusterd
>>/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>>And then power up again:
>>systemctl start glusterd
>>gluster volume status -> check all connected
>>systemctl start sanlock supervdsmd vdsmd
>>systemctl start ovirt-ha-broker ovirt-ha-agent
>>Check situation.
>>Yet, you need to make gluster stop complaining , before you can take
>care
>>of the heal. Usually 'rsync' is my best friend - but this is when
>gluster
>>is working normally - and your case is far from normal.
>>If redo doesn't work for you -> try the "yum history rollback" to
>recover
>>to last good state.
>>I think that 'BOOM Boot Manager' is best in such cases.
>>Note: Never take any of my words for granted. I'm not running oVirt
>in
>>production and some of my methods might not be OK for you

[ovirt-users] Re: Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3

2020-02-04 Thread Goorkate, B.J.
Hi Strahil,

Thanks for your time so far!

The packages seem fine on all 3 nodes.
Only /var/lib/glusterd/glusterd.info is modified, and on the not-yet-upgraded
nodes these files are missing:

missing /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
missing /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
missing /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
missing /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
missing /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh

But that doesn't seem relevant...

I stopped a couple of virtual machines with image files with unsynced entries.
When they were turned off, I couldn't find them anymore in the unsynced entries
list. When I turned them on again, some re-appeared, some didn't.

I really don't know where to look next.

The big question is: will the problems be resolved when I upgrade the two 
remaining nodes, or will it get worse? 

Regards,

Bertjan


On Sun, Feb 02, 2020 at 08:06:58PM +, Strahil Nikolov wrote:
>It seems that something went "off".
>That '0-xlator: dlsym(xlator_api) missing:
>/usr/lib64/glusterfs/6.6/rpc-transport/socket.so: undefined symbol:
>xlator_api' is really worrisome.
>I think that you might have a bad package and based on the info it could
>be glusterfs-libs which should provide /usr/lib64/libgfrpc.so.0 . I'm
>currently on gluster v7.0 and I can't check it on my installation.
>Run a for loop to check the rpms:
>for i in $(rpm -qa | grep gluster) ; do echo "$i :" ; rpm -V $i;
>echo; echo; done
>Most probably you can safely redo (on the upgraded node) the last yum
>transaction:
>yum history 
>yum history info <ID>  -> verify the gluster packages were installed here
>yum history redo <ID>
>If kernel, glibc, systemd were not in this transaction , you can stop
>gluster and start it again:
>Node in maintenance in oVirt
>systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd 
>systemctl stop sanlock
>systemctl stop glusterd
>/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>And then power up again:
>systemctl start glusterd
>gluster volume status -> check all connected
>systemctl start sanlock supervdsmd vdsmd
>systemctl start ovirt-ha-broker ovirt-ha-agent
>Check situation.
>Yet, you need to make gluster stop complaining, before you can take care
>of the heal. Usually 'rsync' is my best friend - but this is when gluster
>is working normally - and your case is far from normal.
>If redo doesn't work for you -> try the "yum history rollback" to recover
>to last good state.
>I think that 'BOOM Boot Manager' is best in such cases.
>Note: Never take any of my words for granted. I'm not running oVirt in
>production and some of my methods might not be OK for your environment.
>Best Regards,
>Strahil Nikolov
>В неделя, 2 февруари 2020 г., 08:56:01 ч. Гринуич+2, Mark Lamers
> написа:
> 
>Hi Strahil,
> 
>Bertjan is not in the office today, so I will reply if okay with you.
> 
>First I would like to describe the status of our network.
> 
>There are three bricks:
> 
>and 5 nodes:
> 
>Every host has a management and a migration VLAN interface on a different
>bond interface.
> 
>The last octet of the IP address is similar.
> 
>The output from 'gluster volume heal <volname> info' gives a long list of
>shards from all three nodes; the file is attached as
>'gluster_volume_heal_info.txt'. The node without shards in the list is the
>node already updated to gluster 6 / oVirt 4.3. That is curious, I think.
> 
>Furthermore, I find the following errors ('E') in the glusterd.log of the
>upgraded host:
> 
>[2020-01-26 17:40:46.147730] E [MSGID: 106118]
>[glusterd-syncop.c:1896:gd_sync_task_begin] 0-management: Unable to
>acquire lock for vmstore1
>[2020-01-26 22:47:16.655651] E [MSGID: 106118]
>[glusterd-syncop.c:1896:gd_sync_task_begin] 0-management: Unable to
>acquire lock for vmstore1
>[2020-01-27 07:07:51.815490] E [MSGID: 106118]
>[glusterd-syncop.c:1896:gd_sync_task_begin] 0-management: Unable to
>acquire lock for vmstore1
>[2020-01-27 18:28:14.953974] E [MSGID: 106118]
>[glusterd-syncop.c:1896:gd_sync_task_begin] 0-management: Unable to
>acquire lock for vmstore1
>[2020-01-27 18:58:22.629457] E [MSGID: 106118]
>[glusterd-op-sm.c:4133:glusterd_op_ac_lock] 0-management: Unable to
>acquire lock for vmstore1
>[2020-01-27 18:58:22.629595] E [MSGID: 106376]
>[glusterd-op-sm.c:8150:glusterd_op_sm] 0-management: handler returned: -1
>[2020-01-27 18:58:22.756430] E [MSGID: 106117]
>[glusterd-op-sm.c:4189:glusterd_op_ac_unlock] 0-management: Unable to
>release lock for vmstore1
>[2020-01-27 18:58:22.756581] E [MSGID: 106376]
>[glusterd-op-sm.c:8150:glusterd_op_sm] 0-management: handler returned: -1
>[2020-01-28 05:31:52.427196] E [MSG

[ovirt-users] Re: Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3

2020-01-30 Thread Goorkate, B.J.
Hi,

Thanks for the info! 

I tried the full heal and the stat, but the unsynced entries still remain. 

Just to be sure: the find/stat command needs to be done on files in the 
fuse-mount, right?
Or on the brick-mount itself?

And other than 'gluster volume heal vmstore1 statistics', I cannot find a way
to verify that the full heal really started, let alone whether it finished
correctly...
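
(For reference, these are the commands I have been looking at so far - listed
here as a sketch in case someone can confirm which of them is authoritative:)

  # per-brick count of entries still pending heal
  gluster volume heal vmstore1 statistics heal-count

  # summary view, including split-brain counts (newer gluster releases)
  gluster volume heal vmstore1 info summary

  # crawl statistics per brick, with start/end times of the heal crawls
  gluster volume heal vmstore1 statistics

  # the self-heal daemon log should also record when crawls start and finish
  tail -f /var/log/glusterfs/glustershd.log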

Regards,

Bertjan

On Mon, Jan 27, 2020 at 08:11:14PM +0200, Strahil Nikolov wrote:
> On January 27, 2020 4:17:26 PM GMT+02:00, "Goorkate, B.J." 
>  wrote:
> >Hi all,
> >
> >I'm in the process of upgrading oVirt-nodes from 4.2 to 4.3. 
> >
> >After upgrading the first of 3 oVirt/gluster nodes, there are between
> >600-1200 unsynced entries for a week now on 1 upgraded node and one
> >not-yet-upgraded node. The third node (also not-yet-upgraded) says it's
> >OK (no unsynced entries).
> >
> >The cluster doesn't seem to be very busy, but somehow self-heal doesn't
> >complete.
> >
> >Is this because of different gluster versions across the nodes, and will
> >it resolve as soon as I have upgraded all nodes? Since it's our production
> >cluster, I don't want to take any risk...
> >
> >Does anybody recognise this problem? Of course I can provide more
> >information if necessary.
> >
> >Any hints on troubleshooting the unsynced entries are more than
> >welcome!
> >
> >Thanks in advance!
> >
> >Regards,
> >
> >Bertjan
> >
> 
> I don't want to scare you, but I don't think it's related to the different
> versions.
> 
> Have you tried the following:
> 1. Run 'gluster volume heal <volname> full'
> 2. Run a stat to force an update from the client side (wait for the full heal
> to finish):
> find /rhev/data-center/mnt/glusterSD -iname '*' -exec stat {} \;
> 
> Best Regards,
> Strahil Nikolov


[ovirt-users] Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3

2020-01-27 Thread Goorkate, B.J.
Hi all,

I'm in the process of upgrading oVirt-nodes from 4.2 to 4.3. 

After upgrading the first of 3 oVirt/gluster nodes, there are between 600-1200 
unsynced entries for a week now on one upgraded node and one not-yet-upgraded
node. The third node (also not-yet-upgraded) says it's OK (no unsynced entries).

The cluster doesn't seem to be very busy, but somehow self-heal doesn't 
complete.

Is this because of different gluster versions across the nodes, and will it
resolve as soon as I have upgraded all nodes? Since it's our production cluster, I
don't want to take any risks...

Does anybody recognise this problem? Of course I can provide more information 
if necessary.

Any hints on troubleshooting the unsynced entries are more than welcome!
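
For reference, what I have been looking at so far is the self-heal status from
the gluster CLI, roughly the commands below (volume name is a placeholder):

  # entries pending heal, per brick
  gluster volume heal <volname> info

  # check whether any of them are real split-brain cases
  gluster volume heal <volname> info split-brain

  # make sure the self-heal daemon is running on every node
  gluster volume status <volname> | grep -i self-heal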

Thanks in advance!

Regards,

Bertjan



Re: [ovirt-users] When creating a gluster brick in oVirt, what is the reason for having to fill in the RAID-parameters?

2017-03-30 Thread Goorkate, B.J.
Hi,

Thanks for confirming this!

Is there a preferred stripe size for gluster bricks? The default for RAID-10 
seems to be 256KB, but the Dell RAID-controller defaults to 64KB.
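
As I understand it, those RAID parameters are used to align the LVM physical
volume and the XFS filesystem to the RAID geometry, roughly like the sketch
below. The numbers are only an example (a hypothetical 12-disk RAID-10 with a
256KB stripe unit, i.e. 6 data disks), not a recommendation:

  # full stripe = stripe unit * number of data disks = 256KB * 6 = 1536KB
  pvcreate --dataalignment 1536k /dev/sdb
  vgcreate gluster_vg /dev/sdb
  lvcreate -l 100%FREE -n gluster_lv gluster_vg

  # XFS stripe unit (su) and stripe width (sw) matching the RAID layout
  mkfs.xfs -f -i size=512 -d su=256k,sw=6 /dev/gluster_vg/gluster_lv

So whichever stripe size we pick, the important part is probably that the same
value is entered in oVirt, so that this alignment math works out.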

Regards,

Bertjan

On Thu, Mar 30, 2017 at 04:30:31PM +0530, Sahina Bose wrote:
> 
> 
> On Wed, Mar 29, 2017 at 5:22 PM, Goorkate, B.J. 
> wrote:
> 
> Hi all,
> 
> When creating a gluster brick in oVirt, I have to fill in the parameters 
> of
> the
> RAID volume the brick is on (that's how I understand it anyway):
> RAID-type, number of disks and stripe size.
> 
> What is the reason for that? Is the gluster brick optimised for these
> parameters?
> I tried to find information about this, but no luck yet...
> 
> 
> Yes, the LVM and filesystem are optimised as per best practice guidelines 
> based
> on the information provided.
> There's no way currently to determine the RAID parameters given a device, and
> hence we're asking the user to provide these inputs.
>  
> 
> 
> Thanks in advance!
> 
> Regards,
> 
> Bertjan Goorkate
> 
> 
> 
> 


[ovirt-users] When creating a gluster brick in oVirt, what is the reason for having to fill in the RAID-parameters?

2017-03-29 Thread Goorkate, B.J.
Hi all,

When creating a gluster brick in oVirt, I have to fill in the parameters of the
RAID volume the brick is on (that's how I understand it anyway):
RAID-type, number of disks and stripe size.

What is the reason for that? Is the gluster brick optimised for these
parameters?
I tried to find information about this, but no luck yet...

Thanks in advance!

Regards,

Bertjan Goorkate





Re: [ovirt-users] Gluster storage expansion

2017-01-23 Thread Goorkate, B.J.
Hi,

Thanks for the answer!

Adding nodes in sets of 3 makes sense. The disadvantage is having multiple
storage domains when you do that. Or is it possible to combine them?
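
Reading the gluster docs, it looks like an existing replica-3 volume can also be
grown into a distributed-replicate volume by adding bricks in sets of three,
which would keep a single storage domain. A sketch (volume name, hostnames and
brick paths are placeholders) - please correct me if this is a bad idea for a VM
store:

  gluster volume add-brick vmstore1 replica 3 \
      node4:/gluster/brick1/vmstore1 \
      node5:/gluster/brick1/vmstore1 \
      node6:/gluster/brick1/vmstore1

  # then spread the existing data over the new bricks
  gluster volume rebalance vmstore1 start
  gluster volume rebalance vmstore1 status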

Regards,

Bertjan


On Fri, Jan 20, 2017 at 11:27:28AM +0530, knarra wrote:
> On 01/19/2017 09:15 PM, Goorkate, B.J. wrote:
> > Hi all,
> > 
> > I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
> > storage domain for the virtual
> > machines.
> > 
> > Is there a way to use storage in the nodes which are not members of the
> > replica-3 storage domain?
> > Or do I need another node and make a second replica-3 gluster storage 
> > domain?
> since you have 5 nodes in your cluster, you could add another node and make a
> replica-3 gluster storage domain out of these three nodes, which are not
> members of the already existing replica-3 storage domain.
> > 
> > In other words: I would like to expand the existing storage domain by 
> > adding more nodes, rather
> > than adding disks to the existing gluster nodes. Is that possible?
> > 
> > Thanks!
> > 
> > Regards,
> > 
> > Bertjan
> > 
> > 
> > 


[ovirt-users] Gluster storage expansion

2017-01-19 Thread Goorkate, B.J.
Hi all,

I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
storage domain for the virtual
machines.

Is there a way to use storage in the nodes which are not members of the replica-3
storage domain?
Or do I need another node and make a second replica-3 gluster storage domain?

In other words: I would like to expand the existing storage domain by adding 
more nodes, rather
than adding disks to the existing gluster nodes. Is that possible?

Thanks!

Regards,

Bertjan 





Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-28 Thread Goorkate, B.J.
Hi,

I currently have a couple of VMs with little disk I/O, so I will
put them on the 4th node. 

I can even use the 4th node to deploy a brick if one of the replica-3
nodes fails.

Thanks!

Regards,

Bertjan

On Wed, Sep 28, 2016 at 11:50:21AM +0530, Sahina Bose wrote:
> 
> 
> On Tue, Sep 27, 2016 at 8:59 PM, Goorkate, B.J. 
> wrote:
> 
> Hi Sahina,
> 
> First: sorry for my delayed response. I wasn't able to respond earlier.
> 
> I already planned on adding the 4th node as a gluster client, so thank you
> for
> confirming that this works.
> 
> Why I was in doubt is that certain VMs with a lot of storage I/O on the 
> 4th
> node have to
> replicate to 3 other hosts (the replica-3 gluster nodes) over the storage
> network, while
> a VM on 1 of the replica-3 gluster nodes only has to replicate to two 
> other
> nodes over
> the network, thus creating less network traffic.
> 
> Does this make sense?
> 
> And if it does: can that be an issue?
> 
> 
> IIUC, the 4th node that you add to the cluster is serving only compute and
> there is no storage (bricks) capacity added. In this case, yes, all reads and
> writes are over the network - this is like a standard oVirt deployment where
> storage is over the network (non hyper converged).
> While theoretically this looks like an issue, it may not be, as there are
> multiple factors affecting performance. You will need to measure the impact on
> guest performance when VMs run on this node and see if it is acceptable to
> you.
> One thing you could do is schedule VMs that do not have stringent perf
> requirements on the 4th node?
> 
> There are also improvements planned in upcoming releases of gluster which
> improve the I/O performance further (compound FOPS, libgfapi access), so
> whatever you see now should improve further.
> 
> 
> 
> Regards,
> 
> Bertjan
> 
> On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> >
> >
> > On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari 
> wrote:
> >
> > I'm struggling with the same problem (I say struggling because I'm
> still
> > having stability issues for what I consider a stable cluster) but 
> you
> can:
> > - create a replica 3 engine gluster volume
> > - create replica 2 data, iso and export volumes
> >
> >
> > What are the stability issues you're facing? Data volume if used as a
> data
> > storage domain should be a replica 3 volume as well.
> >
> >
> >
> > Deploy the hosted-engine on the first VM (with the engine volume)
> froom the
> > CLI, then log in Ovirt admin, enable gluster support, install *and
> deploy*
> > from the GUI host2 and host3 (where the engine bricks are) and then
> install
> > host4 without deploying. This should get you the 4 hosts online, but
> the
> > engine will run only on the first 3
> >
> >
> > Right. You can add the 4th node to the cluster, but not have any bricks
> on this
> > volume in which case VMs will be run on this node but will access data
> from the
> > other 3 nodes.
> >
> >
> >
> > 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. 
>  >:
> >
> > Dear all,
> >
> > I've tried to find a way to add a 4th oVirt-node to my existing
> > 3-node setup with replica-3 gluster storage, but found no usable
> > solution yet.
> >
> > From what I read, it's not wise to create a replica-4 gluster
> > storage, because of bandwidth overhead.
> >
> > Is there a safe way to do this and still have 4 equal oVirt
> nodes?
> >
> > Thanks in advance!
> >
> > Regards,
> >
> > Bertjan
> >
> > 

Re: [ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-27 Thread Goorkate, B.J.
Hi Sahina,

First: sorry for my delayed response. I wasn't able to respond earlier.

I already planned on adding the 4th node as a gluster client, so thank you for
confirming that this works.

Why I was in doubt is that certain VMs with a lot of storage I/O on the 4th 
node have to
replicate to 3 other hosts (the replica-3 gluster nodes) over the storage 
network, while
a VM on one of the replica-3 gluster nodes only has to replicate to two other 
nodes over
the network, thus creating less network traffic. 

Does this make sense?

And if it does: can that be an issue? 
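
A side note: since the 4th node has no local bricks, I assume the storage
domain's mount options should list the other servers as fall-backs, so the mount
does not depend on a single volfile server. Something like this in the storage
domain's Mount Options field (hostnames and volume name are placeholders):

  backup-volfile-servers=node2:node3

which, if I understand correctly, corresponds to a manual fuse mount of the form:

  mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/vmstore1 /mnt/test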

Regards,

Bertjan

On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> 
> 
> On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari  wrote:
> 
> I'm struggling with the same problem (I say struggling because I'm still
> having stability issues for what I consider a stable cluster) but you can:
> - create a replica 3 engine gluster volume
> - create replica 2 data, iso and export volumes
> 
> 
> What are the stability issues you're facing? Data volume if used as a data
> storage domain should be a replica 3 volume as well.
>  
> 
> 
> Deploy the hosted-engine on the first VM (with the engine volume) froom 
> the
> CLI, then log in Ovirt admin, enable gluster support, install *and deploy*
> from the GUI host2 and host3 (where the engine bricks are) and then 
> install
> host4 without deploying. This should get you the 4 hosts online, but the
> engine will run only on the first 3
> 
> 
> Right. You can add the 4th node to the cluster, but not have any bricks on 
> this
> volume in which case VMs will be run on this node but will access data from 
> the
> other 3 nodes.
>  
> 
> 
> 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. :
> 
> Dear all,
> 
> I've tried to find a way to add a 4th oVirt-node to my existing
> 3-node setup with replica-3 gluster storage, but found no usable
> solution yet.
> 
> From what I read, it's not wise to create a replica-4 gluster
> storage, because of bandwidth overhead.
> 
> Is there a safe way to do this and still have 4 equal oVirt nodes?
> 
> Thanks in advance!
> 
> Regards,
> 
> Bertjan
> 
> 
> 
> 
> 
> 
> --
> Davide Ferrari
> Senior Systems Engineer
>


[ovirt-users] 4-node oVirt with replica-3 gluster

2016-09-23 Thread Goorkate, B.J.
Dear all,

I've tried to find a way to add a 4th oVirt-node to my existing 
3-node setup with replica-3 gluster storage, but found no usable
solution yet.

From what I read, it's not wise to create a replica-4 gluster
storage, because of bandwidth overhead.

Is there a safe way to do this and still have 4 equal oVirt nodes?

Thanks in advance!

Regards,

Bertjan



Re: [ovirt-users] Changing the gluster mount point in ovirt 3.6

2016-09-01 Thread Goorkate, B.J.
Hi,

Thanks Edward Clay and Nir Soffer for the fast response!

It worked! I was focused on detaching the domain first and didn't see
the maintenance option.
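
(In case someone wants to script this: it looks like the same change can be made
through the REST API once the domain is in maintenance, by updating the storage
connection. Untested sketch - the engine URL, credentials, connection ID and new
address below are all placeholders:)

  # list storage connections and find the one pointing at the old address
  curl -k -u admin@internal:PASSWORD \
      https://engine.example.com/ovirt-engine/api/storageconnections

  # update the address of that connection (the storage domain must be in maintenance)
  curl -k -u admin@internal:PASSWORD -X PUT \
      -H 'Content-Type: application/xml' \
      -d '<storage_connection><address>10.0.1.11</address></storage_connection>' \
      https://engine.example.com/ovirt-engine/api/storageconnections/CONNECTION_ID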

Regards,

Bertjan

On Wed, Aug 31, 2016 at 09:09:02AM -0600, Edward Clay wrote:
> I believe if you put the SD in maintenance mode you can change the mount
> options and I believe the IP field.  I don't believe you should have to
> remove it and re-import it.  I could be wrong.
> 
> 
> If you click on the storage tab and select your SD, in the bottom details
> pane you can put it into maintenance mode under Datacenter.
> 
> 
> All VMs will have to be down to do this.
> 
> 
> On 08/31/2016 08:39 AM, Goorkate, B.J. wrote:
> > Hi all,
> >
> > I recently moved my gluster back end to a dedicated gluster network.
> >
> > Now I would like to change the mount point of my master Storage Domain
> > to point to that network too. Is that possible?
> >
> > I tried to detach the Storage Domain from the datacenter in order to
> > re-add or import it with the new IP-address, but I got the message:
> >
> > "Error while executing action: Cannot remove the master Storage Domain from
> > the Data Center without another active Storage Domain to take its place.
> > -Either activate another Storage Domain in the Data Center, or remove the
> > Data Center."
> >
> > Has anybody got an idea on how to solve this?
> >
> > Thanks in advance!
> >
> > Regards,
> >
> > Bertjan
> >
> 
> -- 
> Best regards,
> Edward Clay
> Systems Administrator
> UK2 Group - US Operations
> Phone: 1-800-222-2165
> FAX: 435-755-3449
> E-mail: edward.c...@uk2group.com
>  
> Believe in Better Hosting
> http://www.westhost.com
> 


[ovirt-users] Changing the gluster mount point in ovirt 3.6

2016-08-31 Thread Goorkate, B.J.
Hi all,

I recently moved my gluster back end to a dedicated gluster network.

Now I would like to change the mount point of my master Storage Domain
to point to that network too. Is that possible?

I tried to detach the Storage Domain from the datacenter in order to
re-add or import it with the new IP-address, but I got the message:

"Error while executing action: Cannot remove the master Storage Domain from
the Data Center without another active Storage Domain to take its place.
-Either activate another Storage Domain in the Data Center, or remove the
Data Center."

Has anybody got an idea on how to solve this?

Thanks in advance!

Regards,

Bertjan
