Re: [ovirt-users] NTP

2017-08-10 Thread Moacir Ferreira
Hi Sandro,


I found that I can install ntpd by enabling the CentOS base repository, which comes 
disabled by default in oVirt. That said, the gdeploy script generated by the GUI for 
deploying the hosted-engine + GlusterFS still expects to disable chronyd and enable 
ntpd. So my question now is whether we need/should keep ntpd, or whether we should 
just keep chronyd.


Moacir



From: Sandro Bonazzola <sbona...@redhat.com>
Sent: Thursday, August 10, 2017 2:06 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NTP



2017-08-07 16:53 GMT+02:00 Moacir Ferreira <moacirferre...@hotmail.com>:

I found that NTP does not get installed on oVirt Node in the latest version, 
ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.


Also, the installed repositories do not have it. So, is this a bug, or is NTP not 
considered appropriate anymore?


vdsm now requires chronyd, but we have re-added ntpd in ovirt-node for 4.1.5 RC3 
(https://bugzilla.redhat.com/1476650).
I'm finishing testing the release before announcing it today.





Thanks.

Moacir





--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA<https://www.redhat.com/>




Re: [ovirt-users] NTP

2017-08-10 Thread Moacir Ferreira
In this case, please don't bother reintroducing ntpd, as it makes more sense not to 
have it if it should not be used. Also, the final ISO image gets smaller...

Moacir


From: Sandro Bonazzola <sbona...@redhat.com>
Sent: Thursday, August 10, 2017 2:39 PM
To: Moacir Ferreira; Sahina Bose; Sachidananda URS
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NTP



2017-08-10 15:21 GMT+02:00 Moacir Ferreira <moacirferre...@hotmail.com>:

Hi Sandro,


I found that I can install ntpd by enabling the CentOS base repository, which comes 
disabled by default in oVirt. That said, the gdeploy script generated by the GUI for 
deploying the hosted-engine + GlusterFS still expects to disable chronyd and enable 
ntpd. So my question now is whether we need/should keep ntpd, or whether we should 
just keep chronyd.


Looks like a gdeploy bug. Adding Sahina and Sacchi. chronyd should be used 
instead of ntpd.




Moacir



From: Sandro Bonazzola <sbona...@redhat.com>
Sent: Thursday, August 10, 2017 2:06 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NTP



2017-08-07 16:53 GMT+02:00 Moacir Ferreira <moacirferre...@hotmail.com>:

I found that NTP does not get installed on oVirt Node in the latest version, 
ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.


Also, the installed repositories do not have it. So, is this a bug, or is NTP not 
considered appropriate anymore?


vdsm now requires chronyd, but we have re-added ntpd in ovirt-node for 4.1.5 RC3 
(https://bugzilla.redhat.com/1476650).
I'm finishing testing the release before announcing it today.





Thanks.

Moacir





--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA<https://www.redhat.com/>





--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA<https://www.redhat.com/>




Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-09 Thread Moacir Ferreira
That is great Kasturi, thanks!


I will go over these steps just to make sure nothing is missing, but it looks like 
the right way to do it. The only step that looks strange is step 6 of the shutdown. I 
always thought that if I shut down an HA-protected machine, oVirt would then launch 
it back again. So in my thinking I would do step 6 before step 4. That said, am I 
missing something?

Moacir


From: Kasturi Narra <kna...@redhat.com>
Sent: Wednesday, August 9, 2017 5:51 AM
To: Moacir Ferreira
Cc: Erekle Magradze; users@ovirt.org
Subject: Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and 
hosted engine

Hi,
   You can follow the steps below to do that.

1) Stop all the virtual machines.

2) Move all the storage domains other than hosted_storage to maintenance which 
will unmount them from all the nodes.

3) Move HE to global maintenance: 'hosted-engine --set-maintenance --mode=global'

4) stop HE vm by running the command 'hosted-engine --vm-shutdown'

5) confirm that engine is down using the command 'hosted-engine --vm-status'

6) stop ha agent and broker services on all the nodes by running the command 
'systemctl stop ovirt-ha-broker' ; 'systemctl stop ovirt-ha-agent'

7) umount hosted-engine from all the hypervisors 'hosted-engine 
--disconnect-storage'

8) stop all the volumes.

9) power off all the hypervisors.
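For reference, a rough shell sketch of the shutdown sequence above, assuming all 
regular VMs are already stopped and the non-hosted_storage domains are in maintenance; 
the Gluster volume names are placeholders:

# On the host currently running the hosted engine
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status        # wait until the engine VM is reported as down

# On every host
systemctl stop ovirt-ha-broker
systemctl stop ovirt-ha-agent
hosted-engine --disconnect-storage

# From any one Gluster node, stop each volume (answer 'y' at the prompt)
gluster volume stop engine
gluster volume stop data

# Finally, power off each hypervisor
poweroff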


To bring it back up again, the steps below will help.


1) Power on all the hypervisors.

2) start all the volumes

3) start ha agent and broker services on all the nodes by running the command 
'systemctl start ovirt-ha-broker' ; 'systemctl start ovirt-ha-agent'

4) Move hosted-engine out of global maintenance by running the command 
'hosted-engine --set-maintenance --mode=none'

5) Give some time for the HE to come up. Check 'hosted-engine --vm-status' to see if 
the HE VM is up.

6) Activate all storage domains from UI.

7) start all virtual machines.
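And a matching sketch of the start-up side (again, the volume names are just 
examples):

# On every host, after power-on
systemctl start glusterd         # in case glusterd is not enabled at boot

# From any one Gluster node, start each volume
gluster volume start engine
gluster volume start data

# On every host, start the HA services
systemctl start ovirt-ha-broker
systemctl start ovirt-ha-agent

# On one host, leave global maintenance and wait for the engine
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status        # repeat until the engine VM is reported as up

# Then activate the remaining storage domains and start the VMs from the UI.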

Hope this helps !!!

Thanks

kasturi.

On Tue, Aug 8, 2017 at 2:27 AM, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Sorry Erekle, I am just a beginner...


From the hosted engine I can put the two other servers, those not hosting the 
hosted-engine, into maintenance, and that is what I did. When I tried to put the last 
one into maintenance it did not allow me, because of the hosted-engine, so I forced 
it by shutting it down from the SSH CLI.


So, what should I do? My guess is that I should ssh to the hosted engine and shut it 
down. As it would not have another node on which to re-launch itself, it would stay 
down. After this I should shut down the oVirt node. Is that right?


Anyway, I made a mistake and I forced it. When I tried to bring the cluster back, I 
noticed that glusterd was not enabled to start when the node powers on. As I am a 
beginner, I "think" there must be a reason not to start glusterd when the node comes 
up. So I started glusterd on the arbiter (3rd server), then on the second node, and 
finally on the host that was hosting the hosted-engine. It worked, but when I tried 
to change the maintenance mode on the two nodes (2nd and 3rd) back to normal, the 
hosted-engine went down and I had to start it manually.


All this said, I would like to know how to bring the cluster down and how to bring it 
back up the "right way" so I don't get problems. And yes, no VM is running except the 
hosted-engine.


Thanks for sharing your knowledge.

Moacir



From: Erekle Magradze <erekle.magra...@recogizer.de>
Sent: Monday, August 7, 2017 9:12 PM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and 
hosted engine


Hi Moacir,

First, switch off all VMs.

Second, you need to put the hosts into maintenance mode; don't start with SRM (of 
course, if you are able, use the ovirt-engine). It will ask you to shut down 
glusterfs on the machine.

Third, once all machines are in maintenance mode, you can start shutting them down.


If you have a hosted engine setup, follow this [1].


Cheers

Erekle


[1] 
https://github.com/rharmonson/richtech/wiki/OSVDC-Series:-oVirt-3.6-Cluster-Shutdown-and-Startup



On 08/07/2017 08:58 PM, Moacir Ferreira wrote:

I have installed an oVirt cluster in a KVM virtualized test environment. Now, how do 
I properly shut down the oVirt cluster, with Gluster and the hosted engine?

I.e.: I want to install a cluster

Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-08 Thread Moacir Ferreira
Sorry Erekle, I am just a beginner...


From the hosted engine I can put the two other servers, those not hosting the 
hosted-engine, into maintenance, and that is what I did. When I tried to put the last 
one into maintenance it did not allow me, because of the hosted-engine, so I forced 
it by shutting it down from the SSH CLI.


So, what should I do? My guess is that I should ssh to the hosted engine and shut it 
down. As it would not have another node on which to re-launch itself, it would stay 
down. After this I should shut down the oVirt node. Is that right?


Anyway, I made a mistake and I forced it. When I tried to bring the cluster back, I 
noticed that glusterd was not enabled to start when the node powers on. As I am a 
beginner, I "think" there must be a reason not to start glusterd when the node comes 
up. So I started glusterd on the arbiter (3rd server), then on the second node, and 
finally on the host that was hosting the hosted-engine. It worked, but when I tried 
to change the maintenance mode on the two nodes (2nd and 3rd) back to normal, the 
hosted-engine went down and I had to start it manually.


All this said, I would like to know how to bring the cluster down and how to bring it 
back up the "right way" so I don't get problems. And yes, no VM is running except the 
hosted-engine.


Thanks for sharing your knowledge.

Moacir



From: Erekle Magradze <erekle.magra...@recogizer.de>
Sent: Monday, August 7, 2017 9:12 PM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] How to shutdown an oVirt cluster with Gluster and 
hosted engine


Hi Moacir,

First, switch off all VMs.

Second, you need to put the hosts into maintenance mode; don't start with SRM (of 
course, if you are able, use the ovirt-engine). It will ask you to shut down 
glusterfs on the machine.

Third, once all machines are in maintenance mode, you can start shutting them down.


If you have a hosted engine setup, follow this [1].


Cheers

Erekle


[1] 
https://github.com/rharmonson/richtech/wiki/OSVDC-Series:-oVirt-3.6-Cluster-Shutdown-and-Startup



On 08/07/2017 08:58 PM, Moacir Ferreira wrote:

I have installed an oVirt cluster in a KVM virtualized test environment. Now, how do 
I properly shut down the oVirt cluster, with Gluster and the hosted engine?

I.e., I want to install a cluster of 3 servers and then send it to a remote office. 
How do I do it properly? I noticed that glusterd is not enabled to start 
automatically. And how do I deal with the hosted engine?


Thanks,

Moacir






--
Recogizer Group GmbH

Dr.rer.nat. Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555

E-Mail: erekle.magra...@recogizer.de
Web: www.recogizer.com



Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Thanks Johan, you brought "light" into my darkness! I went looking for the GlusterFS 
tiering how-to and it looks quite simple to attach an SSD as a hot tier. For those 
willing to read about it, go here: 
http://blog.gluster.org/2016/03/automated-tiering-in-gluster/


Now, I still have a question: VMs are made of very large .qcow2 files. My 
understanding is that files in Gluster are kept all together in a single brick. If 
so, I will not benefit from tiering, as a single SSD will not be big enough to fit 
all my large VM .qcow2 files. This would not be true if Gluster can store "blocks" of 
data that compose a large file spread over several bricks. But if I am not wrong, 
this is one of the key differences between GlusterFS and Ceph. Can you comment?


Moacir



From: Johan Bernhardsson <jo...@kafit.se>
Sent: Tuesday, August 8, 2017 7:03 AM
To: Moacir Ferreira; Devin Acosta; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


You attach the SSD as a hot tier with a gluster command. I don't think that gdeploy 
or the oVirt GUI can do it.

The Gluster docs and Red Hat docs explain tiering quite well.
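For what it's worth, on the Gluster releases that shipped tiering (3.7 onward), 
attaching a hot tier looks roughly like the sketch below; the volume name and brick 
paths are invented, and on a replicated volume the hot tier is expected to be 
replicated as well:

# Attach three SSD bricks as a replicated hot tier to an existing volume
gluster volume tier datavol attach replica 3 \
    server1:/gluster/ssd/brick server2:/gluster/ssd/brick server3:/gluster/ssd/brick

# Check tier activity, and detach later if needed
gluster volume tier datavol status
gluster volume tier datavol detach start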

/Johan

On August 8, 2017 07:06:42 Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Hi Devin,


Please consider that for the OS I have a RAID 1. Now, let's say I use RAID 5 to 
assemble a single disk on each server. In this case the SSD will not make any 
difference, right? I guess that for it to be usable, the SSD should not be part of 
the RAID 5. In that case I could create a logical volume made of the RAIDed brick and 
then extend it using the SSD, i.e., using gdeploy:


[disktype]
jbod

[pv1]
action=create
devices=sdb,sdc
wipefs=yes
ignore_vg_errors=no

[vg1]
action=create
vgname=gluster_vg_jbod
pvname=sdb
ignore_vg_errors=no

[vg2]
action=extend
vgname=gluster_vg_jbod
pvname=sdc
ignore_vg_errors=no


But will Gluster be able to auto-detect and use this SSD brick for tiering, or do I 
have to do some other configuration? Also, as the VM files (.qcow2) are quite big, 
will I benefit from tiering? Or is this wrong and should my approach be different?


Thanks,

Moacir



From: Devin Acosta <de...@pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for several different 
companies, and have dealt with the Red Hat Support Team in depth about the optimal 
configuration for setting up GlusterFS most efficiently, and I wanted to share with 
you what I learned.

In general the Red Hat Virtualization team frowns upon using each DISK of the system 
as just a JBOD. Sure, there is some protection by having the data replicated; 
however, the recommendation is to use RAID 6 (preferred) or RAID 5, or RAID 1 at the 
very least.

Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 
is most typical as it gives you 2 disk failure protection, but RAID 5 could be 
used too. Once you have the RAIDed bricks, you'd then apply the desired 
replication on top of that. The most popular way of doing this would be 
distributed replicated with 2x replication. In general you'll get better 
performance with larger bricks. 12 drives is often a sweet spot. Another option 
would be to create a separate tier using all SSD’s.”

In order to do SSD tiering, from my understanding you would need 1 x NVMe drive in 
each server, or 4 x SSDs for the hot tier (it needs to be distributed-replicated for 
the hot tier if not using NVMe). So with you only having 1 SSD drive in each server, 
I'd suggest maybe looking into the NVMe option.

Since you're using only 3 servers, what I'd probably suggest is to do (2 Replicas + 
Arbiter Node); this setup actually doesn't require the 3rd server to have big drives 
at all, as it only stores metadata about the files and not actually a full copy.

Please see the attached document that was given to me by Red Hat to get more 
information on this. Hope this information helps you.
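As a rough illustration of the 2 replicas + arbiter layout Devin describes, creating 
such a volume from the command line would look something like this (host names and 
brick paths are invented; gdeploy or the oVirt GUI can generate the equivalent):

# Replica 3 with 1 arbiter: full data on server1 and server2, metadata only on server3
gluster volume create vmstore replica 3 arbiter 1 \
    server1:/gluster_bricks/vmstore/brick \
    server2:/gluster_bricks/vmstore/brick \
    server3:/gluster_bricks/vmstore/arbiter
gluster volume start vmstore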


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect


On August 6, 2017 at 7:29:29 PM, Moacir Ferreira (moacirferre...@hotmail.com) wrote:

I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 
12 cores, 256GB RAM, 7 x 10K HDD, and 1 SSD. The idea is to use GlusterFS to provide 
HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC, so my 
intention is to create a loop like a server triangle, using the 40Gb NICs for 
virtualization file (VM .qcow2) access and to move VMs around the pod (east/west 
traffic), while using the 10Gb interfaces for giving services to the outside world 
(north/south traffic).


This said, my first question is: How should I depl

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
Exactly, Fabrice! In this case the router will fragment the "bigger" MTU to fit the 
"smaller" MTU, but only when DF is not set. However, fragmentation on routers is done 
by the control plane, meaning you will overload the router CPU if you do too much 
fragmentation. On a good NIC the MTU announced to the IP stack is very big (like 
64KB) because the offload engine will segment this very large frame and send it. But 
on this kind of NIC the segmentation is done by dedicated ASICs that do not require 
any CPU intervention. Just give it a try... Assemble a lab using Linux and you will 
see what I am trying to explain.
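A quick way to reproduce this in such a lab is to ping with the DF bit set; the sizes 
below assume a 9000-byte interface MTU (8972 = 9000 minus 20 bytes of IP header and 8 
bytes of ICMP header), and the addresses are just examples:

# Succeeds only if every hop on the path supports jumbo frames
ping -M do -s 8972 -c 3 10.0.0.2

# With DF set and a 1500-MTU hop in the path, this fails with a
# "Frag needed" / "Message too long" error instead of being fragmented
ping -M do -s 8972 -c 3 192.168.1.10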

Moacir


From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Sent: Tuesday, August 8, 2017 2:50 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37


On 8 August 2017 at 14:53, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

But if you receive a 9000-MTU frame on an "input" interface that results in sending 
it out on an interface with a 1500 MTU, then if you set the DF bit the frame will 
just be dropped by the router.

The frame will be dropped and the router will send an ICMP "packet too big" message 
to the sender; its network stack will receive that, learn that the PMTU is lower, and 
try again with smaller packets. See 
https://en.wikipedia.org/wiki/Path_MTU_Discovery.



Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
But if you receive a 9000-MTU frame on an "input" interface that results in sending 
it out on an interface with a 1500 MTU, then if you set the DF bit the frame will 
just be dropped by the router. If you want your frame to "cross" a path with a 
different MTU, then you cannot set DF to 1. This is quite a simple and easy thing to 
demonstrate: just create a simple virtual lab with 3 Linux boxes doing routing and 
test it. So, if your goal is to communicate over paths that may have an MTU lower 
than 9000, you had better make sure your server sends out a frame that the path can 
support.

Moacir


From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Sent: Tuesday, August 8, 2017 1:37 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37

The border router will do like any other router in the world: if the DF bit is set 
(the common case) or if it's IPv6, it will not fragment but will send an ICMP 
message.

On 8 August 2017 at 13:34, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

True! But at some point in the network it may be necessary to go back to an MTU of 
1500, for example if your data needs to cross the Internet. The border router between 
your LAN and the Internet will have to fragment a large frame back to a normal one to 
send it over the Internet. This router will just "die" if you have a heavy load.

Moacir


From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Sent: Tuesday, August 8, 2017 12:23 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37


On 8 August 2017 at 11:49, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

This is by far more complex. A good NIC will have an offload engine (LSO, Large Send 
Offload) and, if so, the NIC driver will report an MTU of 64K to the IP stack. The IP 
stack will then send data to the NIC as if the MTU were 64K, and the NIC will segment 
it down to the size of the "declared" MTU on the interface, so PMTUD will not be 
efficient in such a scenario. If all this takes place in the server, then you have no 
problem. But if a standard router is configured to support 9K jumbo frames on one 
interface (i.e., the LAN connection) and 1500 on another (i.e., the WAN connection), 
then the router will be responsible for the fragmentation.

That happens only if the don't-fragment bit is not set; otherwise routers are not 
allowed to do that and instead send back a "packet too big" ICMP message. It's called 
path MTU discovery. To my knowledge, the bit is usually set, and it is even mandatory 
on IPv6.



Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
Sorry... I guess our discussion here is in line with the "good practices" discussion. 
For a long time I have seen a lot of mentions of having a front-end and a back-end 
network when dealing with distributed file systems like Gluster and Ceph. What I 
would like to hear from those who have already implemented oVirt in the field is the 
real-life approach for good performance, as large file/memory transfers will strongly 
benefit from having a big MTU. However, a big MTU must be handled correctly, 
otherwise you may end up having the problem we are discussing.

Moacir

________
From: Moacir Ferreira <moacirferre...@hotmail.com>
Sent: Tuesday, August 8, 2017 3:16 PM
To: Fabrice Bacchella
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37


Exactly, Fabrice! In this case the router will fragment the "bigger" MTU to fit the 
"smaller" MTU, but only when DF is not set. However, fragmentation on routers is done 
by the control plane, meaning you will overload the router CPU if you do too much 
fragmentation. On a good NIC the MTU announced to the IP stack is very big (like 
64KB) because the offload engine will segment this very large frame and send it. But 
on this kind of NIC the segmentation is done by dedicated ASICs that do not require 
any CPU intervention. Just give it a try... Assemble a lab using Linux and you will 
see what I am trying to explain.

Moacir


From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Sent: Tuesday, August 8, 2017 2:50 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37


On 8 August 2017 at 14:53, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

But if you receive a 9000-MTU frame on an "input" interface that results in sending 
it out on an interface with a 1500 MTU, then if you set the DF bit the frame will 
just be dropped by the router.

The frame will be dropped and the router will send an ICMP "packet too big" message 
to the sender; its network stack will receive that, learn that the PMTU is lower, and 
try again with smaller packets. See 
https://en.wikipedia.org/wiki/Path_MTU_Discovery.



Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Thanks once again Johan!


What would be your approach: straight JBOD, or a JBOD made of RAIDed bricks?


Moacir


From: Johan Bernhardsson <jo...@kafit.se>
Sent: Tuesday, August 8, 2017 11:24 AM
To: Moacir Ferreira; Devin Acosta; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


On oVirt, Gluster uses sharding, so all large files are broken up into small pieces 
on the Gluster bricks.
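If you want to confirm that on your own volumes, the shard settings can be read and 
tuned as regular volume options; the volume name below is an example:

# Is sharding enabled, and what shard size is in use?
gluster volume get datavol features.shard
gluster volume get datavol features.shard-block-size

# Enabling it on a VM-store volume would look like this
# (512MB is a shard size commonly used for virtualization workloads)
gluster volume set datavol features.shard on
gluster volume set datavol features.shard-block-size 512MB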

/Johan

On August 8, 2017 12:19:39 Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Thanks Johan, you brought "light" into my darkness! I went looking for the GlusterFS 
tiering how-to and it looks quite simple to attach an SSD as a hot tier. For those 
willing to read about it, go here: 
http://blog.gluster.org/2016/03/automated-tiering-in-gluster/


Now, I still have a question: VMs are made of very large .qcow2 files. My 
understanding is that files in Gluster are kept all together in a single brick. If 
so, I will not benefit from tiering, as a single SSD will not be big enough to fit 
all my large VM .qcow2 files. This would not be true if Gluster can store "blocks" of 
data that compose a large file spread over several bricks. But if I am not wrong, 
this is one of the key differences between GlusterFS and Ceph. Can you comment?


Moacir



From: Johan Bernhardsson <jo...@kafit.se>
Sent: Tuesday, August 8, 2017 7:03 AM
To: Moacir Ferreira; Devin Acosta; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


You attach the SSD as a hot tier with a gluster command. I don't think that gdeploy 
or the oVirt GUI can do it.

The Gluster docs and Red Hat docs explain tiering quite well.

/Johan

On August 8, 2017 07:06:42 Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Hi Devin,


Please consider that for the OS I have a RAID 1. Now, let's say I use RAID 5 to 
assemble a single disk on each server. In this case the SSD will not make any 
difference, right? I guess that for it to be usable, the SSD should not be part of 
the RAID 5. In that case I could create a logical volume made of the RAIDed brick and 
then extend it using the SSD, i.e., using gdeploy:


[disktype]
jbod

[pv1]
action=create
devices=sdb,sdc
wipefs=yes
ignore_vg_errors=no

[vg1]
action=create
vgname=gluster_vg_jbod
pvname=sdb
ignore_vg_errors=no

[vg2]
action=extend
vgname=gluster_vg_jbod
pvname=sdc
ignore_vg_errors=no


But will Gluster be able to auto-detect and use this SSD brick for tiering, or do I 
have to do some other configuration? Also, as the VM files (.qcow2) are quite big, 
will I benefit from tiering? Or is this wrong and should my approach be different?


Thanks,

Moacir



From: Devin Acosta <de...@pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for several different 
companies, and have dealt with the Red Hat Support Team in depth about the optimal 
configuration for setting up GlusterFS most efficiently, and I wanted to share with 
you what I learned.

In general the Red Hat Virtualization team frowns upon using each DISK of the system 
as just a JBOD. Sure, there is some protection by having the data replicated; 
however, the recommendation is to use RAID 6 (preferred) or RAID 5, or RAID 1 at the 
very least.

Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 
is most typical as it gives you 2 disk failure protection, but RAID 5 could be 
used too. Once you have the RAIDed bricks, you'd then apply the desired 
replication on top of that. The most popular way of doing this would be 
distributed replicated with 2x replication. In general you'll get better 
performance with larger bricks. 12 drives is often a sweet spot. Another option 
would be to create a separate tier using all SSD’s.”

In order to do SSD tiering, from my understanding you would need 1 x NVMe drive in 
each server, or 4 x SSDs for the hot tier (it needs to be distributed-replicated for 
the hot tier if not using NVMe). So with you only having 1 SSD drive in each server, 
I'd suggest maybe looking into the NVMe option.

Since you're using only 3 servers, what I'd probably suggest is to do (2 Replicas + 
Arbiter Node); this setup actually doesn't require the 3rd server to have big drives 
at all, as it only stores metadata about the files and not actually a full copy.

Please see the attached document that was given to me by Red Hat to get more 
information on this. Hope this information helps you.


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect


On August 6, 2017 at 7:29:29 PM, Moacir Ferreira (moacirferre...@hotmail.com) wrote:

I am willing to assemble a oVirt "pod", made o

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Fernando,


Let's see what people say... But this is what I understood Red Hat says is the best 
performance model. This is the main reason to open this discussion, because as far as 
I can see, some of you in the community do not agree.


But when I think about a "distributed file system" that can make any number of copies 
you want, it does not make sense to use a RAIDed brick; what makes sense is to use 
JBOD.


Moacir


From: fernando.fredi...@upx.com.br <fernando.fredi...@upx.com.br> on behalf of 
FERNANDO FREDIANI <fernando.fredi...@upx.com>
Sent: Tuesday, August 8, 2017 3:08 AM
To: Moacir Ferreira
Cc: Colin Coe; users@ovirt.org
Subject: Re: [ovirt-users] Good practices

Moacir, I understand that if you do this type of configuration you will be severely 
impacted on storage performance, especially for writes. Even if you have a hardware 
RAID controller with write-back cache, you will have a significant performance 
penalty and may not fully use all the resources you mentioned you have.

Fernando

2017-08-07 10:03 GMT-03:00 Moacir Ferreira <moacirferre...@hotmail.com>:

Hi Colin,


Take a look at Devin's response. Also, read the doc he shared; it gives some hints on 
how to deploy Gluster.


It is more like this: if you want high performance you should have the bricks created 
as RAID (5 or 6) by the server's disk controller and then assemble a JBOD GlusterFS 
on top of them. The attached document is Gluster-specific and not for oVirt. But at 
this point I think that having an SSD will not be a plus, as when using the RAID 
controller Gluster will not be aware of the SSD. Regarding the OS, my idea is to have 
a RAID 1, made of 2 low-cost HDDs, to install it on.


So far, based on the information received, I should create a single RAID 5 or 6 on 
each server and then use this disk as a brick to create my Gluster cluster, made of 2 
replicas + 1 arbiter. What is new for me is the detail that the arbiter does not need 
a lot of space, as it only keeps metadata.


Thanks for your response!

Moacir


From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 12:41 PM

To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

Hi

I just thought that you'd do hardware RAID if you had the controller, or JBOD if you 
didn't. In hindsight, a server with 40Gbps NICs is pretty likely to have a hardware 
RAID controller. I've never done JBOD with hardware RAID. I think having a single 
gluster brick on hardware JBOD would be riskier than multiple bricks, each on a 
single disk, but that's not based on anything other than my prejudices.

I thought gluster tiering was for the most frequently accessed files, in which 
case all the VMs disks would end up in the hot tier.  However, I have been 
wrong before...

I just wanted to know where the OS was going as I didn't see it mentioned in the OP. 
Normally I'd have the OS on a RAID 1, but in your case that's a lot of wasted disk.

Honestly, I think Yaniv's answer was far better than my own and made the 
important point about having an arbiter.

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Hi Colin,


I am in Portugal, so sorry for this late response. It is quite confusing for me; 
please consider:

1 - What if the RAID is done by the server's disk controller, not by software?


2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the 
oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only help if done by an 
intelligent system that uses the SSD for chunks of data, not for the entire .qcow2 
file. But I guess this is a problem everybody else has. So, do you know how tiering 
works in Gluster?


4 - I am putting the OS on the first disk. However, would you do differently?


Moacir


From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

1) RAID5 may be a performance hit-

2) I'd be inclined to do this as JBOD by creating a distributed disperse volume 
on each server.  Something like

echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
  $(for SERVER in a b c; do for BRICK in $(seq 1 5); do \
      echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)

3) I think the above.

4) Gluster does support tiering, but IIRC you'd need the same number of SSD as 
spindle drives.  There may be another way to use the SSD as a fast cache.

Where are you putting the OS?

Hope I understood the question...

Thanks

On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira 
<moacirferre...@hotma

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
True! But at some point in the network it may be necessary to go back to an MTU of 
1500, for example if your data needs to cross the Internet. The border router between 
your LAN and the Internet will have to fragment a large frame back to a normal one to 
send it over the Internet. This router will just "die" if you have a heavy load.

Moacir


From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Sent: Tuesday, August 8, 2017 12:23 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37


On 8 August 2017 at 11:49, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

This is by far more complex. A good NIC will have an offload engine (LSO, Large Send 
Offload) and, if so, the NIC driver will report an MTU of 64K to the IP stack. The IP 
stack will then send data to the NIC as if the MTU were 64K, and the NIC will segment 
it down to the size of the "declared" MTU on the interface, so PMTUD will not be 
efficient in such a scenario. If all this takes place in the server, then you have no 
problem. But if a standard router is configured to support 9K jumbo frames on one 
interface (i.e., the LAN connection) and 1500 on another (i.e., the WAN connection), 
then the router will be responsible for the fragmentation.

That happens only if the don't-fragment bit is not set; otherwise routers are not 
allowed to do that and instead send back a "packet too big" ICMP message. It's called 
path MTU discovery. To my knowledge, the bit is usually set, and it is even mandatory 
on IPv6.



Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
OK, the 40Gb NICs that I got were for free. But anyway, if you were working with 6 
HDDs + 1 SSD per server, then you get 21 disks in your cluster. As data in a JBOD 
will be rebuilt all over the network, it can be really intensive, especially 
depending on the number of replicas you choose for your needs. Also, when 
live-migrating a VM you must transfer the memory contents of the VM to another node 
(just think about moving a VM with 32GB RAM). All together, it can be quite a large 
chunk of data moving over the network all the time. While a 40Gb NIC is not a "must", 
I think it is more affordable, as it costs much less than a good disk controller.


But my confusion is that, as said by other fellows, the best "performance model" is 
when you use a hardware RAIDed brick (i.e., RAID 5 or 6) to assemble your GlusterFS. 
In this case, as I would have to buy a good controller but would have less network 
traffic, to lower the cost I would then use a separate network made of 10Gb NICs plus 
the controller.


Moacir



>
> > On 8 August 2017 at 04:08, FERNANDO FREDIANI wrote:
> >
> > Even if you have a Hardware RAID Controller with Writeback cache you
> > will have a significant performance penalty and may not fully use all the
> > resources you mentioned you have.
>
> Nope again. From my experience with HP Smart Array and write-back cache,
> writes that go into the cache are even faster than reads that must go to
> the disks. Of course, if the writes are too fast and too big they will
> overflow the cache, but today's controllers have multi-gigabyte caches;
> you must write a lot to fill them. And if you can afford a 40Gb card, you
> can afford a decent controller.
>

The last sentence raises an excellent point: balance your resources. Don't
spend a fortune on one component while another will end up being your
bottleneck.
Storage is usually the slowest link in the chain. I personally believe that
spending the money on NVMe drives makes more sense than 40Gb (except [1],
which is suspiciously cheap!)

Y.
[1] http://a.co/4hsCTqG



Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-08 Thread Moacir Ferreira
This is by far more complex. A good NIC will have an offload engine (LSO, Large Send 
Offload) and, if so, the NIC driver will report an MTU of 64K to the IP stack. The IP 
stack will then send data to the NIC as if the MTU were 64K, and the NIC will segment 
it down to the size of the "declared" MTU on the interface, so PMTUD will not be 
efficient in such a scenario. If all this takes place in the server, then you have no 
problem. But if a standard router is configured to support 9K jumbo frames on one 
interface (i.e., the LAN connection) and 1500 on another (i.e., the WAN connection), 
then the router will be responsible for the fragmentation. However, most of the 
routers out there are not able to deal with this under high traffic demands.
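On Linux you can check whether this offload path is active with ethtool; the 
interface name is an example:

# Which segmentation offloads the NIC/driver currently provides
ethtool -k ens1f0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'

# Temporarily disable TSO/GSO to compare behaviour without the offload engine
ethtool -K ens1f0 tso off gso off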


Splitting the very intensive east/west traffic like disk copies, VM moves, etc. 
from the "service" traffic will not only prevent contention but also fix this 
problem with MTU.


Moacir


From: Yaniv Kaul <yk...@redhat.com>
Sent: Tuesday, August 8, 2017 7:35 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Users Digest, Vol 71, Issue 37



On Tue, Aug 8, 2017 at 12:42 AM, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Fabrice,


If you choose to have jumbo frames everywhere, then when the traffic goes outside of 
your jumbo-frame-enabled network it will need to be fragmented back down to the 
destination MTU. Most datacenters will provide services to the outside world where 
the MTU is 1500 bytes. In this case you will slow down your performance because your 
router will be doing the fragmentation. So I would always use jumbo frames in the 
datacenter for east/west traffic and standard (1500 bytes) for north/south traffic.

I doubt this would happen with modern TCP/IP stacks for TCP connections. It'll most 
likely adjust to the path, using PMTUD. Of course, this does not always work (it 
depends on the HW en route).
UDP packets might fail miserably too (dropped), depending on the HW en route, but UDP 
traffic (and specifically large packets) is not that common these days.

Nevertheless, I don't see a huge advantage in enabling this for north-south traffic, 
TBH, and the mysterious, random traffic-drop issues it may cause are not worth it.
Y.


Moacir

--

Message: 1
Date: Mon, 7 Aug 2017 21:50:36 +0200
From: Fabrice Bacchella <fabrice.bacche...@orange.fr>
To: FERNANDO FREDIANI <fernando.fredi...@upx.com>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <4365e3f7-4c77-4ff5-8401-1cda2f002...@orange.fr>
Content-Type: text/plain; charset="windows-1252"

>> Moacir: Yes! This is another reason to have separate networks for 
>> north/south and east/west. In that way I can use the standard MTU on the 
>> 10Gb NICs and jumbo frames on the file/move 40Gb NICs.

Why not jumbo frames everywhere?

--

Message: 2
Date: Mon, 7 Aug 2017 16:52:40 -0300
From: FERNANDO FREDIANI <fernando.fredi...@upx.com>
To: Fabrice Bacchella <fabrice.bacche...@orange.fr>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <40d044ae-a41d-082e-131a-bf5fb5503...@upx.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

What you mentioned is a specific case and not a generic situation. The
main point there is that RAID 5 or 6 impacts write performance compared
to when you write to only 2 given disks at a time. That was the comparison
being made.

Fernando


On 07/08/2017 16:49, Fabrice Bacchella wrote:
>
>> On 7 August 2017 at 17:41, FERNANDO FREDIANI
>> <fernando.fredi...@upx.com> wrote:
>>
>
>> Yet another downside of having a RAID (especially RAID 5 or 6) is that
>> it considerably reduces the write speed, as each group of disks will
>> end up having the write speed of a single disk, since all other disks of
>> that group have to wait for each other to write as well.
>>
>
> That's not true if you have a medium- to high-range hardware RAID. For
> example, HP Smart Array comes with a flash cache of about 1 or 2 GB
> that hides that from the OS.

--

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi Devin,


Please consider that for the OS I have a RAID 1. Now, let's say I use RAID 5 to 
assemble a single disk on each server. In this case the SSD will not make any 
difference, right? I guess that for it to be usable, the SSD should not be part of 
the RAID 5. In that case I could create a logical volume made of the RAIDed brick and 
then extend it using the SSD, i.e., using gdeploy:


[disktype]
jbod

[pv1]
action=create
devices=sdb,sdc
wipefs=yes
ignore_vg_errors=no

[vg1]
action=create
vgname=gluster_vg_jbod
pvname=sdb
ignore_vg_errors=no

[vg2]
action=extend
vgname=gluster_vg_jbod
pvname=sdc
ignore_vg_errors=no


But will Gluster be able to auto-detect and use this SSD brick for tiering, or do I 
have to do some other configuration? Also, as the VM files (.qcow2) are quite big, 
will I benefit from tiering? Or is this wrong and should my approach be different?


Thanks,

Moacir



From: Devin Acosta <de...@pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for several different 
companies, and have dealt with the Red Hat Support Team in depth about the optimal 
configuration for setting up GlusterFS most efficiently, and I wanted to share with 
you what I learned.

In general the Red Hat Virtualization team frowns upon using each DISK of the system 
as just a JBOD. Sure, there is some protection by having the data replicated; 
however, the recommendation is to use RAID 6 (preferred) or RAID 5, or RAID 1 at the 
very least.

Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 
is most typical as it gives you 2 disk failure protection, but RAID 5 could be 
used too. Once you have the RAIDed bricks, you'd then apply the desired 
replication on top of that. The most popular way of doing this would be 
distributed replicated with 2x replication. In general you'll get better 
performance with larger bricks. 12 drives is often a sweet spot. Another option 
would be to create a separate tier using all SSD’s.”

In order to do SSD tiering, from my understanding you would need 1 x NVMe drive in 
each server, or 4 x SSDs for the hot tier (it needs to be distributed-replicated for 
the hot tier if not using NVMe). So with you only having 1 SSD drive in each server, 
I'd suggest maybe looking into the NVMe option.

Since you're using only 3 servers, what I'd probably suggest is to do (2 Replicas + 
Arbiter Node); this setup actually doesn't require the 3rd server to have big drives 
at all, as it only stores metadata about the files and not actually a full copy.

Please see the attached document that was given to me by Red Hat to get more 
information on this. Hope this information helps you.


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect


On August 6, 2017 at 7:29:29 PM, Moacir Ferreira (moacirferre...@hotmail.com) wrote:

I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 
12 cores, 256GB RAM, 7 x 10K HDD, and 1 SSD. The idea is to use GlusterFS to provide 
HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC, so my 
intention is to create a loop like a server triangle, using the 40Gb NICs for 
virtualization file (VM .qcow2) access and to move VMs around the pod (east/west 
traffic), while using the 10Gb interfaces for giving services to the outside world 
(north/south traffic).


This said, my first question is: how should I deploy GlusterFS in such an oVirt 
scenario? My questions are:


1 - Should I create 3 RAID arrays (e.g., RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD disk? And if yes, 
will Gluster do it by default or do I have to configure it to do so?


Bottom line: what is the good practice for using GlusterFS in small pods for 
enterprises?


Your opinion/feedback will be really appreciated!

Moacir



Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi Colin,


Take a look at Devin's response. Also, read the doc he shared; it gives some hints on 
how to deploy Gluster.


It is more like this: if you want high performance you should have the bricks created 
as RAID (5 or 6) by the server's disk controller and then assemble a JBOD GlusterFS 
on top of them. The attached document is Gluster-specific and not for oVirt. But at 
this point I think that having an SSD will not be a plus, as when using the RAID 
controller Gluster will not be aware of the SSD. Regarding the OS, my idea is to have 
a RAID 1, made of 2 low-cost HDDs, to install it on.


So far, based on the information received, I should create a single RAID 5 or 6 on 
each server and then use this disk as a brick to create my Gluster cluster, made of 2 
replicas + 1 arbiter. What is new for me is the detail that the arbiter does not need 
a lot of space, as it only keeps metadata.


Thanks for your response!

Moacir


From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 12:41 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

Hi

I just thought that you'd do hardware RAID if you had the controller, or JBOD if you 
didn't. In hindsight, a server with 40Gbps NICs is pretty likely to have a hardware 
RAID controller. I've never done JBOD with hardware RAID. I think having a single 
gluster brick on hardware JBOD would be riskier than multiple bricks, each on a 
single disk, but that's not based on anything other than my prejudices.

I thought gluster tiering was for the most frequently accessed files, in which 
case all the VMs disks would end up in the hot tier.  However, I have been 
wrong before...

I just wanted to know where the OS was going as I didn't see it mentioned in the OP. 
Normally I'd have the OS on a RAID 1, but in your case that's a lot of wasted disk.

Honestly, I think Yaniv's answer was far better than my own and made the 
important point about having an arbiter.

Thanks

On Mon, Aug 7, 2017 at 5:56 PM, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

Hi Colin,


I am in Portugal, so sorry for this late response. It is quite confusing for me; 
please consider:

1 - What if the RAID is done by the server's disk controller, not by software?


2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the 
oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only help if done by an 
intelligent system that uses the SSD for chunks of data, not for the entire .qcow2 
file. But I guess this is a problem everybody else has. So, do you know how tiering 
works in Gluster?


4 - I am putting the OS on the first disk. However, would you do differently?


Moacir


From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

1) RAID5 may be a performance hit-

2) I'd be inclined to do this as JBOD by creating a distributed disperse volume 
on each server.  Something like

echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
  $(for SERVER in a b c; do for BRICK in $(seq 1 5); do \
      echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)

3) I think the above.

4) Gluster does support tiering, but IIRC you'd need the same number of SSD as 
spindle drives.  There may be another way to use the SSD as a fast cache.

Where are you putting the OS?

Hope I understood the question...

Thanks

On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <moacirferre...@hotmail.com> wrote:

I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 
12 cores, 256GB RAM, 7 x 10K HDD, and 1 SSD. The idea is to use GlusterFS to provide 
HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC, so my 
intention is to create a loop like a server triangle, using the 40Gb NICs for 
virtualization file (VM .qcow2) access and to move VMs around the pod (east/west 
traffic), while using the 10Gb interfaces for giving services to the outside world 
(north/south traffic).


This said, my first question is: how should I deploy GlusterFS in such an oVirt 
scenario? My questions are:


1 - Should I create 3 RAID arrays (e.g., RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD disk? And if yes, 
will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for us

[ovirt-users] NTP

2017-08-07 Thread Moacir Ferreira
I found that NTP does not get installed on oVirt Node in the latest version, 
ovirt-node-ng-installer-ovirt-4.1-2017052309.iso.


Also, the installed repositories do not have it. So, is this a bug, or is NTP not 
considered appropriate anymore?


Thanks.

Moacir


Re: [ovirt-users] Users Digest, Vol 71, Issue 32

2017-08-07 Thread Moacir Ferreira
Hi Fernando,


Since I have 3 servers, each one will have a connection to the other two. In this 
case I could set up 3 subnets, one for each link, avoiding in this way having to 
implement bridging (and Spanning Tree) on the servers. My idea is that when I build 
the servers I should use the IP addresses of the 40Gb NICs to set up Gluster, and 
everything should be OK. Anyway, I will test it in a virtual environment before 
installing.
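A minimal sketch of what that addressing could look like on one of the three hosts, 
assuming /30 point-to-point subnets per 40Gb link (interface names and addresses are 
invented):

# host1: link to host2 on ens1f0, link to host3 on ens1f1
ip addr add 10.10.12.1/30 dev ens1f0    # host1 <-> host2 subnet
ip addr add 10.10.13.1/30 dev ens1f1    # host1 <-> host3 subnet
ip link set ens1f0 mtu 9000 up
ip link set ens1f1 mtu 9000 up

# Gluster peers are then probed over the direct links
gluster peer probe 10.10.12.2
gluster peer probe 10.10.13.2

In practice you would normally map one hostname per peer in /etc/hosts, resolving to 
the per-link address on each host, so that all peers agree on the peer names.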

Thanks,
Moacir


--

Message: 1
Date: Mon, 7 Aug 2017 10:08:32 -0300
From: FERNANDO FREDIANI <fernando.fredi...@upx.com>
To: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <8dd168a0-d5ab-bdf4-5c5d-197909afc...@upx.com>
Content-Type: text/plain; charset="windows-1252"; Format="flowed"

Moacir, I believe that to use the 3 servers directly connected to each
other without a switch, you have to have a bridge on each server for every
2 physical interfaces to allow the traffic to pass through at layer 2 (is it
possible to create this from the oVirt Engine web interface?). If your
ovirtmgmt network is separate from the others (it really should be), that
should be fine to do.


Fernando


On 07/08/2017 07:13, Moacir Ferreira wrote:
>
> Hi, in-line responses.
>
>
> Thanks,
>
> Moacir
>
>
> 
> *From:* Yaniv Kaul <yk...@redhat.com>
> *Sent:* Monday, August 7, 2017 7:42 AM
> *To:* Moacir Ferreira
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Good practices
>
>
> On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira
> <moacirferre...@hotmail.com <mailto:moacirferre...@hotmail.com>> wrote:
>
> I am willing to assemble a oVirt "pod", made of 3 servers, each
> with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The
> idea is to use GlusterFS to provide HA for the VMs. The 3 servers
> have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to
> create a loop like a server triangle using the 40Gb NICs for
> virtualization files (VMs .qcow2) access and to move VMs around
> the pod (east /west traffic) while using the 10Gb interfaces for
> giving services to the outside world (north/south traffic).
>
>
> Very nice gear. How are you planning the network exactly? Without a
> switch, back-to-back? (sounds OK to me, just wanted to ensure this is
> what the 'dual' is used for). However, I'm unsure if you have the
> correct balance between the interface speeds (40g) and the disks (too
> many HDDs?).
>
> Moacir:The idea is to have a very high performance network for the
> distributed file system and to prevent bottlenecks when we move one VM
> from a node to another. Using 40Gb NICs I can just connect the servers
> back-to-back. In this case I don't need the expensive 40Gb switch, I
> get very high speed and no contention between north/south traffic with
> east/west.
>
>
> This said, my first question is: How should I deploy GlusterFS in
> such oVirt scenario? My questions are:
>
>
> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node,
> and then create a GlusterFS using them?
>
> I would assume RAID 1 for the operating system (you don't want a
> single point of failure there?) and the rest JBODs. The SSD will be
> used for caching, I reckon? (I personally would add more SSDs instead
> of HDDs, but it does depend on the disk sizes and your space requirements.
>
> Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic
> JBOD or a JBOD assembled using RAID-5 "disks" created by the server's
> disk controller?
>
> 2 - Instead, should I create a JBOD array made of all server's disks?
>
> 3 - What is the best Gluster configuration to provide for HA while
> not consuming too much disk space?
>
>
> Replica 2 + Arbiter sounds good to me.
> Moacir: I agree, and that is what I am using.
>
> 4 - Does an oVirt hypervisor pod like the one I am planning to build,
> and the virtualization environment, benefit from tiering when using an
> SSD? And if so, will Gluster do it by default or do I have to
> configure it to do so?
>
>
> Yes, I believe using lvmcache is the best way to go.
>
> Moacir: Are you sure? I say that because the qcow2 files will be
> quite big. So if tiering is "file based" the SSD would have to be
> very, very big unless Gluster tiering does it by "chunks of data".
>
>
> At the bottom line, what is the good practice for using GlusterFS
> in small pods for enterprises?
>
>
> Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). Sharding
> (enabled out of the box if you use a hyper-converged setup via gdeploy).

Re: [ovirt-users] Users Digest, Vol 71, Issue 37

2017-08-07 Thread Moacir Ferreira
distributed replicated
>>>> with 2x replication. In general you'll get better performance with
>>>> larger bricks. 12 drives is often a sweet spot. Another option
>>>> would be to create a separate tier using all SSD's."
>>>>
>>>> In order to do SSD tiering, from my understanding you would need 1 x
>>>> NVMe drive in each server, or 4 x SSD hot tier (it needs to be
>>>> distributed, replicated for the hot tier if not using NVMe). So
>>>> with you only having 1 SSD drive in each server, I'd suggest maybe
>>>> looking into the NVMe option.
>>>>
>>>> Since you're using only 3 servers, what I'd probably suggest is to
>>>> do (2 Replicas + Arbiter Node); this setup actually doesn't require
>>>> the 3rd server to have big drives at all as it only stores
>>>> meta-data about the files and not actually a full copy.
>>>>
>>>> Please see the attached document that was given to me by Red Hat
>>>> to get more information on this. Hope this information helps you.
>>>>
>>>> --
>>>>
>>>> Devin Acosta, RHCA, RHVCA
>>>> Red Hat Certified Architect
>>>>
>>>> On August 6, 2017 at 7:29:29 PM, Moacir Ferreira
>>>> (moacirferre...@hotmail.com <mailto:moacirferre...@hotmail.com>) wrote:
>>>>
>>>>> I am willing to assemble a oVirt "pod", made of 3 servers, each
>>>>> with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The
>>>>> idea is to use GlusterFS to provide HA for the VMs. The 3 servers
>>>>> have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to
>>>>> create a loop like a server triangle using the 40Gb NICs for
>>>>> virtualization files (VMs .qcow2) access and to move VMs around
>>>>> the pod (east /west traffic) while using the 10Gb interfaces for
>>>>> giving services to the outside world (north/south traffic).
>>>>>
>>>>>
>>>>> This said, my first question is: How should I deploy GlusterFS in
>>>>> such oVirt scenario? My questions are:
>>>>>
>>>>>
>>>>> 1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node,
>>>>> and then create a GlusterFS using them?
>>>>>
>>>>> 2 - Instead, should I create a JBOD array made of all server's disks?
>>>>>
>>>>> 3 - What is the best Gluster configuration to provide for HA while
>>>>> not consuming too much disk space?
>>>>>
>>>>> 4 - Does a oVirt hypervisor pod like I am planning to build, and
>>>>> the virtualization environment, benefits from tiering when using a
>>>>> SSD disk? And yes, will Gluster do it by default or I have to
>>>>> configure it to do so?
>>>>>
>>>>>
>>>>> At the bottom line, what is the good practice for using GlusterFS
>>>>> in small pods for enterprises?
>>>>>
>>>>>
>>>>> You opinion/feedback will be really appreciated!
>>>>>
>>>>> Moacir
>>>>>
>>>>> ___
>>>>> Users mailing list
>>>>> Users@ovirt.org <mailto:Users@ovirt.org>
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>

--
Recogizer Group GmbH

Dr.rer.nat. Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555

E-Mail erekle.magra...@recogizer.de
Web: www.recogizer.com<http://www.recogizer.com>

Recogizer auf LinkedIn https://www.linkedin.com/company-beta/10039182/
Folgen Sie uns auf Twitter https://twitter.com/recogizer



--

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


End of Users Digest, Vol 71, Issue 37
*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi Colin,


I am in Portugal, so sorry for the late response. It is quite confusing for 
me; please consider:

1 - What if the RAID is done by the server's disk controller, not by software?


2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the 
oVirt node GUI to do this.


3 - As the VM .qcow2 files are quite big, tiering would only help if done by an 
intelligent system that uses the SSD for chunks of data, not for the entire 
.qcow2 file. But I guess this is a problem everybody else has too. So, do you 
know how tiering works in Gluster?


4 - I am putting the OS on the first disk. However, would you do differently?


Moacir


From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices

1) RAID5 may be a performance hit.

2) I'd be inclined to do this as JBOD, creating a distributed disperse volume 
with bricks on each server.  Something like:

# Prints the command for review (drop the leading "echo" to actually run it).
# 15 bricks with disperse 5 (3 data + 2 redundancy) gives 3 disperse sets; interleaving
# the servers keeps at most 2 bricks of any set on one host (gluster may ask to confirm).
echo gluster volume create dispersevol disperse-data 3 redundancy 2 \
$(for BRICK in $(seq 1 5); do for SERVER in a b c; do
  echo -n "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick "; done; done)

3) I think the above.

4) Gluster does support tiering, but IIRC you'd need the same number of SSD as 
spindle drives.  There may be another way to use the SSD as a fast cache.

Where are you putting the OS?

Hope I understood the question...

Thanks

On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira 
<moacirferre...@hotmail.com<mailto:moacirferre...@hotmail.com>> wrote:

I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD? And if so, 
will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org<mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to shutdown an oVirt cluster with Gluster and hosted engine

2017-08-07 Thread Moacir Ferreira
I have installed an oVirt cluster in a KVM virtualized test environment. Now, 
how do I properly shut down the oVirt cluster, with Gluster and the hosted 
engine?

I.e.: I want to install a cluster of 3 servers and then send it to a remote 
office. How do I do it properly? I noticed that glusterd is not enabled to 
start automatically. And how do I deal with the hosted engine?
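
For reference, a rough sketch of the usual order of operations (volume name is an 
example; double-check against the oVirt documentation for your version):

# On one host: keep the HA agents from restarting the engine, then stop it
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown            # after shutting down the regular VMs first
# When the Gluster volumes show no pending heals, power the hosts off
gluster volume heal engine info        # repeat for each volume
shutdown -h now                        # on every host
# At the new site: power on the hosts, start/enable glusterd, wait for heals,
# then leave maintenance:  hosted-engine --set-maintenance --mode=none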


Thanks,

Moacir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi, in-line responses.


Thanks,

Moacir


From: Yaniv Kaul <yk...@redhat.com>
Sent: Monday, August 7, 2017 7:42 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices



On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira 
<moacirferre...@hotmail.com<mailto:moacirferre...@hotmail.com>> wrote:

I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).

Very nice gear. How are you planning the network exactly? Without a switch, 
back-to-back? (sounds OK to me, just wanted to ensure this is what the 'dual' 
is used for). However, I'm unsure if you have the correct balance between the 
interface speeds (40g) and the disks (too many HDDs?).

Moacir: The idea is to have a very high performance network for the distributed 
file system and to prevent bottlenecks when we move one VM from a node to 
another. Using 40Gb NICs I can just connect the servers back-to-back. In this 
case I don't need an expensive 40Gb switch, and I get very high speed with no 
contention between north/south and east/west traffic.



This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

I would assume RAID 1 for the operating system (you don't want a single point 
of failure there?) and the rest JBODs. The SSD will be used for caching, I 
reckon? (I personally would add more SSDs instead of HDDs, but it does depend 
on the disk sizes and your space requirements.)

Moacir: Yes, I agree that I need a RAID-1 for the OS. Now, generic JBOD or a 
JBOD assembled using RAID-5 "disks" created by the server's disk controller?
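
Whichever way you go, the brick itself ends up being an XFS filesystem on top of 
whatever block device you choose. A minimal sketch of preparing one brick on a 
hardware-RAID LUN (hypothetical device and names; gdeploy automates roughly this, 
using thin-provisioned LVs):

pvcreate /dev/sdb                                   # the RAID-5/6 LUN
vgcreate gluster_vg /dev/sdb
lvcreate -l 100%FREE -n brick1 gluster_vg
mkfs.xfs -i size=512 /dev/gluster_vg/brick1         # 512-byte inodes for Gluster xattrs
mkdir -p /gluster/brick1
echo '/dev/gluster_vg/brick1 /gluster/brick1 xfs defaults 0 0' >> /etc/fstab
mount /gluster/brick1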


2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

Replica 2 + Arbiter sounds good to me.
Moacir: I agree, and that is what I am using.


4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD? And if so, 
will Gluster do it by default or do I have to configure it to do so?

Yes, I believe using lvmcache is the best way to go.

Moacir: Are you sure? I say that because the qcow2 files will be quite big. So 
if tiering is "file based" the SSD would have to be very, very big unless 
Gluster tiering does it by "chunks of data".
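
For what it's worth, lvmcache (dm-cache) works at the block level: it caches hot 
extents, not whole files, so even a single large qcow2 can benefit. A minimal 
sketch of attaching the SSD as a cache to a brick LV, with hypothetical device 
and volume group names:

pvcreate /dev/sdh                                   # the SSD
vgextend gluster_vg /dev/sdh
lvcreate -L 1G   -n brick1_cmeta gluster_vg /dev/sdh
lvcreate -L 200G -n brick1_cdata gluster_vg /dev/sdh
lvconvert -y --type cache-pool --poolmetadata gluster_vg/brick1_cmeta gluster_vg/brick1_cdata
lvconvert -y --type cache --cachepool gluster_vg/brick1_cdata gluster_vg/brick1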


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?

Don't forget jumbo frames. libgfapi (coming hopefully in 4.1.5). Sharding 
(enabled out of the box if you use a hyper-converged setup via gdeploy).
Moacir: Yes! This is another reason to have separate networks for north/south 
and east/west. In that way I can use the standard MTU on the 10Gb NICs and 
jumbo frames on the file/move 40Gb NICs.
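
To make the jumbo-frame part concrete (hypothetical connection name and peer 
address), the MTU is raised only on the storage-facing NICs and then verified 
end to end:

nmcli con modify s1-s2 802-3-ethernet.mtu 9000
nmcli con up s1-s2
ping -M do -s 8972 10.10.12.2        # 8972 bytes + 28 bytes of headers = 9000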

Y.



You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org<mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Devin,


Many, many thanks for your response. I will read the doc you sent and if I still 
have questions I will post them here.


But why would I use a RAIDed brick if Gluster, by itself, already "protects" 
the data by making replicas? You see, that is what is confusing to me...


Thanks,

Moacir



From: Devin Acosta <de...@pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices


Moacir,

I have recently installed multiple Red Hat Virtualization hosts for several 
different companies, and have dealt with the Red Hat Support Team in depth 
about optimal configuration in regards to setting up GlusterFS most efficiently 
and I wanted to share with you what I learned.

In general the Red Hat Virtualization team frowns upon using each disk of the 
system as just a JBOD. Sure, there is some protection by having the data 
replicated; however, the recommendation is to use RAID 6 (preferred) or RAID 5, 
or RAID 1 at the very least.

Here is the direct quote from Red Hat when I asked about RAID and Bricks:

"A typical Gluster configuration would use RAID underneath the bricks. RAID 6 
is most typical as it gives you 2 disk failure protection, but RAID 5 could be 
used too. Once you have the RAIDed bricks, you'd then apply the desired 
replication on top of that. The most popular way of doing this would be 
distributed replicated with 2x replication. In general you'll get better 
performance with larger bricks. 12 drives is often a sweet spot. Another option 
would be to create a separate tier using all SSD’s.”

In order to do SSD tiering, from my understanding you would need 1 x NVMe drive 
in each server, or 4 x SSD hot tier (it needs to be distributed, replicated for 
the hot tier if not using NVMe). So with you only having 1 SSD drive in each 
server, I’d suggest maybe looking into the NVMe option.

Since you're using only 3 servers, what I’d probably suggest is to do (2 Replicas 
+ Arbiter Node); this setup actually doesn’t require the 3rd server to have big 
drives at all, as it only stores meta-data about the files and not actually a 
full copy.
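
To make that concrete: in Gluster syntax "2 replicas + arbiter" is expressed as 
"replica 3 arbiter 1". A hedged example with hypothetical host names and brick 
paths (the arbiter brick on server3 can sit on a much smaller disk):

gluster volume create vmstore replica 3 arbiter 1 \
    server1:/gluster/brick1/vmstore \
    server2:/gluster/brick1/vmstore \
    server3:/gluster/arbiter1/vmstore
gluster volume start vmstore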

Please see the attached document that was given to me by Red Hat to get more 
information on this. Hope this information helps you.


--

Devin Acosta, RHCA, RHVCA
Red Hat Certified Architect


On August 6, 2017 at 7:29:29 PM, Moacir Ferreira 
(moacirferre...@hotmail.com<mailto:moacirferre...@hotmail.com>) wrote:

I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD? And if so, 
will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


You opinion/feedback will be really appreciated!

Moacir

___
Users mailing list
Users@ovirt.org<mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Good practices

2017-08-06 Thread Moacir Ferreira
I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU 
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS 
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb 
NIC. So my intention is to create a loop like a server triangle using the 40Gb 
NICs for virtualization files (VMs .qcow2) access and to move VMs around the 
pod (east /west traffic) while using the 10Gb interfaces for giving services to 
the outside world (north/south traffic).


This said, my first question is: How should I deploy GlusterFS in such oVirt 
scenario? My questions are:


1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then 
create a GlusterFS using them?

2 - Instead, should I create a JBOD array made of all server's disks?

3 - What is the best Gluster configuration to provide for HA while not 
consuming too much disk space?

4 - Does an oVirt hypervisor pod like the one I am planning to build, and the 
virtualization environment, benefit from tiering when using an SSD? And if so, 
will Gluster do it by default or do I have to configure it to do so?


At the bottom line, what is the good practice for using GlusterFS in small pods 
for enterprises?


You opinion/feedback will be really appreciated!

Moacir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users