Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
…To: Moacir Ferreira; Devin Acosta; users@ovirt.org Subject: Re: [ovirt-users] Good practices You attach the SSD as a hot tier with a gluster command. I don't think that gdeploy or the oVirt GUI can do it. The Gluster docs and Red Hat docs explain tiering quite well. /Johan On August 8, 2017 07:06:…
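For readers following along, the attach step Johan refers to can be sketched as below. The volume name and brick paths are hypothetical, and note that Gluster's tiering feature was deprecated in later releases; the command is only assembled and printed here, not run against a live cluster:

```shell
# Hedged sketch: attach two SSD bricks as a replicated hot tier to an
# existing volume. "datavol" and the brick paths are made-up names.
VOL=datavol
TIER_CMD="gluster volume tier ${VOL} attach replica 2 \
server1:/ssd/brick1 server2:/ssd/brick1"
# Print for review instead of executing.
echo "${TIER_CMD}"
```

Detaching later would be `gluster volume tier <vol> detach start` followed by `detach commit` once data has migrated off the hot tier.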

Re: [ovirt-users] Good practices

2017-08-08 Thread Pavel Gashev
Fernando, I agree that RAID is not required here by common sense. The only point of setting up RAID is the lack of manageability of GlusterFS. So you just buy manageability for extra hardware cost and write performance in some scenarios. That is it. On 08/08/2017, 16:24, "users-boun...@ovirt.org on be…

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
…differences between GlusterFS and Ceph. Can you comment? Moacir From: Johan Bernhardsson Sent: Tuesday, August 8, 2017 7:03 AM To: Moacir Ferreira; Devin Acosta; users@ovirt.org Subject: Re: [ovirt-users] Good practices You attach the SSD as a hot tier with…

Re: [ovirt-users] Good practices

2017-08-08 Thread Fabrice Bacchella
> On 8 August 2017 at 15:24, FERNANDO FREDIANI wrote: > > That's something about the way RAID works, regardless of what most 'super-ultra' powerful hardware controller you may have. RAID 5 or 6 will never have the same write performance as a RAID 10 or 0, for example. Writeback caches can deal…

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
…Moacir Ferreira Cc: Colin Coe; users@ovirt.org Subject: Re: [ovirt-users] Good practices Moacir, I understand that if you do this type of configuration you will be severely impacted on storage performance, especially for writes. Even if you have a Hardware RAID Controller with Writeback cache…

Re: [ovirt-users] Good practices

2017-08-08 Thread Karli Sjöberg
On Tue, 2017-08-08 at 10:24 -0300, FERNANDO FREDIANI wrote: > That's something about the way RAID works, regardless of what most 'super-ultra' powerful hardware controller you may have. RAID 5 or 6 will never have the same write performance as a RAID 10 or 0, for example. Writeback caches can…

Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI
…Tuesday, August 8, 2017 3:08 AM *To:* Moacir Ferreira *Cc:* Colin Coe; users@ovirt.org *Subject:* Re: [ovirt-users] Good practices Moacir, I understand that if you do this type of configuration you will be severely impacted on storage performance, especially for writes. Even if you have a Hardware…

Re: [ovirt-users] Good practices

2017-08-08 Thread FERNANDO FREDIANI
That's something about the way RAID works, regardless of what most 'super-ultra' powerful hardware controller you may have. RAID 5 or 6 will never have the same write performance as a RAID 10 or 0, for example. Writeback caches can deal with bursts well, but they have a limit, therefore there will always…

Re: [ovirt-users] Good practices

2017-08-08 Thread Moacir Ferreira
Ok, the 40Gb NICs that I got were free. But anyway, if you were working with 6 HDDs + 1 SSD per server, then you get 21 disks in your cluster. As data in a JBOD will be built all over the network, then it can be really intensive, especially depending on the number of replicas you choose for your…

Re: [ovirt-users] Good practices

2017-08-08 Thread Johan Bernhardsson
…differences between GlusterFS and Ceph. Can you comment? Moacir From: Johan Bernhardsson Sent: Tuesday, August 8, 2017 7:03 AM To: Moacir Ferreira; Devin Acosta; users@ovirt.org Subject: Re: [ovirt-users] Good practices You attach the SSD as a hot tier with…

Re: [ovirt-users] Good practices

2017-08-08 Thread Fabrice Bacchella
> On 8 August 2017 at 08:50, Yaniv Kaul wrote: > > Storage is usually the slowest link in the chain. I personally believe that > spending the money on NVMe drives makes more sense than 40Gb (except [1], > which is suspiciously cheap!) > > Y. > [1] http://a.co/4hsCTqG …

Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
On Tue, Aug 8, 2017 at 9:16 AM, Fabrice Bacchella <fabrice.bacche...@orange.fr> wrote: > On 8 August 2017 at 04:08, FERNANDO FREDIANI wrote: > > Even if you have a Hardware RAID Controller with Writeback cache you > will have a significant performance penalty and may not fully use all the…

Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
On Tue, Aug 8, 2017 at 12:03 AM, FERNANDO FREDIANI <fernando.fredi...@upx.com> wrote: > Thanks for the detailed answer Erekle. > > I conclude that it is worth it in any scenario to have an arbiter node in > order to avoid wasting more disk space on RAID X + Gluster replication on > top of it. The…

Re: [ovirt-users] Good practices

2017-08-07 Thread Fabrice Bacchella
> On 8 August 2017 at 04:08, FERNANDO FREDIANI wrote: > Even if you have a Hardware RAID Controller with Writeback cache you will > have a significant performance penalty and may not fully use all the > resources you mentioned you have. > Nope again; from my experience with HP Smart Array…

Re: [ovirt-users] Good practices

2017-08-07 Thread Johan Bernhardsson
From: Devin Acosta Sent: Monday, August 7, 2017 7:46 AM To: Moacir Ferreira; users@ovirt.org Subject: Re: [ovirt-users] Good practices Moacir, I have recently installed multiple Red Hat Virtualization hosts for several different companies, and have dealt with the Red Hat Support Team in depth

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
…? Thanks, Moacir From: Devin Acosta Sent: Monday, August 7, 2017 7:46 AM To: Moacir Ferreira; users@ovirt.org Subject: Re: [ovirt-users] Good practices Moacir, I have recently installed multiple Red Hat Virtualization hosts for several different companies…

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
…chunks of data, not for the >> entire .qcow2 file. But I guess this is a problem everybody else has. So, >> do you know how tiering works in Gluster? >> >> >> 4 - I am putting the OS on the first disk. However, would you do >> differently? >> >>

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
…Thanks for your response! Moacir From: Colin Coe Sent: Monday, August 7, 2017 12:41 PM To: Moacir Ferreira Cc: users@ovirt.org Subject: Re: [ovirt-users] Good practices Hi I just thought that you'd do hardware RAID if you had the controller or JBOD if you didn't…

Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze
Hi Fernando, Indeed, having an arbiter node is always a good idea, and it saves a lot in costs. Good luck with your setup. Cheers, Erekle On 07.08.2017 23:03, FERNANDO FREDIANI wrote: Thanks for the detailed answer Erekle. I conclude that it is worth it in any scenario to have an arbiter node…

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Thanks for the detailed answer Erekle. I conclude that it is worth it in any scenario to have an arbiter node in order to avoid wasting more disk space on RAID X + Gluster replication on top of it. The cost seems much lower if you consider the running costs of the whole storage and compare it with…

Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze
Hi Fernando (sorry for misspelling your name, I used a different keyboard), so let's go with the following scenarios: 1. Let's say you have two servers (replication factor is 2), i.e. two bricks per volume; in this case it is strongly recommended to have the arbiter node, the metadata storage…
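Erekle's scenario 1 fix, a two-brick replica plus an arbiter, can be sketched as follows. Host names and brick paths are hypothetical, and the command is assembled and printed for review rather than executed:

```shell
# Hedged sketch: replica 3 with arbiter — two full data bricks plus one
# metadata-only arbiter brick that breaks split-brain ties. Names are made up.
ARB_CMD="gluster volume create datavol replica 3 arbiter 1 \
host1:/bricks/data host2:/bricks/data arbiter1:/bricks/arb"
echo "${ARB_CMD}"
```

Because the arbiter brick stores only file metadata, a small disk on a third machine is enough, which is where the cost saving comes from.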

Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze
Hi Franando, so let's go with the following scenarios: 1. Let's say you have two servers (replication factor is 2), i.e. two bricks per volume; in this case it is strongly recommended to have the arbiter node, the metadata storage that will guarantee avoiding the split-brain situation; in this…

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
What you mentioned is a specific case, not a generic situation. The main point there is that RAID 5 or 6 impacts write performance compared to when you write to only 2 given disks at a time. That was the comparison made. Fernando On 07/08/2017 16:49, Fabrice Bacchella wrote: On 7 August 2017…

Re: [ovirt-users] Good practices

2017-08-07 Thread Fabrice Bacchella
>> Moacir: Yes! This is another reason to have separate networks for >> north/south and east/west. That way I can use the standard MTU on the >> 10Gb NICs and jumbo frames on the file/move 40Gb NICs. Why not jumbo frames everywhere?
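One practical note on the MTU split discussed here: every interface and switch port on a segment must agree on the MTU, which is why confining jumbo frames to the dedicated east/west network avoids fragmentation and silent packet loss. A minimal sketch follows; the interface and connection names are made up, and the commands are printed rather than executed:

```shell
# Hedged sketch: jumbo frames (MTU 9000) on the storage-facing NIC only,
# standard 1500 elsewhere. "ens2f0" and "gluster-net" are hypothetical names.
MTU=9000
echo "ip link set dev ens2f0 mtu ${MTU}"
# Persist it via NetworkManager (assumed connection name):
echo "nmcli connection modify gluster-net 802-3-ethernet.mtu ${MTU}"
```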

Re: [ovirt-users] Good practices

2017-08-07 Thread Fabrice Bacchella
> On 7 August 2017 at 17:41, FERNANDO FREDIANI wrote: > > Yet another downside of having RAID (especially RAID 5 or 6) is that it > considerably reduces write speeds, as each group of disks will end up > having the write speed of a single disk, as all other disks of that group have t…

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
…on the first disk. However, would you do differently? Moacir From: Colin Coe Sent: Monday, August 7, 2017 4:48 AM To: Moacir Ferreira Cc: users@ovirt.org Subject: Re: [ovirt-users] Good practices 1) RAID5 may be a performance hit 2) I'd be inclined to do this…

Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
…chunks of data, not for the >> entire .qcow2 file. But I guess this is a problem everybody else has. So, >> do you know how tiering works in Gluster? >> >> >> 4 - I am putting the OS on the first disk. However, would you do >> differently? >> >> >> Moacir >

Re: [ovirt-users] Good practices

2017-08-07 Thread Yaniv Kaul
On Mon, Aug 7, 2017 at 6:41 PM, FERNANDO FREDIANI wrote: > Thanks for the clarification, Erekle. > > However, I am surprised by this way of operating in GlusterFS, as it > adds another layer of complexity to the system (either a hardware or > software RAID) before the gluster config and increases…

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
Thanks for the clarification, Erekle. However, I am surprised by this way of operating in GlusterFS, as it adds another layer of complexity to the system (either a hardware or software RAID) before the gluster config and increases the system's overall costs. An important point to consider is…

Re: [ovirt-users] Good practices

2017-08-07 Thread Erekle Magradze
Hi Frenando, Here is my experience: if you consider a particular hard drive as a brick for a Gluster volume and it dies, i.e. it becomes inaccessible, it's a huge hassle to discard that brick and exchange it with another one, since Gluster sometimes tries to access that broken brick and it's causing (…
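For context on the hassle Erekle describes: with plain bricks (no RAID underneath) a failed disk means a brick-level replacement. A hedged sketch with made-up volume, host, and path names, printed for review rather than executed:

```shell
# Hedged sketch: swap a dead brick for a fresh one in a replicated volume;
# Gluster then heals the new brick from the surviving replicas.
REPLACE_CMD="gluster volume replace-brick datavol \
server1:/bricks/dead server1:/bricks/spare commit force"
echo "${REPLACE_CMD}"
```

With hardware RAID under each brick, a dead disk is instead handled entirely inside the controller and Gluster never notices, which is the manageability argument made elsewhere in this thread.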

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
…Monday, August 7, 2017 7:42 AM *To:* Moacir Ferreira *Cc:* users@ovirt.org *Subject:* Re: [ovirt-users] Good practices On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira <moacirferre...@hotmail.com> wrote: I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU sockets…

Re: [ovirt-users] Good practices

2017-08-07 Thread FERNANDO FREDIANI
For any RAID 5 or 6 configuration I normally follow a simple golden rule which has given good results so far: up to 4 disks, RAID 5; 5 or more disks, RAID 6. However, I didn't really understand the recommendation to use any RAID with GlusterFS. I always thought that GlusterFS likes to work in JBOD…

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
Hi, in-line responses. Thanks, Moacir From: Yaniv Kaul Sent: Monday, August 7, 2017 7:42 AM To: Moacir Ferreira Cc: users@ovirt.org Subject: Re: [ovirt-users] Good practices On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira <moacirferre...@hotmail.com…

Re: [ovirt-users] Good practices

2017-08-07 Thread Colin Coe
…the .qcow2 file. But I guess this is a problem everybody else has. So, > do you know how tiering works in Gluster? > > > 4 - I am putting the OS on the first disk. However, would you do > differently? > > > Moacir > > -- > *From:* Colin Coe…

Re: [ovirt-users] Good practices

2017-08-07 Thread Moacir Ferreira
…Thanks, Moacir From: Devin Acosta Sent: Monday, August 7, 2017 7:46 AM To: Moacir Ferreira; users@ovirt.org Subject: Re: [ovirt-users] Good practices Moacir, I have recently installed multiple Red Hat Virtualization hosts for several different companies, and have…

Re: [ovirt-users] Good practices

2017-08-06 Thread Yaniv Kaul
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira wrote: > I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU > sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use > GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and > a dual 10Gb NIC…

Re: [ovirt-users] Good practices

2017-08-06 Thread Colin Coe
1) RAID5 may be a performance hit 2) I'd be inclined to do this as JBOD by creating a distributed disperse volume on each server. Something like: echo gluster volume create dispersevol disperse-data 5 redundancy 2 \ $(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e "server${SERVER}:/b…
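Colin's command is cut off mid-brick-path in the archive; a hedged reconstruction is below. The /bricks/brick${BRICK} path is an assumption, and, as in the original, the command line is only echoed for inspection:

```shell
# Hedged reconstruction of the truncated loop: build a brick list of
# 3 servers x 5 bricks, then print the volume-create command for review.
# The brick path layout is an assumption; the original mail is cut off.
BRICKS=$(for SERVER in a b c; do
  for BRICK in $(seq 1 5); do
    echo "server${SERVER}:/bricks/brick${BRICK}"
  done
done)
echo gluster volume create dispersevol disperse-data 5 redundancy 2 ${BRICKS}
```

Note that for a distributed disperse volume, Gluster expects the total brick count to be a multiple of disperse-data + redundancy (7 here), so the counts in this sketch would need adjusting before the command would be accepted.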

[ovirt-users] Good practices

2017-08-06 Thread Moacir Ferreira
I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop, like a server triangle…