From: Johan Bernhardsson <jo...@kafit.se>
Sent: Tuesday, August 8, 2017 7:03 AM
To: Moacir Ferreira; Devin Acosta; users@ovirt.org
Subject: Re: [ovirt-users] Good practices
You attach the SSD as a hot tier with a gluster command. I don't think that
gdeploy or the oVirt GUI can do it.
The gluster docs and Red Hat docs explain tiering quite well.
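For example, attaching (and later detaching) a hot tier looks roughly like
this; the volume and brick names are placeholders and the exact syntax
depends on the Gluster release, so check "gluster volume tier help" first:

  # attach two SSD bricks as a hot tier to an existing replica volume
  gluster volume tier vmstore attach replica 2 \
      server1:/gluster/ssd/vmstore server2:/gluster/ssd/vmstore
  # watch promotion/demotion activity
  gluster volume tier vmstore status
  # remove the tier again when no longer needed
  gluster volume tier vmstore detach start
  gluster volume tier vmstore detach commit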
/Johan
Fernando,
I agree that, by common sense, RAID is not required here. The only reason to
set up RAID is GlusterFS's lack of manageability. So you just buy
manageability at the cost of extra hardware and, in some scenarios, write
performance. That is it.
From: FERNANDO FREDIANI <fernando.fredi...@upx.com>
Sent: Tuesday, August 8, 2017 3:08 AM
To: Moacir Ferreira
Cc: Colin Coe; users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Moacir, I understand that if you do this type of configuration you will be
severely impacted on storage performance, especially for writes. Even if you
On Tue, 2017-08-08 at 10:24 -0300, FERNANDO FREDIANI wrote:
That's something about the way RAID works, regardless of what 'super-ultra'
powerful hardware controller you may have. RAID 5 or 6 will never have the
same write performance as RAID 10 or 0, for example. Writeback caches can
deal with bursts well, but they have a limit, therefore there will
Ok, the 40Gb NICs that I got were for free. But anyway, if you were working
with 6 HDD + 1 SSD per server, then you get 21 disks in your cluster. As data
in a JBOD will be built all over the network, it can be really intensive,
especially depending on the number of replicas you choose for the bricks. But
if I am not wrong, this is one of the key differences between GlusterFS and
Ceph. Can you comment?
Moacir
From: Johan Bernhardsson <jo...@kafit.se>
Sent: Tuesday, August 8, 2017 7:03 AM
To: Moacir Ferreira; Devin Acosta; users@ovirt.org
Subject: Re: [ovirt-users] Good practices
> On 8 August 2017 at 08:50, Yaniv Kaul wrote:
>
> Storage is usually the slowest link in the chain. I personally believe that
> spending the money on NVMe drives makes more sense than 40Gb (except [1],
> which is suspiciously cheap!)
>
> Y.
> [1] http://a.co/4hsCTqG
> On 8 August 2017 at 04:08, FERNANDO FREDIANI wrote:
>
> Even if you have a Hardware RAID Controller with Writeback cache you will
> have a significant performance penalty and may not fully use all the
> resources you mentioned you have.
>
Nope again, from my
Moacir
From: Devin Acosta <de...@pabstatencio.com>
Sent: Monday, August 7, 2017 7:46 AM
To: Moacir Ferreira; users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Moacir,
I have recently installed multiple Red Hat Virtualization hosts for several
different companies
Thanks for your response!
Moacir
From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 12:41 PM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
Hi
I just thought that you'd do hardware RAID if you had the controller or JBOD
if you didn't. In hindsight
Hi Fernando,
Indeed, having an arbiter node is always a good idea, and it saves a lot of
costs.
Good luck with your setup.
Cheers
Erekle
On 07.08.2017 23:03, FERNANDO FREDIANI wrote:
Thanks for the detailed answer, Erekle.
I conclude that in any scenario it is worth having an arbiter node in order
to avoid wasting more disk space on RAID X + Gluster replication on top of
it. The cost seems much lower if you consider the running costs of the whole
storage and compare it
Hi Fernando,
So let's go with the following scenarios:
1. Let's say you have two servers (replication factor is 2), i.e. two
bricks per volume. In this case it is strongly recommended to have an
arbiter node, the metadata storage that will guarantee avoiding the
split-brain situation, in
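For reference, such a replica 2 + arbiter setup is typically created along
these lines (hostnames and brick paths below are only placeholders):

  gluster volume create vmstore replica 3 arbiter 1 \
      server1:/gluster/brick1/vmstore \
      server2:/gluster/brick1/vmstore \
      arbiter1:/gluster/arbiter/vmstore
  gluster volume start vmstore

The arbiter brick stores only file metadata, so it can sit on a much smaller
and cheaper disk than the two data bricks.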
What you mentioned is a specific case and not a generic situation. The main
point there is that RAID 5 or 6 impacts write performance compared to when
you write to only 2 given disks at a time. That was the comparison made.
Fernando
On 07/08/2017 16:49, Fabrice Bacchella wrote:
On 7 August
>> Moacir: Yes! This is another reason to have separate networks for
>> north/south and east/west. In that way I can use the standard MTU on the
>> 10Gb NICs and jumbo frames on the file/move 40Gb NICs.
Why not jumbo frames everywhere?
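If the switches support it, enabling jumbo frames is mostly a matter of
raising the MTU on the storage-facing interfaces (the interface name below is
a placeholder; the same MTU must also be set on any bond/bridge on top and on
the switch ports):

  # assuming enp94s0f0 is the 40Gb storage NIC
  ip link set dev enp94s0f0 mtu 9000
  # verify the setting and that 9000-byte frames actually pass end to end
  ip link show enp94s0f0
  ping -M do -s 8972 <peer storage IP>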
> On 7 August 2017 at 17:41, FERNANDO FREDIANI wrote:
>
> Yet another downside of having a RAID (especially RAID 5 or 6) is that it
> reduces considerably the write speeds, as each group of disks will end up
> having the write speed of a single disk as all other
>> not for the
>> entire .qcow2 file. But I guess this is a problem everybody else has. So,
>> do you know how tiering works in Gluster?
>>
>> 4 - I am putting the OS on the first disk. However, would you do
>> differently?
>>
>> Moacir
On Mon, Aug 7, 2017 at 6:41 PM, FERNANDO FREDIANI wrote:
Thanks for the clarification, Erekle.
However, I am surprised by this way of operating of GlusterFS, as it adds
another layer of complexity to the system (either a hardware or software
RAID) before the gluster config and increases the system's overall costs.
An important point to consider
Hi Fernando,
Here is my experience: if you consider a particular hard drive as a brick
for a gluster volume and it dies, i.e. it becomes inaccessible, it's a huge
hassle to discard that brick and exchange it with another one, since gluster
sometimes tries to access that broken brick and it's causing
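For what it's worth, swapping out a dead brick usually looks something like
this (volume name and brick paths are placeholders; check the exact syntax
for your Gluster version):

  # point the volume at a freshly prepared brick in place of the dead one
  gluster volume replace-brick vmstore \
      server2:/gluster/brick3/vmstore server2:/gluster/brick3new/vmstore \
      commit force
  # then let self-heal repopulate the new brick
  gluster volume heal vmstore full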
For any RAID 5 or 6 configuration I normally follow a simple golden rule
which has given good results so far:
- up to 4 disks: RAID 5
- 5 or more disks: RAID 6
However, I didn't really understand the recommendation to use any RAID with
GlusterFS. I always thought that GlusterFS likes to work in
Hi, in-line responses.
Thanks,
Moacir
From: Yaniv Kaul <yk...@redhat.com>
Sent: Monday, August 7, 2017 7:42 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
On Sun, Aug 6, 2017 at 5:49 PM, Moacir Ferreira
<moacirferre...@hotmail.com> wrote:
From: Colin Coe <colin@gmail.com>
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Good practices
1) RAID5 may be a performance hit
2) I'd be inclined to do this as JBOD by creating a distributed disperse
volume on each server. Something like (7 bricks per server, since
disperse-data 5 + redundancy 2 needs the total brick count to be a multiple
of 7; brick paths are just examples):
echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
    $(for SERVER in a b c; do for BRICK in $(seq 1 7); do echo -e "${SERVER}:/bricks/brick${BRICK}"; done; done)
I am willing to assemble an oVirt "pod", made of 3 servers, each with 2 CPU
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use GlusterFS
to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb
NIC. So my intention is to create a loop like a server triangle