[ovirt-users] Storage performance comparison (gluster vs FC)

2020-02-26 Thread Rik Theys
Hi,

We currently use oVirt on two hosts that connect to a shared storage
using SAS. In oVirt this is an "FC" storage domain. Since the warranty
on the storage box is ending, we are looking at alternatives.

One of the options would be to use gluster and use a "hyperconverged"
setup where compute and gluster are on the same hosts. We would probably
end up with 3 hosts and a "replica 3 arbiter 1" gluster volume. (Or is
another volume type recommended for this kind of setup?)
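For reference, my understanding is that such a volume would be created
roughly along these lines (the volume, host, and brick names below are
just placeholders, not our actual setup):

  # Two full data bricks plus one arbiter brick; the arbiter stores only
  # file names and metadata, so the third host needs far less disk space.
  gluster volume create datavol replica 3 arbiter 1 \
      host1:/gluster/brick1/datavol \
      host2:/gluster/brick1/datavol \
      host3:/gluster/arbiter1/datavol
  gluster volume start datavol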

I was wondering what the expected performance of this type of setup
would be compared to shared storage over FC. I expect the I/O latency of
gluster to be much higher than that of the SAS-connected storage box.
Has anybody compared these storage setups?
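To make the comparison concrete, I was thinking of running the same
single-threaded, direct-I/O latency test against both storage domains,
something like this fio sketch (the test file path is a placeholder):

  # 4k random writes at queue depth 1 with the page cache bypassed; the
  # completion latency (clat) reported here is what gluster's replication
  # path would add to.
  fio --name=lat-test --filename=/path/on/storage/testfile --size=1G \
      --rw=randwrite --bs=4k --iodepth=1 --ioengine=libaio --direct=1 \
      --runtime=60 --time_based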

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6HQF3N2PJRQY6T27LEKR3ZFMO62MUFC/


Re: [ovirt-users] Storage Performance

2017-10-26 Thread Yaniv Kaul
On Oct 26, 2017 6:16 PM, "Bryan Sockel"  wrote:

My server load is pretty light.  Currently there are no more than 15-20 VMs
running on my oVirt configuration.  Attached are performance-testing results
from the local host and from a Windows box.  I have also attached my gluster
volume configuration.   There seems to be a significant performance loss
between host performance and guest performance.  Not sure if it is
related to my limited bandwidth, mismatched server hardware, or something
else.


I would begin by comparing the host performance with a Linux guest, then
move on to Windows. Also, please ensure the virtio-rng driver is installed
(I assume you already use virtio or virtio-scsi).
Y.
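Inside a Linux guest, both can be checked roughly like this (a generic
sketch, not oVirt-specific):

  # Confirm the disks are attached via virtio/virtio-scsi rather than
  # emulated IDE/SATA.
  lspci | grep -i virtio
  # Confirm the guest is actually using the virtio-rng device as its
  # entropy source.
  cat /sys/class/misc/hw_random/rng_current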



When not performing any major disk-intensive activity, the system typically
runs at less than 10 MiB/s of disk I/O, with network activity around 100 Mbps.

Does anyone know if there is any sort of best-practices document when it
comes to hardware setup for gluster? Specifically related to hardware RAID
vs. JBOD, stripe size, brick configuration, etc.
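As an illustration of the kind of alignment such a document typically
covers: when bricks do sit on hardware RAID, the XFS stripe unit and width
are usually matched to the array. The values below are an assumption for a
256k stripe across 8 data spindles (as in a 16-disk RAID 10); adjust for
the actual layout:

  # su = controller stripe unit, sw = number of data spindles (a 16-disk
  # RAID 10 has 8).  -i size=512 is commonly recommended for gluster
  # bricks so inodes have room for extended attributes.
  mkfs.xfs -i size=512 -d su=256k,sw=8 /dev/sdX   # /dev/sdX is a placeholder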

Thanks
Bryan



-Original Message-
From: Juan Pablo 
To: Bryan Sockel 
Cc: "users@ovirt.org" 
Date: Thu, 26 Oct 2017 09:59:16 -0300
Subject: Re: [ovirt-users] Storage Performance

Hi, can you check IOPS and state the number of VMs? Run iostat -x 1 for a
while =)

Isn't RAID discouraged? AFAIK gluster likes JBOD, am I wrong?


regards,
JP

2017-10-25 12:05 GMT-03:00 Bryan Sockel :
>
> Have a question in regards to storage performance.  I have a gluster
> replica 3 volume that we are testing for performance.  In my current
> configuration, one server has 16 x 1.2 TB (10K RPM, 2.5-inch) drives
> configured in RAID 10 with a 256k stripe.  My second server is configured
> with 4 x 6 TB (3.5-inch) drives, also in RAID 10 with a 256k stripe.  Each
> server has an 802.3ad bond (4 x 1 GbE) for network links and write-back
> enabled on the RAID controller.
>
> I am seeing a lot of network usage (a solid 3 Gbps) when I perform file
> copies on the VM attached to that gluster volume, but I only see spikes in
> the disk I/O when watching the dashboard through the Cockpit interface.
> The spikes are up to 1.5 Gbps, but I would say the average throughput is
> maybe 256 Mbps.
>
> Is this to be expected, or should disk I/O show as solid activity in the
> graphs?  Is it better to use a 256k stripe or a 512k stripe on the
> hardware RAID configuration?
>
> Eventually I plan to match up the hardware for better performance.
>
>
> Thanks
>
> Bryan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage Performance

2017-10-26 Thread FERNANDO FREDIANI
That was my impression too, but unfortunately someone said on this mailing
list recently that Gluster isn't clever enough to work without RAID
controllers, and that when disks fail it makes replacement difficult.
Perhaps someone with more knowledge could clarify this point, which would
certainly benefit people.


Fernando


On 26/10/2017 10:59, Juan Pablo wrote:
Hi, can you check IOPS and state the number of VMs? Run iostat -x 1 for a
while =)


Isn't RAID discouraged? AFAIK gluster likes JBOD, am I wrong?


regards,
JP

2017-10-25 12:05 GMT-03:00 Bryan Sockel:


Have a question in regards to storage performance.  I have a
gluster replica 3 volume that we are testing for performance.  In
my current configuration, one server has 16 x 1.2 TB (10K RPM,
2.5-inch) drives configured in RAID 10 with a 256k stripe.  My
second server is configured with 4 x 6 TB (3.5-inch) drives, also
in RAID 10 with a 256k stripe.  Each server has an 802.3ad bond
(4 x 1 GbE) for network links and write-back enabled on the RAID
controller.

I am seeing a lot of network usage (a solid 3 Gbps) when I perform
file copies on the VM attached to that gluster volume, but I only
see spikes in the disk I/O when watching the dashboard through the
Cockpit interface.  The spikes are up to 1.5 Gbps, but I would say
the average throughput is maybe 256 Mbps.

Is this to be expected, or should disk I/O show as solid activity
in the graphs?  Is it better to use a 256k stripe or a 512k stripe
on the hardware RAID configuration?

Eventually I plan to match up the hardware for better performance.
Thanks
Bryan

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] Storage Performance

2017-10-26 Thread Juan Pablo
Hi, can you check IOPS and state the number of VMs? Run iostat -x 1 for a
while =)
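As a rough guide to that output: r/s and w/s give the read/write IOPS,
await the average I/O latency in milliseconds, and %util how saturated the
device is.

  # Extended device statistics, refreshed every second.
  iostat -x 1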

Isn't RAID discouraged? AFAIK gluster likes JBOD, am I wrong?


regards,
JP

2017-10-25 12:05 GMT-03:00 Bryan Sockel :

> Have a question in regards to storage performance.  I have a gluster
> replica 3 volume that we are testing for performance.  In my current
> configuration, one server has 16 x 1.2 TB (10K RPM, 2.5-inch) drives
> configured in RAID 10 with a 256k stripe.  My second server is configured
> with 4 x 6 TB (3.5-inch) drives, also in RAID 10 with a 256k stripe.  Each
> server has an 802.3ad bond (4 x 1 GbE) for network links and write-back
> enabled on the RAID controller.
>
> I am seeing a lot of network usage (a solid 3 Gbps) when I perform file
> copies on the VM attached to that gluster volume, but I only see spikes in
> the disk I/O when watching the dashboard through the Cockpit interface.
> The spikes are up to 1.5 Gbps, but I would say the average throughput is
> maybe 256 Mbps.
>
> Is this to be expected, or should disk I/O show as solid activity in the
> graphs?  Is it better to use a 256k stripe or a 512k stripe on the
> hardware RAID configuration?
>
> Eventually I plan to match up the hardware for better performance.
>
>
> Thanks
>
> Bryan
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Storage Performance

2017-10-25 Thread Bryan Sockel
Have a question in regards to storage performance.  I have a gluster replica
3 volume that we are testing for performance.  In my current configuration,
one server has 16 x 1.2 TB (10K RPM, 2.5-inch) drives configured in RAID 10
with a 256k stripe.  My second server is configured with 4 x 6 TB (3.5-inch)
drives, also in RAID 10 with a 256k stripe.  Each server has an 802.3ad bond
(4 x 1 GbE) for network links and write-back enabled on the RAID controller.

I am seeing a lot of network usage (a solid 3 Gbps) when I perform file
copies on the VM attached to that gluster volume, but I only see spikes in
the disk I/O when watching the dashboard through the Cockpit interface.  The
spikes are up to 1.5 Gbps, but I would say the average throughput is maybe
256 Mbps.
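One caveat about those numbers: with an 802.3ad bond of 4 x 1 GbE links,
each TCP stream is hashed onto a single member link, so any one stream tops
out around 1 Gbps and only parallel streams can fill the bond. A quick way
to sanity-check the network path between the servers (the host name below
is a placeholder):

  # Run 'iperf3 -s' on one server, then from the other push four parallel
  # streams so the bond hash can spread them across member links.
  iperf3 -c gluster-server1 -P 4 -t 30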

Is this to be expected, or should disk I/O show as solid activity in the
graphs?  Is it better to use a 256k stripe or a 512k stripe on the hardware
RAID configuration?

Eventually I plan to match up the hardware for better performance.


Thanks

Bryan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users