Not sure if anyone is still following this, but the fix ended up being
upgrading the oVirt virt hosts to v4.2.

On Tue, Feb 27, 2018 at 8:03 AM, Ryan Wilkinson <ryanw...@gmail.com> wrote:

> All volumes are configured as replica 3.  I have no arbiter volumes.
> Storage hosts are for storage only and Virt hosts are dedicated Virt
> hosts.  I've checked throughput from the Virt hosts to all 3 gluster hosts
> and am getting ~9Gb/s.
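For anyone measuring the same thing, a quick sketch of that throughput check (the host names are placeholders for the three storage hosts, and it assumes `iperf3 -s` is already running on each of them):

```shell
# Hypothetical storage host names; run "iperf3 -s" on each of them first.
HOSTS="gluster1 gluster2 gluster3"

for h in $HOSTS; do
    echo "== $h =="
    # Print just the receiver-side bitrate from the iperf3 summary line.
    iperf3 -c "$h" -t 10 | awk '/receiver/ {print $(NF-2), $(NF-1)}'
done
```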
>
>
> On Tue, Feb 27, 2018 at 1:33 AM, Alex K <rightkickt...@gmail.com> wrote:
>
>> What is your gluster setup? Please share the volume details for where the
>> VMs are stored. It could be that the slow host is serving an arbiter volume.
>>
>> Alex
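For reference, the brick layout that distinguishes the two cases is visible in the volume info; a sketch, with "vmstore" standing in for the real volume name:

```shell
# "vmstore" is a placeholder for the volume holding the VM images.
gluster volume info vmstore

# A plain replica 3 volume reports "Number of Bricks: 1 x 3 = 3";
# replica 3 with an arbiter reports "1 x (2 + 1) = 3", so the arbiter
# case can be flagged like this:
if gluster volume info vmstore | grep -q '(2 + 1)'; then
    echo "vmstore uses an arbiter brick"
fi
```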
>>
>> On Feb 26, 2018 13:46, "Ryan Wilkinson" <ryanw...@gmail.com> wrote:
>>
>>> Here is the info about the RAID controllers.  They don't seem to be the
>>> culprit.
>>>
>>> Slow host:
>>> Name PERC H710 Mini (Embedded)
>>> Firmware Version 21.3.4-0001
>>> Cache Memory Size 512 MB
>>>
>>> Fast host:
>>> Name PERC H310 Mini (Embedded)
>>> Firmware Version 20.12.1-0002
>>> Cache Memory Size 0 MB
>>>
>>> Slow host:
>>> Name PERC H310 Mini (Embedded)
>>> Firmware Version 20.13.1-0002
>>> Cache Memory Size 0 MB
>>>
>>> Slow host:
>>> Name PERC H310 Mini (Embedded)
>>> Firmware Version 20.13.3-0001
>>> Cache Memory Size 0 MB
>>>
>>> Slow host:
>>> Name PERC H710 Mini (Embedded)
>>> Firmware Version 21.3.5-0002
>>> Cache Memory Size 512 MB
>>>
>>> Fast host:
>>> Name PERC H730
>>> Cache Memory Size 1 GB
>>>
>>> On Mon, Feb 26, 2018 at 9:42 AM, Alvin Starr <al...@netvel.net> wrote:
>>>
>>>> I would be really surprised if the problem was related to the iDRAC.
>>>>
>>>> The iDRAC processor is a standalone CPU with its own NIC and runs
>>>> independently of the main CPU.
>>>>
>>>> That being said, it does have visibility into the whole system.
>>>>
>>>> Try using dmidecode to compare the systems, and take a close look at the
>>>> RAID controllers and what size and kind of cache they have.
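Something along these lines (run as root; the file-naming scheme is just illustrative):

```shell
# Dump the interesting DMI tables on each host, one file per table.
for t in bios system processor memory; do
    dmidecode -t "$t" > "dmi-$t-$(hostname -s).txt"
done
# After copying the files to one machine, diff them pairwise, e.g.:
#   diff dmi-memory-slowhost.txt dmi-memory-fasthost.txt
```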
>>>>
>>>> On 02/26/2018 11:34 AM, Ryan Wilkinson wrote:
>>>>
>>>> I've tested about 12 different Dell servers.  Only a couple of them have
>>>> iDRAC Express; all the others have iDRAC Enterprise.  All the boxes with
>>>> Enterprise perform poorly, and the couple that have Express perform well.  I
>>>> use the disks in RAID mode on all of them.  I've tried a few non-Dell boxes
>>>> and they all perform well, even though some of them are very old.  I've also
>>>> tried disabling the iDRAC, the iDRAC NIC, and virtual storage for the iDRAC,
>>>> with no success.
>>>>
>>>> On Mon, Feb 26, 2018 at 9:28 AM, Serkan Çoban <cobanser...@gmail.com>
>>>> wrote:
>>>>
>>>>> I don't think it is related to the iDRAC itself; more likely some
>>>>> configuration is wrong or there is a hardware error.
>>>>> Did you check the battery of the RAID controller? Do you use the disks
>>>>> in JBOD mode or RAID mode?
>>>>>
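On PERC controllers the LSI MegaCli tool reports both, assuming it is installed (Dell's perccli is the newer equivalent; exact output strings vary by firmware, so treat the grep patterns below as a sketch):

```shell
# Battery/BBU state for all adapters; "Battery State: Optimal" is healthy.
MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL |
    awk -F': ' '/Battery State/ {print $2}'

# Logical-drive info shows the RAID level and current cache policy.
MegaCli64 -LDInfo -Lall -aALL
```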
>>>>> On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson <ryanw...@gmail.com>
>>>>> wrote:
>>>>> > Thanks for the suggestion.  I tried both of these with no difference
>>>>> > in performance.  I have tried several other Dell hosts with iDRAC
>>>>> > Enterprise and am getting the same results.  I also tried a new Dell
>>>>> > T130 with iDRAC Express and was getting over 700 MB/s.  Have any other
>>>>> > users had this issue with iDRAC Enterprise?
>>>>> >
>>>>> >
>>>>> > On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban <
>>>>> cobanser...@gmail.com>
>>>>> > wrote:
>>>>> >>
>>>>> >> Did you check the BIOS/power settings? They should be set for high
>>>>> >> performance.
>>>>> >> You can also try booting with the "intel_idle.max_cstate=0" kernel
>>>>> >> command-line option to be sure the CPUs are not entering power-saving
>>>>> >> states.
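For the record, a sketch of how to check and set that on RHEL/CentOS 7 (paths assume grub2; adjust for other distros):

```shell
# What the intel_idle driver currently allows:
cat /sys/module/intel_idle/parameters/max_cstate

# Append the option to the kernel command line in /etc/default/grub;
# the sed below shows the edit (apply with -i once verified), then
# regenerate grub.cfg and reboot:
sed 's/^\(GRUB_CMDLINE_LINUX="[^"]*\)"/\1 intel_idle.max_cstate=0"/' /etc/default/grub
#   grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
```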
>>>>> >>
>>>>> >> On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson <ryanw...@gmail.com
>>>>> >
>>>>> >> wrote:
>>>>> >> >
>>>>> >> >
>>>>> >> > I have a 3-host gluster replicated cluster that is providing
>>>>> >> > storage for our RHEV environment.  We've been having issues with
>>>>> >> > inconsistent performance from the VMs depending on which
>>>>> >> > hypervisor they are running on.  I've confirmed throughput to be
>>>>> >> > ~9Gb/s from the hypervisors to each of the storage hosts.  I'm
>>>>> >> > getting ~300MB/s disk read speed when our test VM is on the slow
>>>>> >> > hypervisors and over 500MB/s on the faster ones.  The performance
>>>>> >> > doesn't seem to be affected much by the CPU or memory in the
>>>>> >> > hypervisors; I have tried a couple of really old boxes and got
>>>>> >> > over 500MB/s.  The common thread seems to be that the poorly
>>>>> >> > performing hosts all have Dell's iDRAC 7 Enterprise.  I have one
>>>>> >> > hypervisor that has iDRAC 7 Express and it performs well.  We've
>>>>> >> > compared system packages and versions til we're blue in the face
>>>>> >> > and have been struggling with this for a couple of months, but
>>>>> >> > that seems to be the only common denominator.  On one of those
>>>>> >> > iDRAC 7 hosts I've tried disabling the NIC, virtual drive, etc.,
>>>>> >> > but no change in performance.  In addition, I tried 5 new hosts
>>>>> >> > and all are consistent with the iDRAC Enterprise theory.  Anyone
>>>>> >> > else had this issue?!
>>>>> >> >
>>>>> >> >
>>>>> >> >
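FWIW, the kind of read test I'd use to reproduce numbers like these, run inside the test VM (/dev/vda is a placeholder for the VM's actual disk device; iflag=direct bypasses the guest page cache so the gluster backend is what gets measured):

```shell
# Read 4 GiB sequentially and print just the rate from dd's summary line.
dd if=/dev/vda of=/dev/null bs=1M count=4096 iflag=direct 2>&1 |
    awk 'END {print $(NF-1), $NF}'
```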
>>>>> >> > _______________________________________________
>>>>> >> > Gluster-users mailing list
>>>>> >> > Gluster-users@gluster.org
>>>>> >> > http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>> >
>>>>> >
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Alvin Starr                   ||   land:  (905) 513-7688
>>>> Netvel Inc.                   ||   Cell:  (416) 806-0133
>>>> al...@netvel.net              ||
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>
