I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC, and
a QNAP TS-569 Pro NAS with twin gbit NICs and five 7k2 drives.  At
present I have 5 VLANs, each with its own subnet:

 1. my "main" net (VLAN 1),
 2. ovirtmgmt (VLAN 100),
 3. four storage nets (VLANs 101-104).

On the NAS, I enslaved both NICs into an 802.3ad LAG and then bound an
IP address for each of the four storage nets, giving me:

  * bond0.101@bond0:
  * bond0.102@bond0:
  * bond0.103@bond0:
  * bond0.104@bond0:

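For reference, a bond-plus-VLAN layout like the one above can be sketched
with iproute2 (a sketch only: eth0/eth1 are assumed slave names, and the
per-VLAN addresses are omitted here, as above):

```shell
# Sketch: an 802.3ad (LACP) bond with one tagged sub-interface per
# storage VLAN, roughly matching the NAS layout.  Run as root.
ip link add bond0 type bond mode 802.3ad lacp_rate fast xmit_hash_policy layer3+4
for slave in eth0 eth1; do          # assumed slave NIC names
    ip link set "$slave" down
    ip link set "$slave" master bond0
done
ip link set bond0 up

# One VLAN sub-interface per storage net:
for vid in 101 102 103 104; do
    ip link add link bond0 name "bond0.$vid" type vlan id "$vid"
    ip link set "bond0.$vid" up
    # ip addr add <address>/<prefix> dev "bond0.$vid"   # addresses elided, as in the post
done
```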
The hosts are similar, but with all four NICs enslaved into an 802.3ad LAG:

Host 1:

  * bond0.101@bond0:
  * bond0.102@bond0:
  * bond0.103@bond0:
  * bond0.104@bond0:

Host 2:

  * bond0.101@bond0:
  * bond0.102@bond0:
  * bond0.103@bond0:
  * bond0.104@bond0:

I believe my performance could be better, though.  While running
bonnie++ on a VM, the NAS reports top disk throughput around 70 MB/s and
the network (both NICs) topping out around 90 MB/s.  I suspect I'm being
hurt by the load balancing across the NICs.  I've played with various
load-balancing options for the LAGs (src-dst-ip and src-dst-mac) with
little difference in effect.  Watching the resource monitor on the NAS,
I can see that one NIC does almost exclusively transmits while the other
does almost exclusively receives.  Here's the bonnie++ report (my
apologies to those reading plain text here):

Bonnie++ Benchmark results

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
unamed           4G   267  97 75284  21 22775   8   718  97 43559   7 189.5   8
Latency             69048us     754ms     898ms   61246us     311ms    1126ms
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6789  60 +++++ +++ 24948  75 14792  86 +++++ +++ 18163  51
Latency             33937us    1132us    1299us     528us      22us     458us
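That transmit/receive split is expected with 802.3ad: the hash policy
picks one slave per flow, so a single iSCSI TCP connection always lands
on one NIC no matter which policy is chosen; changing the policy only
spreads traffic when several concurrent flows exist.  On a Linux bond
the policy can be inspected and changed through sysfs (a sketch,
assuming the bond0 name above):

```shell
# Show the current transmit hash policy for bond0:
cat /sys/class/net/bond0/bonding/xmit_hash_policy

# Switch to L3+L4 hashing (source/dest IP and port).  Note this still
# only balances across multiple flows, never within one connection:
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
```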

I keep seeing MPIO mentioned for iSCSI deployments, and now I'm trying
to get my head around how best to set it up, or even whether it would
be helpful.  I only have one switch (a Catalyst 3750G) in this small
setup, so fault tolerance at that level isn't a goal.

So... what would the recommendation be?  I've never done MPIO before,
but I at least know where it lives in the web UI.
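For what it's worth, the usual MPIO approach with open-iscsi is to bind
one iSCSI interface record per storage subnet and log in through each,
so dm-multipath sees one path per VLAN — roughly what oVirt's iSCSI
Multipathing tab configures for you.  A manual sketch, assuming the
bond0.10X interfaces above (the portal address is elided, as elsewhere
in this post):

```shell
# Create one iSCSI iface record per storage VLAN and bind it to the
# matching network interface ("storage10X" names are made up here):
for vid in 101 102 103 104; do
    iscsiadm -m iface -I "storage$vid" --op new
    iscsiadm -m iface -I "storage$vid" --op update \
        -n iface.net_ifacename -v "bond0.$vid"
done

# Discover the target through each interface, then log in everywhere;
# each login becomes an independent path for dm-multipath:
for vid in 101 102 103 104; do
    iscsiadm -m discovery -t sendtargets -p <portal-ip> -I "storage$vid"
done
iscsiadm -m node --loginall=all

# multipath -ll should then show one path per storage VLAN for each LUN.
```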

John Florian

Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/