Hi Yaniv,

Sorry for the delay in responding; I wanted to run this by our team to
make sure that you could benefit from their experience as well. I have
included some responses from Ben inline (b>>>>).


On 11/20/2011 08:21 AM, Yaniv Kaul wrote:
> As we are getting some newer gear, I was wondering what is a better setup for 
> network-based storage in our lab (all our hosts will have >1 interface, and 
> our storages - either 4x1Gb or some with 10Gb).
> NFS:
> Am I correct to assume that NFS, from a single host, will not benefit much 
> from bonding, when connected to a single storage domain (= a single mount). 
> Isn't it using a single TCP connection? (which btw implies we might get 
> better perf. with multiple mounts?).

   b>>>> My own experience is that NFS like any IP app can benefit greatly from 
bonding.  The advantage of bonding is that you don't have to manually balance 
application network traffic across NICs, and it provides high-availability in 
event of port or cable failure.   I've seen up to 300 MB/s with a 4-way 1-Gbps 
bond with mode balance-alb (6) or trunking (4) or balance-xor (2).  Each mode 
has its limitations.  I have no experience bonding with 10-GbE, but I see no 
reason why 10-GbE wouldn't work in principle.  Modes 2 (balance by TCP port + 
IP addr) or 6 work best when a node is communicating with multiple other nodes 
in parallel.  Mode 4 requires that the network switch be configured for 
trunking, but is the least restrictive.  Mode 6 uses ARP to load balance peer 
MAC addresses across NICs, but usually works only within a single LAN.
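The modes above can be sketched as a RHEL-style config; the device names, addresses, and mode choice here are illustrative assumptions, not values from this thread:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch; names/addresses are examples)
DEVICE=bond0
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# mode=balance-alb (6); use mode=802.3ad (4) only if the switch is trunk/LACP-capable
BONDING_OPTS="mode=balance-alb miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for each slave NIC, e.g. eth1..eth3)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

After restarting the network, `cat /proc/net/bonding/bond0` shows the active mode and slave states.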

   b>>>> Yes, NFS uses a single TCP connection from the KVM host to the NFS 
server for a given block device, but in my experience with NetApp at least, 
performance was pretty good even so, and it is possible to tune TCP for 
higher-network-latency paths.
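The TCP tuning mentioned above usually means larger socket buffers so a single connection can fill the bandwidth-delay product. A minimal sketch; the values are illustrative, not tuned recommendations:

```shell
# /etc/sysctl.conf sketch -- larger TCP windows for higher-latency storage links.
# Size max buffers to roughly bandwidth * RTT for your path; 16 MB is an example.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# Apply without reboot:  sysctl -p
```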

   b>>>> On the other hand, if you want to reserve network bandwidth for 
particular block devices, not using bonding, or using two pairs of bonded 
NICs, could help.
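One way to realize that separation is to reach the storage server over a different address (and hence a different bond/subnet) per mount. A sketch, assuming the server exposes an address on each subnet; all paths and addresses are examples:

```shell
# /etc/fstab sketch: two NFS mounts reaching the same server over different links
# by using a per-subnet server address (addresses and export paths are examples).
192.168.10.100:/export/domainA  /mnt/domainA  nfs  rw,hard,intr  0 0
192.168.20.100:/export/domainB  /mnt/domainB  nfs  rw,hard,intr  0 0
```

Each mount then gets its own TCP connection and its own physical path, so heavy I/O on one domain cannot starve the other.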

> iSCSI:
> Am I better off configuring multipathing rather than bonding for iSCSI, for 
> the same reason? Configure two IPs on two interfaces on the host, same on the 
> storage? (which btw hints at why iSCSI from the QEMU level is an interesting 
> feature).
> Should I 'cheat' and configure multiple IPs on the bonded interfaces on the 
> storage side?
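The multipathing approach asked about can be sketched with open-iscsi interface binding: one session per NIC, aggregated by dm-multipath. This requires root and a reachable target; eth2/eth3 and the portal address are example names, not values from this thread:

```shell
# Bind one iSCSI interface per NIC (names/addresses are examples).
iscsiadm -m iface -I iface-eth2 --op=new
iscsiadm -m iface -I iface-eth2 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface-eth3 --op=new
iscsiadm -m iface -I iface-eth3 --op=update -n iface.net_ifacename -v eth3

# Discover the target through both interfaces, then log in on both,
# giving two independent sessions (paths) per LUN.
iscsiadm -m discovery -t sendtargets -p 192.168.30.100 -I iface-eth2 -I iface-eth3
iscsiadm -m node -L all

# dm-multipath should now show two paths per LUN.
multipath -ll
```

Unlike bonding, this balances at the SCSI layer, so both links are used even toward a single target, and path failover is handled by multipathd rather than the bonding driver.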
> For both I'm sure jumbo frames would be nice, and we will be using it 
> partially in our labs.
   b>>>> For iSCSI/TCP or NFS/TCP, jumbo frames are essential for bulk-transfer 
workloads.  Just be careful when using non-TCP protocols (e.g. NFS/UDP) with 
jumbo frames.
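A jumbo-frame sketch for completeness: the MTU must match end-to-end (NICs, bond, switch ports, storage), and a do-not-fragment ping is a quick way to verify the path. Device and address are examples:

```shell
# Raise the MTU on the bond (add MTU=9000 to the ifcfg-* file to persist it).
ip link set dev bond0 mtu 9000

# Verify the path really carries 9000-byte frames without fragmentation:
# payload 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
ping -M do -s 8972 -c 3 192.168.10.100
```

If the ping fails with "Frag needed", some hop in the path is still at MTU 1500.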

> TIA,
> Y.

vdsm-devel mailing list