On Thu, Dec 31, 2009 at 8:23 AM, Pasi Kärkkäinen <pa...@iki.fi> wrote:
> If you have just a single iscsi connection/login from the initiator to the
> target, then you'll have only one tcp connection, and that means bonding
> won't help you at all - you'll be only able to utilize one link of the
> bond.
> bonding needs multiple tcp/ip connections for being able to give more
> bandwidth.
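Pasi's point can be made concrete: with a bonding transmit hash policy such as layer3+4, the driver picks a slave link by hashing the connection's addresses and ports, so a single iSCSI TCP connection always maps to the same physical link no matter how many requests flow over it. A toy sketch of the idea (the real kernel hash differs in detail; the endpoint values below are made up):

```python
def bond_slave_for(src_ip_octet, dst_ip_octet, src_port, dst_port, num_slaves):
    """Toy model of a layer3+4 transmit hash: XOR of address/port
    material, reduced modulo the number of slave links.  The real
    bonding driver's hash differs, but it has the same property:
    one TCP 4-tuple -> one slave, always."""
    h = src_ip_octet ^ dst_ip_octet ^ src_port ^ dst_port
    return h % num_slaves

# A single iSCSI session = one TCP 4-tuple = always the same slave:
conn = (10, 20, 51324, 3260)   # made-up initiator/target endpoints
picks = {bond_slave_for(*conn, 2) for _ in range(1000)}
print(len(picks))  # 1 -- every packet of this session uses one link
```

So with one session, the second link in the bond sits idle; only multiple distinct connections can hash onto different slaves.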

That's what I thought, but I figured it was one of the following three:
 - MPIO was (mis)configured and using more overhead than bonding,
 - OR the initiator was firing multiple concurrent requests (which you
   say it doesn't; I'll believe you),
 - OR the SAN was under massively different load between the test runs
   (not too likely, but possible; only one other LUN is in use).

> That seems a bit weird.
That's what I thought, otherwise I would have just gone with it.

> How did you configure multipath? Please paste your multipath settings.
> -- Pasi

Here's the /etc/multipath.conf.  Were there other config options that
you'd need to see?

devnode_blacklist {
        devnode "^sda[0-9]*"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
devices {
        device {
                vendor "EMC "
                product "SYMMETRIX"
                path_grouping_policy multibus
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                path_selector "round-robin 0"
                features "0"
                hardware_handler "0"
                failback immediate
        }
        device {
                vendor "DGC"
                product "*"
                path_grouping_policy group_by_prio
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler "1 emc"
                features "1 queue_if_no_path"
                no_path_retry 300
                path_checker emc_clariion
                failback immediate
        }
}
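For what it's worth, the multibus + "round-robin 0" combination is what should spread I/O across the paths: dm-multipath issues a batch of requests (rr_min_io, commonly defaulting to 1000 on these kernels) down one path, then rotates to the next. A toy sketch of that rotation (path names and numbers here are illustrative, not from my setup):

```python
from collections import Counter

def round_robin_dispatch(num_ios, paths, rr_min_io=1000):
    """Toy model of dm-multipath's "round-robin 0" selector in a
    multibus group: send rr_min_io requests down one path, then
    rotate to the next path in the group."""
    counts = Counter()
    current = 0            # index of the path currently in use
    issued = 0             # requests issued on the current path so far
    for _ in range(num_ios):
        counts[paths[current]] += 1
        issued += 1
        if issued == rr_min_io:
            current = (current + 1) % len(paths)
            issued = 0
    return counts

print(round_robin_dispatch(4000, ["sdb", "sdc"]))
# Counter({'sdb': 2000, 'sdc': 2000}) -- both paths carry half the I/O
```

If the real setup shows one path doing all the work instead, that would point back at the misconfiguration theory above.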


You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.