Using a single path (without MPIO) as a baseline:

With bonding I saw, on average, 99-100% of the speed of a single path
(worst case 78%).
With MPIO (2 NICs) I saw, on average, 82% of the speed of the single
path (worst case 66%).
With MPIO using one NIC (the second downed via ifconfig), I saw, on
average, 86% of the speed of the single path (worst case 66%).
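
In case it's useful, the one-NIC MPIO run was done by downing the
second interface and letting dm-multipath fail over to the remaining
path; roughly like this (the interface name below is just a
placeholder, not my exact setup):

    # take the second iSCSI NIC offline so only one path stays active
    ifconfig eth1 down
    # check that multipath now reports one active path and one failed path
    multipath -ll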

There were situations where bonding and MPIO both scored slightly
higher than the single path, but that is most likely due to
differences on the array, since the tests weren't run back to back.
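
For reference, the numbers above came from iozone throughput runs
against the mounted LUN; something along these lines, where the file
size, record size, and mount point are illustrative rather than the
exact values I used:

    # sequential write and read throughput on the iSCSI-backed filesystem
    iozone -i 0 -i 1 -s 8g -r 1m -f /mnt/iscsi/testfile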

--Kyle

On 1/6/10, Mike Christie <[email protected]> wrote:
> On 12/30/2009 11:48 AM, Kyle Schmitt wrote:
>> On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie<[email protected]>
>> wrote:
>>>> So far single connections work: If I setup the box to use one NIC, I
>>>> get one connection and can use it just fine.
>>> Could you send the /var/log/messages for when you run the login command
>>> so I can see the disk info?
>>
>> Sorry for the delay.  In the meantime I tore down the server and
>> reconfigured it using ethernet bonding.  It worked and, according to
>> iozone, provided moderately better throughput than the single
>> connection I got before.  Moderately.  Measurably.  Not significantly.
>>
>> I tore it down after that and reconfigured again using MPIO, and
>> funnily enough, this time it worked.  I can access the LUN now using
>> two devices (sdb and sdd), and both ethernet devices that connect to
>> the iSCSI storage show traffic.
>>
>> The weird thing is that, aside from writing, bonding was measurably
>> faster than MPIO.  Does that seem right?
>>
>
> With MPIO are you seeing the same throughput you would see if you only
> used one path at a time?
>