Jesper Krogh wrote:
> Mike Christie wrote:
>   
>> Jesper Krogh wrote:
>>     
>>> Hi.
>>>
>>> I'm new to these iSCSI things. But the SAN I'm testing against has three
>>> 1Gbit interfaces, all put together as one "portal ip-address".
>>>
>>> I have four 1Gbit interfaces in my main host. What is the preferred way of
>>> getting I/O above 100MB/s in a setup like this? Bonding the NICs under
>>> Linux, or does the iSCSI initiator need to be given some details?
>>>
>>>       
>> You can also do a block-level approach. If you use the current tools on 
>> open-iscsi.org, then you can set up the initiator so we create a session 
>> through each NIC on the main host that connects to the one portal 
>> ip-address on the target (see the iface info in the iscsi README). Then 
>> you can use dm-multipath to round-robin over all those sessions.
>>
>> I have not done any benchmarking of dm-multipath + iscsi iface vs bonding.
>>     
>
> Sounds interesting. Thanks a lot for the information.
>
>   
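To make Mike's suggestion concrete, the iface setup with the current
open-iscsi tools looks roughly like this (the iface names, eth0/eth1 and
the portal address are just placeholders - see the README he mentions for
the authoritative steps):

    # one iface record per local NIC (names and interfaces are examples)
    iscsiadm -m iface -I iface0 --op=new
    iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
    iscsiadm -m iface -I iface1 --op=new
    iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

    # discover through both ifaces, then login creates one session per iface
    iscsiadm -m discovery -t st -p 10.0.0.1 -I iface0 -I iface1
    iscsiadm -m node -L all

Each session then shows up as its own SCSI device, and dm-multipath can
round-robin over them.
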
I'm doing a bit of both NIC bonding and dm-multipath, and I don't seem 
to get 2Gbit/s out of the dm-multipath setup, but it is certainly over 
1Gbit/s.  I'd say I usually see somewhere around 1.3-1.4Gbit/s, but as 
always the block sizes and other variables play a huge role.  For us 
dm-multipath was the right choice because it lets us use two switches 
and split the fabric into two networks for resiliency.  That split may 
also cost me a bit of speed, so you might see more than I do.  If you 
bond the NICs it seems to work fine too, and iSCSI does not need to 
know about it.  Sorry, I don't have any good benchmarks for a purely 
bonded-NIC setup.
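For reference, a minimal multipath.conf sketch for round-robining over
those sessions might look something like this (the vendor/product strings
and the rr_min_io value are examples rather than tuned recommendations -
match them to what your target actually reports):

    devices {
            device {
                    vendor               "IET"
                    product              "VIRTUAL-DISK"
                    path_grouping_policy multibus
                    path_selector        "round-robin 0"
                    rr_min_io            100
            }
    }

multibus puts every path into one priority group, so I/O is spread across
all of the sessions instead of just failing over between them.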

One interesting thing we found, though, was that with dm-multipath and 
iSCSI we would get lots of path failures when the iSCSI target (IET in 
our case) ran with blockio and write-through caching.  I'm guessing 
that the dm-multipath path checkers ("readsector0" and "tur") have a 
limited timeout that can be exceeded when you don't allow the cache to 
help out.  The real danger here is that in certain situations 
dm-multipath would fail out all possible paths, making the initiator 
very unhappy.  I don't think this is the fault of dm-multipath, and I 
didn't look very far into tuning its timeout values.  When we switched 
to fileio and write-back caching we no longer had path failures - so 
the fix was easy, but not necessarily intuitive.
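We didn't pursue the tuning route, but for anyone who wants to stay on
blockio, the obvious knobs are on the multipath side; the fileio change
that worked for us is just the Lun line on the IET side.  Both snippets
below are sketches with placeholder names and values, not our production
configs:

    # /etc/multipath.conf - queue and retry I/O instead of failing a
    # path at the first slow checker response
    defaults {
            polling_interval 10
            no_path_retry    12
    }

    # /etc/ietd.conf - fileio goes through the page cache (write-back),
    # which is what made the checker timeouts go away for us
    Target iqn.2007-08.com.example:storage.lun0
            Lun 0 Path=/dev/vg0/iscsi_lv,Type=fileio
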

I'd be really curious to hear if anyone else sees this problem, with IET 
or with a different iSCSI target.  (That's not meant to hijack this 
thread - but if it's germane I'm curious about other multipath results.)

-Ty!


-- 
-===========================-
  Ty! Boyack
  NREL Unix Network Manager
  [EMAIL PROTECTED]
  (970) 491-1186
-===========================-

