On Fri, 2010-06-25 at 11:06 -0700, Patrick J. LoPresti wrote:
> On Jun 23, 12:41 pm, Christopher Barry
> <christopher.ba...@rackwareinc.com> wrote:
> >
> > Absolutely correct. What I was looking for were comparisons of the
> > methods below, and wanted subnet stuff out of the way while discussing
> > that.
> 
> Ah, I see.
> 
> Well, that is fine (even necessary) for the port bonding approach, but
> for multi-path I/O (whether device-mapper or proprietary) it will
> probably not do what you expect.  When Linux has two interfaces on the
> same subnet, in my experience it tends to send all traffic through
> just one of them.  So you will definitely want to split up the subnets
> before testing multi-path I/O.
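
Good point about the single subnet. For my own sanity-checking I had
something like this in mind: a rough Python sketch (made-up addresses)
that just warns if two host interfaces land on the same subnet before
any multipath testing starts.

#!/usr/bin/env python
# Rough sketch: warn if two host interfaces share an IPv4 subnet.
# Uses the stdlib ipaddress module (Python 3); the addresses below are
# placeholders, not a real configuration.
import ipaddress

ifaces = {
    "eth0": "192.168.10.11/24",
    "eth1": "192.168.11.11/24",
    "eth2": "192.168.12.11/24",
    "eth3": "192.168.13.11/24",
}

seen = {}
for name, cidr in ifaces.items():
    net = ipaddress.ip_interface(cidr).network
    if net in seen:
        print("WARNING: %s and %s are both on %s" % (name, seen[net], net))
    else:
        seen[net] = name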
> 
> > Here I do not understand your reasoning. My understanding was I would
> > need a session per iface to each portal to survive a controller port
> > failure. If this assumption is wrong, please explain.
> 
> I may have misunderstood your use of "portal".  I was thinking in the
> RFC 3720 sense of "IP address".
> 
> So you have four IP addresses on the RAID, and four IP addresses on
> the Linux host.  You have made all of your SCSI target devices visible
> as logical units on all four addresses on the RAID.  So to get fully
> redundant paths, you only need to connect each of the four IP
> addresses on the Linux host to a single IP address on the RAID.  (So
> Linux will see each logical unit four times.)
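
So, concretely, something like the rough Python sketch below: one
open-iscsi iface bound to each NIC, and each iface logging in to a
single RAID portal, for four sessions per LU. The target IQN, NIC
names and portal addresses are placeholders, and there is no error
handling.

#!/usr/bin/env python
# Rough sketch: bind one open-iscsi iface per NIC and log each one in
# to a single portal, giving four sessions (paths) per LU rather than
# sixteen.  Target IQN, NICs and portal addresses are placeholders.
import subprocess

TARGET = "iqn.2000-01.com.example:raid"
pairs = [                      # (local NIC, RAID portal)
    ("eth0", "192.168.10.100"),
    ("eth1", "192.168.11.100"),
    ("eth2", "192.168.12.100"),
    ("eth3", "192.168.13.100"),
]

for nic, portal in pairs:
    iface = "iface-" + nic
    # create an iface record and tie it to the physical NIC
    subprocess.check_call(["iscsiadm", "-m", "iface", "-I", iface,
                           "--op=new"])
    subprocess.check_call(["iscsiadm", "-m", "iface", "-I", iface,
                           "--op=update", "-n", "iface.net_ifacename",
                           "-v", nic])
    # discover through that iface/portal pair only, then log in
    subprocess.check_call(["iscsiadm", "-m", "discovery",
                           "-t", "sendtargets", "-p", portal,
                           "-I", iface])
    subprocess.check_call(["iscsiadm", "-m", "node", "-T", TARGET,
                           "-p", portal, "-I", iface, "--login"])

If that is roughly right, dm-multipath should then see those four
paths per LU and coalesce them into a single map.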
> 
> I thought you were saying you would initiate a connection from each
> host IP address to every RAID IP address (16 connections).  That would
> cause each LU to show up 16 times, thus being harder to manage,
> with no advantages in performance or fault-tolerance.  But now it
> sounds like that is not what you meant :-).

Thanks for your reply, Pat.

Actually, you were correct in your first assumption - I was indeed
thinking that I would need to 'login' from each iface to each portal in
order for the initiator to know about all of the paths. That was likely
because I was 'simplifying' :) by using a single subnet in my example.
In reality there would be multiple subnets, so a full mesh of logins
could not work efficiently anyway, since routing would come into play.
Thank you for clearing that up for me.

At the end of the day, I am trying to automagically find the optimal
configuration for the type of storage available (i.e. does it support
the proprietary MPIO driver I need to work with, dm-multipath, or just a
straight connection), which NICs on which subnets are available on the
host, how those relate to the portals the host can see, and whether
bonding would be desirable. The matrix of possibilities is somewhat
daunting...
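
The NIC-to-portal matching piece at least seems scriptable. The rough
Python sketch below (placeholder addresses again) is the direction I
had in mind: pair each host NIC with the discovered portals that sit
on its subnet, so each iface only logs in where the target is directly
reachable.

#!/usr/bin/env python
# Rough sketch: pair host NICs with the discovered portals that share
# their subnet.  Uses the stdlib ipaddress module (Python 3); all
# addresses are placeholders.
import ipaddress

host_nics = {
    "eth0": "192.168.10.11/24",
    "eth1": "192.168.11.11/24",
}
portals = ["192.168.10.100", "192.168.11.100", "192.168.12.100"]

for nic, cidr in host_nics.items():
    net = ipaddress.ip_interface(cidr).network
    local = [p for p in portals if ipaddress.ip_address(p) in net]
    print("%s (%s): portals %s" % (nic, net, ", ".join(local) or "none"))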

> 
> > this is also something I am uncertain about. For instance, in the
> > balance-alb mode, each slave will communicate with a remote ip
> > consistently. In the case of two slaves, and two portals how would the
> > traffic be apportioned? would it write to both simultaneously? could
> > this corrupt the disk in any way? would it always only use a single
> > slave/portal?
> 
> This is what I meant by being "at the mercy of the load balancing
> performed by the bonding".
> 
> If I understand the description of "balance-alb" correctly, outgoing
> traffic will be more-or-less round-robin; it tries to balance the load
> among the available interfaces, without worrying about keeping packets
> in order.  If packets wind up out of order, TCP will put them back in
> order at the other end, possibly (probably?) at the cost of some
> performance.
> 
> Inbound traffic from any particular portal will go to a single slave.
> But there is no guarantee that the traffic will then be properly
> balanced.
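
That is my worry too, and I suppose the only way to know is to
measure. The rough Python sketch below (placeholder slave names)
samples the per-slave byte counters from sysfs while an I/O test runs
in another window, which should show how evenly balance-alb actually
spreads the traffic.

#!/usr/bin/env python
# Rough sketch: sample per-slave rx/tx byte counters from sysfs to see
# how evenly a balance-alb bond spreads iSCSI traffic.  Slave names
# are placeholders.
import time

slaves = ["eth0", "eth1"]

def read_bytes(nic, direction):
    path = "/sys/class/net/%s/statistics/%s_bytes" % (nic, direction)
    with open(path) as f:
        return int(f.read())

before = {}
for nic in slaves:
    before[nic] = (read_bytes(nic, "rx"), read_bytes(nic, "tx"))

time.sleep(10)   # run the I/O test elsewhere during this window

for nic in slaves:
    rx = read_bytes(nic, "rx") - before[nic][0]
    tx = read_bytes(nic, "tx") - before[nic][1]
    print("%s: rx %d bytes, tx %d bytes" % (nic, rx, tx))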
> 
> The advantage of multipath I/O is that it can balance the traffic at
> the level of SCSI commands.  I suspect this will be both faster and
> more consistent, but again, I have not actually tried using bonding.
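
For the dm-multipath case I was planning to simply eyeball the path
selector and path states with a small wrapper around multipath -ll,
roughly like the sketch below. This assumes multipath-tools is
installed and the maps are already set up.

#!/usr/bin/env python
# Rough sketch: print the path-selector policy and per-path status
# lines from `multipath -ll`, to confirm the load is being spread at
# the SCSI command level across the available paths.
import subprocess

out = subprocess.check_output(["multipath", "-ll"]).decode()
for line in out.splitlines():
    if "policy=" in line or "active" in line or "failed" in line:
        print(line)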
> 
>  - Pat
> 


