On Wed, 2010-06-23 at 10:28 -0400, Christopher Barry wrote:
> Hello,
>
> I'm implementing some code to automagically configure iSCSI connections
> to a proprietary array. This array has its own specific MPIO drivers,
> and does not support DM-Multipath. I'm trying to get a handle on the
> differences in redundancy provided by the various layers involved in the
> connection from host to array, in a generic sense.
>
> The array has two iSCSI ports per controller, and two controllers. The
> targets can be seen through any of the ports. For simplicity, all ports
> are on the same subnet.
>
> I'll describe a series of scenarios; maybe someone can speak to their
> level of usefulness, redundancy, gotchas, nuances, etc.:
>
> scenario #1
> Single NIC, default iface, login to all controller portals.
>
> scenario #2
> Dual NIC, iface per NIC, login to all controller portals from each iface
>
> scenario #3
> Two bonded NICs in mode balance-alb
> Single NIC, default iface, login to all controller portals.

Correction inline for scenario #3: single bonded interface, not single NIC.

> scenario #4
> Dual NIC, iface per NIC, MPIO driver, login to all controller portals
> from each iface
>
> Appreciate any advice,
> Thanks,
> -C
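For what it's worth, the per-NIC iface setup in scenario #2 usually looks something like the sketch below with iscsiadm. The interface names (eth0/eth1) and the portal address are placeholders, not taken from your array, so adjust to your environment:

```shell
# Create one open-iscsi iface record per physical NIC (names are arbitrary).
iscsiadm -m iface -o new -I iface-eth0
iscsiadm -m iface -o update -I iface-eth0 -n iface.net_ifacename -v eth0
iscsiadm -m iface -o new -I iface-eth1
iscsiadm -m iface -o update -I iface-eth1 -n iface.net_ifacename -v eth1

# Discover the targets through both ifaces; 192.168.1.10 is a placeholder
# portal address on your storage subnet.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10 -I iface-eth0 -I iface-eth1

# Log in to every discovered portal from each iface.
iscsiadm -m node --loginall=all
```

This gives you one session per (iface, portal) pair, which is what a multipath layer on top would then aggregate.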