On Fri, 2010-06-11 at 23:17 +0000, Raj wrote:
> Nicholas A. Bellinger <n...@...> writes:
> 
> > Btw, just for those following along, here is what MC/S and ERL=2 when
> > used in combination (yes, they are complementary) really do:
> > 
> > http://linux-iscsi.org/builds/user/nab/Inter.vs.OuterNexus.Multiplexing.pdf
> > 
> > Also, I should mention in all fairness that my team was the first to
> > implement both a Target and Initiator capable of MC/S and
> > ErrorRecoveryLevel=2 running on Linux, and the first target capable of
> > running MC/S from multiple initiator implementations.
> > 
> 
> But the end result is what?  open-iSCSI still doesn't have MC/S, even
> though it is useful?

So without going into a multi-year history lesson as to why MC/S is not
currently supported in Open-iSCSI, what it boils down to is this:

MC/S (or InterNexus multiplexing) is very useful for scaling iSCSI
sessions across multiple cores and network ports with minimal overhead
compared with traditional link layer bonding, not to mention that the
individual iSCSI TCP connections can independently traverse multiple
different subnets and backbone providers.  It also incurs less overhead
and fewer kernel thread context switches than host OS dependent MPIO,
and allows both the number of LUNs and the bandwidth of individual LUNs
to scale across groups of TCP TX/RX kernel threads for a single iSCSI
session (I_T Nexus).
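
To make the "single session, multiple connections" relationship
concrete, here is a minimal sketch of the data involved; the names
(mcs_session, mcs_conn, and so on) are purely illustrative, not the
actual Core-iSCSI or LIO-Target structures:

#include <pthread.h>

/*
 * Illustrative only: one iSCSI session (I_T Nexus) owning multiple TCP
 * connections under MC/S.  CmdSN ordering is session-wide; each
 * connection has its own socket, CID, and TX/RX threads, and may ride
 * a different subnet or provider than its siblings.
 */
struct mcs_conn {
        int             cid;            /* connection ID (CID) from login */
        int             sock_fd;        /* this connection's own TCP socket */
        pthread_t       tx_thread;      /* per-connection TX thread */
        pthread_t       rx_thread;      /* per-connection RX thread */
};

struct mcs_session {
        unsigned int    cmd_sn;         /* session-wide CmdSN */
        unsigned int    max_cmd_sn;     /* CmdSN window limit from the target */
        int             erl;            /* negotiated ErrorRecoveryLevel (0/1/2) */
        int             nr_conns;       /* active connections in this I_T Nexus */
        struct mcs_conn *conns[8];      /* bounded by negotiated MaxConnections */
};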

MC/S is complementary to legacy symmetric and new SPC-4 ALUA MPIO, and
the two can be used together in accordance with RFC 3720.  MC/S
functions independently of the negotiated ErrorRecoveryLevel, but if
one connection fails at ERL=0 then all of the session's connections
must be restarted with session reinstatement (a restart of the whole
I_T Nexus).
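
As a rough sketch of that ERL=0 vs ERL=2 difference, reusing the
illustrative types above (the mcs_* helpers here are assumptions for
the example, not Open-iSCSI or LIO-Target calls):

/* Assumed helpers: restart the whole nexus, or reassign the failed
 * connection's outstanding tasks to a surviving connection. */
void mcs_session_reinstatement(struct mcs_session *sess);
void mcs_reassign_tasks(struct mcs_session *sess, struct mcs_conn *failed);

void mcs_handle_conn_failure(struct mcs_session *sess, struct mcs_conn *failed)
{
        if (sess->erl == 0) {
                /*
                 * ERL=0: no recovery within the nexus.  Every connection
                 * comes down and the I_T Nexus is restarted via session
                 * reinstatement (a fresh login).
                 */
                mcs_session_reinstatement(sess);
        } else if (sess->erl == 2) {
                /*
                 * ERL=2: connection recovery.  Commands with allegiance
                 * to the failed connection are reassigned to a surviving
                 * one, and the failed connection can be relogged in
                 * later without restarting the session.
                 */
                mcs_reassign_tasks(sess, failed);
        }
}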

The Core-iSCSI MC/S Initiator has existed out of tree for Linux v2.6
for about 5 years now, and has been ported to a number of different
embedded devices and server class architectures.  MC/S support is
included in every install of Microsoft Windows, including their Hyper-V
offering (and is used for the 1 million IOPS Windows iSCSI initiator).
MC/S is also now becoming a hard requirement for hypervisor level iSCSI
initiators (whether KVM or otherwise Linux based) in order to scale
bandwidth across a number of CPU cores on small pipes (4x 1 Gb/sec) and
on bigger pipes (10 Gb/sec) with traditional iSCSI.

MC/S is available from select iSCSI target implementations, including
all versions of the open source LIO-Target stack and TCM fabric module
(v2.x, v3.x and v4.x-rc), from certain NetApp hardware, and probably
from other big array vendors who compete with NetApp at this point.

In terms of implementing MC/S support in Open-iSCSI, the initiator side
fast path for MC/S is straightforward enough, eg: once the CmdSN window
has opened, assign the incoming struct scsi_cmnd's allegiance to one
active iSCSI connection, and queue it to that connection's TX thread.
I will say the user <-> kernel netlink design for the control path does
make this a little more interesting, because with MC/S, iSCSI
connections can be brought up and torn down on the fly at any time,
which also adds a certain level of complexity to the shutdown path for
an iSCSI Initiator.
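
For example, here is a minimal sketch of that fast path, again reusing
the illustrative types above (sna_lte() and queue_to_tx_thread() are
assumed helpers, not existing Open-iSCSI symbols):

struct scsi_cmnd;       /* opaque here; the real type is in <scsi/scsi_cmnd.h> */

/* Assumed helper: enqueue cmd and wake that connection's TX thread. */
void queue_to_tx_thread(struct mcs_conn *conn, struct scsi_cmnd *cmd);

/* Serial number arithmetic (RFC 1982): a <= b in 32-bit CmdSN space. */
static int sna_lte(unsigned int a, unsigned int b)
{
        return (int)(b - a) >= 0;
}

int mcs_queue_cmd(struct mcs_session *sess, struct scsi_cmnd *cmd)
{
        struct mcs_conn *conn;

        /* The CmdSN window is open while CmdSN <= MaxCmdSN (RFC 3720). */
        if (!sna_lte(sess->cmd_sn, sess->max_cmd_sn))
                return -1;      /* window closed: defer the command */

        /* Pick one active connection; simple round-robin over the nexus. */
        conn = sess->conns[sess->cmd_sn % sess->nr_conns];
        sess->cmd_sn++;         /* CmdSN advances per non-immediate command */

        /*
         * Connection allegiance: every PDU for this task now travels
         * over this connection (CID) only.
         */
        queue_to_tx_thread(conn, cmd);
        return 0;
}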

Anyways, this is something that I have been considering for a while,
and I think it will eventually happen, either by myself or by someone
else doing bringup against the existing stable LIO-Target MC/S logic.

Best,

--nab
