On Fri, Jun 11, 2010 at 03:30:26PM -0400, Miles Nordin wrote:
> >>>>> "pk" == Pasi Kärkkäinen <pa...@iki.fi> writes:
> 
>     >>> You're really confused, though I'm sure you're going to deny
>     >>> it.
> 
>     >>  I don't think so.  I think that it is time to reset and reboot
>     >> yourself on the technology curve.  FC semantics have been
>     >> ported onto ethernet.  This is not your grandmother's ethernet
>     >> but it is capable of supporting both FCoE and normal IP
>     >> traffic.  The FCoE gets per-stream QOS similar to what you are
>     >> used to from Fibre Channel.
> 
> FCoE != iSCSI.
> 
> FCoE was not being discussed in the part you're trying to contradict.
> If you read my entire post, I only bring up FCoE at the end, and say
> more or less ``I am mentioning FCoE here only so you don't try to
> throw out my entire post by dragging FCoE into the mix and latching
> onto some corner case that doesn't apply to the OP,'' which is
> exactly what you did.  I'm guessing you fired off a reply without
> reading the whole thing?
> 
>     pk> Yeah, today enterprise iSCSI vendors like Equallogic (bought
>     pk> by Dell) _recommend_ using flow control. Their iSCSI storage
>     pk> arrays are designed to work properly with flow control and
>     pk> perform well.
> 
>     pk> Of course you need proper ("certified") switches as well.
> 
>     pk> Equallogic says the delays from flow control pause frames are
>     pk> shorter than tcp retransmits, so that's why they're using and
>     pk> recommending it.
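
To put rough numbers on that claim, here is a back-of-the-envelope
sketch in Python.  It assumes the standard 802.3x pause quantum of 512
bit times, the 16-bit pause_time field, and Linux's default 200 ms
minimum TCP retransmission timeout; it is only an illustration of the
orders of magnitude involved, not a measurement.

    # Worst-case stall from a single 802.3x PAUSE frame: the frame
    # carries a 16-bit pause_time counted in "pause quanta", and one
    # quantum is 512 bit times.
    PAUSE_QUANTUM_BITS = 512
    MAX_QUANTA = 0xFFFF

    for name, rate_bps in [("1 GbE", 1e9), ("10 GbE", 10e9)]:
        max_pause_s = MAX_QUANTA * PAUSE_QUANTUM_BITS / rate_bps
        print(f"{name}: max single PAUSE ~ {max_pause_s * 1e3:.2f} ms")

    # Compare with recovering a dropped frame via TCP: Linux clamps the
    # retransmission timeout to a 200 ms minimum (TCP_RTO_MIN), and
    # RFC 6298 starts a fresh connection at 1 s.
    print("typical minimum TCP RTO: 200 ms")

So a worst-case PAUSE stalls a port for roughly 34 ms at 1 GbE and
3.4 ms at 10 GbE, versus at least 200 ms for a retransmit timeout.
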
> 
> Please have a look at the three links I posted showing that flow
> control is not used the way you think it is by any serious switch
> vendor, and the explanation of why this limitation is fundamental,
> not something that can be overcome by the ``technology curve.''  It
> will not hurt anything to allow autonegotiation of flow control on
> non-broken switches, so I'm not surprised they recommend it with
> ``certified'' known-non-broken switches, but it also will not help
> unless your switches have input/backplane congestion, which they
> usually don't, or your end host is able to generate PAUSE frames for
> PCIe congestion, which is maybe more plausible.  In particular it
> won't help with the typical ``incast'' case from the experiment in
> the FAST incast paper URL I gave, because they narrowed what was
> happening in their experiment down to OUTPUT queue congestion, which
> (***MODULO FCoE***, mr ``reboot yourself on the technology curve'')
> never invokes Ethernet flow control.
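
One way to sanity-check whether PAUSE frames are even in play on a
given host is to look at what the NIC negotiated and what it has
counted.  A minimal sketch, assuming Linux with ethtool on the PATH; the
interface name "eth0" is hypothetical and the rx_pause/tx_pause
statistic names are driver-dependent, so treat them as assumptions
rather than a portable API:

    import subprocess

    def pause_settings(iface="eth0"):
        # "ethtool -a" reports the negotiated pause parameters
        # (Autonegotiate / RX / TX on|off) as the driver sees them.
        out = subprocess.run(["ethtool", "-a", iface],
                             capture_output=True, text=True,
                             check=True).stdout
        print(out)

    def pause_counters(iface="eth0"):
        # "ethtool -S" dumps NIC statistics; many (not all) drivers
        # expose counters with "pause" in the name.
        out = subprocess.run(["ethtool", "-S", iface],
                             capture_output=True, text=True,
                             check=True).stdout
        for line in out.splitlines():
            if "pause" in line.lower():
                print(line.strip())

    if __name__ == "__main__":
        pause_settings()
        pause_counters()

If the counters stay at zero under load, flow control is negotiated but
nothing on the path is actually asserting it.
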
> 
> HTH.
> 
> ok let me try again:
> 
> yes, I agree it would not be stupid to run iSCSI+TCP over a CoS with
> blocking storage-friendly buffer semantics if your FCoE/CEE switches
> can manage that, but I would like to hear of someone actually DOING it
> before we drag it into the discussion.  I don't think that's happening
> in the wild so far, and it's definitely not the application for which
> these products have been flogged.
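
For what it's worth, a purely illustrative sketch of the idea being
discussed, steering iSCSI/TCP into its own CoS, assuming Linux: the
SO_PRIORITY socket option sets a priority that a VLAN egress-qos-map
can translate into an 802.1p PCP bit for a DCB/CEE switch to classify
on.  The target address and priority value below are made up, and real
initiators such as open-iscsi do not expose this knob directly.

    import socket

    ISCSI_PORT = 3260          # standard iSCSI target port
    STORAGE_PRIORITY = 4       # hypothetical priority mapped to the
                               # lossless class on the switch

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Linux-specific: tag this socket's traffic with a priority that an
    # egress-qos-map on a VLAN interface can turn into an 802.1p PCP.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY,
                 STORAGE_PRIORITY)
    s.connect(("192.0.2.10", ISCSI_PORT))   # RFC 5737 example address
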
> 
> I know people run iSCSI over IB (possibly with RDMA for moving the
> bulk data rather than TCP), and I know people run SCSI over FC, and of
> course SCSI (not iSCSI) over FCoE.  Remember the original assertion
> was: please try FC as well as iSCSI if you can afford it.
> 
> Are you guys really saying you believe people are running ***iSCSI***
> over the separate HOL-blocking hop-by-hop pause frame CoS's of FCoE
> meshes?  or are you just spewing a bunch of noxious white paper
> vapours at me?  because AIUI people using the
> lossless/small-output-buffer channel of FCoE are running the FC
> protocol over that ``virtual channel'' of the mesh, not iSCSI, are
> they not?

I was talking about iSCSI over TCP over IP over Ethernet. No FCoE. No IB.

-- Pasi

