Thanks for the feedback, Messrs. Fisk and Nagle,

I think a problem for IT folks who are hearing early statements about SANs
based on Gigabit Ethernet (GE) has to do with an issue to which you both
alluded.
Specifically, what parameters -- bandwidth, throughput, latency, etc. --
must designers consider when evaluating or building a storage networking
interconnect?

Put another way, when I design an aircraft, I know about lift, drag and
other engineering parameters and can plug them into calculations that will
enable me to design a wing to lift X number of pounds.  When it comes to
storage networks, I cannot get a straight answer from any vendor regarding
the parameters that must be observed or satisfied -- whether stated as
straightforward quantities and formulas or general rules of thumb -- in order to
develop a working storage networking interconnect!

What factors determine the amount of bandwidth required?
What amount of latency can be tolerated?
How quickly must data received in error be retransmitted?
Does this all depend upon the characteristics of the storage traffic itself?
Are the parameters application dependent?
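
To illustrate the kind of answer I am after: I would settle for even a
crude rule of thumb, something like "required bandwidth is roughly the
expected I/O rate times the average transfer size, plus an allowance for
protocol overhead."  By that reasoning, a transaction workload doing
10,000 I/Os per second at 8 KB per transfer would need on the order of
640 Mbps before overhead -- numbers I have invented purely for
illustration.  Even that level of back-of-the-envelope guidance seems
impossible to extract from the vendors.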

Surely, traditional SCSI bus design delivered a solution that must be
equaled or improved upon by SAN interconnects such as FC, GE, or InfiniBand
for the latter to be regarded as viable alternatives.  This can't
be rocket science:  Is there a convenient set of storage architecture design
parameters here that I am simply overlooking?

I have no axe to grind with the FC folks, but there seems to be a holy war
shaping up around FC versus GE as a storage interconnect.  I am tracking
strategies for TCP offload and ASIC speed-up, and I agree that TCP/IP
functionality can be optimized to support the use of GE and 10 Gigabit
Ethernet as both a SAN and a LAN interconnect.  I find such a solution
to be quite practical,
but I do not say that FC is inferior.  There are many roads that lead to
Rome, as the saying goes.  What is of concern to me (and to my readers) is
to avoid deploying a technology (FC, for example) that may need to be
"forklift upgraded" within a year.

Please let me know if you are aware of any storage networking design
criteria that must be addressed by any interconnect regardless of its
underlying protocol.

Thanks,

Jon Toigo
Independent Consultant and Author
The Holy Grail of Data Storage Management
[EMAIL PROTECTED]

----- Original Message -----
From: Dave Nagle <[EMAIL PROTECTED]>
To: Jon William Toigo <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, May 25, 2000 7:27 PM
Subject: Re: Storage over Ethernet/IP


> Jon,
>
> Original Message
> ----------------
>  >> I am seeking a few points of clarification:
>  >>
>  >> 1.  Fibre Channel folks have attempted to explain to me why TCP/IP could
>  >> NEVER be a viable interconnect for block level storage operations.  They
>  >> claim:
>  >> a.  TCP is too CPU intensive and creates too much latency for storage
>  >> I/O operations.
>  >> b.  The IP stack is too top heavy and processing packet headers is too
>  >> slow to support storage I/O operations.
>
>   There is a lot of work to show that this is not true.  Check out Van
> Meter's 1998 ASPLOS paper "VISA: Netstation's Virtual Internet SCSI
> Adapter."
>
>  Perhaps more importantly, there are many companies that are building
> TCP in silicon ASICs.  This should make TCP's performance comparable
> to Fibre Channel.  Both TCP/IP and FC provide about the same
> functionality ... reliable, in-order transmission.
>
> The bottom line is that FC is done in hardware while TCP has
> traditionally been done in software. Therefore, previous performance
> numbers are not going to be fair.  Once TCP is in silicon, its
> performance should be roughly equal to FC.
>
>  >> c.  The maximum throughput of a GE TCP/IP connection is 768 Mbps, which
>  >> is too slow to support storage I/O operations.
>
>  I believe there are higher numbers (especially with Jumbo
> Frames). Alteon's web site shows 920 Mbps.  Microsoft and Duke
> University have both demonstrated TCP performance of 1 Gb/s or more
> over other networks.
>
>   BTW, why is 768 Mbps too slow for storage?  Many apps (e.g.,
> transaction workloads) are bound by I/Os per second, not bandwidth.
> Also, even if storage over IP/Ethernet is a bit slower than FC,
> the benefits of leveraging IP's infrastructure (i.e., routers,
> switches, NICs, network management, networking people) are a huge
> advantage.
>
>  There is also the issue of SCSI over TCP/IP in the SAN vs. the
> LAN/WAN.  Some companies, focusing on the SAN, are building
> SCSI/lightweight transport/IP while others, focusing on the WAN,
> propose SCSI/TCP/IP.  It may be the case that SAN and WAN traffic use
> different transport protocols to gain a bit of extra performance in
> the SAN.
>
>  >> Is any of this true?
>  >>
>  >> 2.  Adaptec has posited a replacement for TCP called STP for use as a
>  >> transport for storage.  Does anyone know anything about this?
>
>     From Paul von Stamwitz's posting to the ips mailing list ...
>
>       The link to the SEP draft is
>       http://www.ietf.org/internet-drafts/draft-wilson-sep-00.txt
>
>       The press release is at:
>       http://www.adaptec.com/adaptec/press/release000504.html
>
>     The demo shows a Gb Ethernet controller transporting SCSI traffic to
>     several targets through an off-the-shelf 100TX switch with a Gb uplink.
>     The targets are Ethernet to U160 SCSI bridges with one or more SCSI
>     drives attached.  The host controller runs under NT 4.0 and appears to
>     the OS as a SCSI host bus adapter.
>
>     The architecture is based on Adaptec's SCSI Encapsulation Protocol
>     (SEP).  SEP is mapped on top of TCP/IP or a light-weight transport
>     protocol specifically designed for SANs.
>
>     An SEP overview was presented at the IPS BOF in Adelaide last month and
>     an internet draft on SEP was submitted to IETF this week.  I will forward
>     the link as soon as it becomes available.  This draft is informational
>     only and intended to aid in this group's work toward an industry
>     standard SCSI transport protocol over IP networks.
>
>
>  >> 3.  Current discussions of the SCSI over IP protocol seem to ignore the
>  >> issue of TCP or any other transport protocol.  Does anyone know
>  >> definitively what transport is being suggested by the IBM/Cisco crowd?
>
>    Current SCSI over IP discussions are not ignoring TCP ... they are
>    definitely considering TCP as the primary transport.  See the ips
>    web site at:
>
>      http://www.ece.cmu.edu/~ips
>
>  >>
>  >> 4.  Another storage company is looking at Reliable UDP as a substitute
>  >> for TCP in storage data transfers.  Where can I learn more about this
>  >> protocol, which I am told was introduced many years ago by Cisco?
>
>   Companies to look at include:
>
>      nishansystems.com
>      interprophet.com
>      san.com
>      arkresearch.com
>
>   Also, I believe that the IETF IP over FC working group is now
> looking at FC over IP.
>
>
>
> dave...........
>
> David Nagle
> Director, Parallel Data Lab
> Senior Research Computer Scientist
> School of Computer Science
> Carnegie Mellon University
> Pittsburgh, PA 15213
> 412-268-3898 (office)
> 412-268-3890 (fax)
> http://www.ece.cmu.edu/~bassoon
>
