At 05:30 PM 9/21/2005, Caitlin Bestler wrote:


On 9/21/05, Sean Hefty <[EMAIL PROTECTED]> wrote:
Caitlin Bestler wrote:
> That's certainly an acceptably low overhead for iWARP IHVs,
> provided there are applications that want this control and
> do *not* also need even more IB-specific CM control. I still
> have the same skepticism I had for the IT-API's exposing
> of paths via a transport neutral API. Namely, is there
> really any basis to select amongst multiple paths from
> transport neutral code? The same applies to caching of
> address translations on a transport neutral basis. Is
> it really possible to do in any way that makes sense?
> Wouldn't caching at a lower layer, with transport/device
> specific knowledge, make more sense?

I guess I view this API as slightly more than just a transport-neutral
connection interface.  I also see it as a way to connect over IB using IP
addresses, which today is possible only by using ib_at.  That is, the API could
do both.


Given that purpose, I can envision an IB-aware application that needs
to use IP addresses and wants to take charge of caching the translation.

But viewing this in a wider scope raises a second question. Shouldn't
iSER be using the same routines to establish connections?
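
For reference, the flow in question - going from an IP address to an IB
connection - would look roughly like the sketch below, using the rdma_cm
calls being proposed.  This is a minimal sketch only: event handling, QP
creation, and error cleanup are omitted, and the exact names may differ
from what finally goes into the tree.

/*
 * Minimal sketch: establish a connection over IB starting from an
 * IP address, via the rdma_cm interface under discussion.
 */
#include <rdma/rdma_cma.h>

static int connect_by_ip(struct sockaddr *dst)
{
    struct rdma_event_channel *ch;
    struct rdma_cm_id *id;
    struct rdma_conn_param param = { 0 };

    ch = rdma_create_event_channel();
    if (!ch || rdma_create_id(ch, &id, NULL, RDMA_PS_TCP))
        return -1;

    /* IP address -> GID translation (the ib_at step). */
    if (rdma_resolve_addr(id, NULL, dst, 2000))
        return -1;
    /* ... wait for RDMA_CM_EVENT_ADDR_RESOLVED ... */

    /* GID -> IB path record (SA path query). */
    if (rdma_resolve_route(id, 2000))
        return -1;
    /* ... wait for RDMA_CM_EVENT_ROUTE_RESOLVED, create the QP ... */

    return rdma_connect(id, &param);
}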

While many applications do use IP addresses, unless one goes the route of defining an IP address per path (something that iSCSI does comprehend today), IB multi-path (and, I suspect, eventually Ethernet's multi-path support) will require interconnect-specific interfaces.

Ideally, applications / ULPs define the destination and QoS requirements - what we used to call an address vector - and middleware maps those to an interconnect-specific path on behalf of the application / ULP.  This is done underneath the API as part of the OS / RDMA infrastructure.

Such an approach works quite well for many applications / ULPs; however, it should not be the only one supported, since it assumes the OS / RDMA infrastructure is robust enough to apply policy-management decisions in conjunction with the fabric management being deployed.  Given that IB SMs will vary in robustness, there must also exist APIs that allow applications / ULPs to comprehend the set of paths and select accordingly.  I can envision how to construct such an interface in an interconnect-independent way, but it requires more "standardization" of what defines the QoS requirements - latency, bandwidth, service rate, no single point of failure, etc.  What I see so far does not address these issues.
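
Just to make the idea concrete, here is a rough sketch of what such an
interconnect-independent interface might look like.  Every name below is
hypothetical - nothing like this exists in the tree or in any proposal
today - and the QoS fields are only the ones listed above.

/*
 * Hypothetical sketch: an interconnect-independent "address vector"
 * pairing a destination with QoS requirements, plus two entry
 * points - one where middleware picks the path underneath the API,
 * and one where the ULP enumerates the candidate paths and applies
 * its own policy.
 */
#include <stdint.h>
#include <sys/socket.h>

struct qos_requirements {
    uint32_t max_latency_usec;      /* 0 = don't care */
    uint32_t min_bandwidth_mbps;    /* 0 = don't care */
    uint32_t min_service_rate;      /* 0 = don't care */
    int      no_single_pt_failure;  /* require disjoint paths */
};

struct address_vector {
    struct sockaddr_storage dest;   /* peer's IP address */
    struct qos_requirements qos;
};

struct path_descriptor;             /* opaque, interconnect-specific */

/* Middleware model: the infrastructure maps the address vector to
 * an interconnect-specific path on the caller's behalf. */
int resolve_path(const struct address_vector *av,
                 struct path_descriptor **path);

/* Escape hatch: return all paths matching the address vector so the
 * application / ULP can comprehend the set and select accordingly. */
int query_paths(const struct address_vector *av,
                struct path_descriptor **paths, int *num_paths);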

Mike
_______________________________________________
openib-general mailing list
[email protected]
http://openib.org/mailman/listinfo/openib-general
