[EMAIL PROTECTED] wrote on Fri, 16 Jun 2006 10:19 +0200:
> What is the status, the quality, of the Infiniband
> interface in PVFS2?
> 
> In a few (3-4) months I hope to have 24 nodes with Infiniband, 4x DDR.
> I now have 8 nodes with Myrinet, the 2Gb/s version and I have tested
> the bandwidth and latency.  When presenting my results I would like
> to mention what we might expect for Infiniband.  I have not seen messages on
> the Infiniband interface, so I don't know whether it is already very
> mature or whether there is just little interest.  By the way, what is
> particularly important for my application is low latency -- if anyone
> can comment on that aspect in particular, I would appreciate it.

Pvfs2/ib is in pretty good shape.  There are no recent published
papers (yet) that would give you performance numbers to report,
though one has been submitted to a conference.

We recently ported the pvfs2 InfiniBand interface to support both
the VAPI (i.e. Mellanox IBGD) and OpenFabrics (aka OpenIB) programming
interfaces.  They share a common code base, except for
interface-specific functions, so maintaining both should not be any
more difficult than maintaining one.  Supporting OpenIB also means
we support iWarp RDMA-enabled Ethernet devices for free.  This new
code will appear in the next release, which is somewhat imminent.

We have one outstanding issue with thread starvation when using a
Linux 2.4 kernel, but this will be addressed soon by optionally
using event-driven network operations rather than the current
polling scheme.

As far as latency goes, don't expect IB to improve things much over
Ethernet.  Most of the overhead for, say, a file stat operation or
directory read comes from the client and server processing, not
from network latencies.

                -- Pete

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
