Sounds good. Is someone willing to take on this talk? User-driven talks on real experiences are always welcome.
Cheers,
Kristy

> On Jul 11, 2017, at 7:46 AM, Bryan Banister <[email protected]> wrote:
>
> Sounds like a very interesting topic for an upcoming GPFS UG meeting… say SC'17?
> -B
>
> From: [email protected] [mailto:[email protected]] On Behalf Of Andrew Beattie
> Sent: Tuesday, July 11, 2017 5:15 AM
> To: [email protected]
> Cc: [email protected]; [email protected]
> Subject: Re: [gpfsug-discuss] does AFM support NFS via RDMA
>
> Billich,
>
> Reach out to Jake Carrol at Uni of QLD.
>
> UQ have been experimenting with NFS over 10 Gb, 40 Gb and 100 Gb Ethernet,
> and there is a lot of tuning you can do to improve how things work.
>
> Regards,
> Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: [email protected]
>
> ----- Original message -----
> From: "Billich Heinrich Rainer (PSI)" <[email protected]>
> Sent by: [email protected]
> To: "[email protected]" <[email protected]>
> Subject: [gpfsug-discuss] does AFM support NFS via RDMA
> Date: Tue, Jul 11, 2017 7:36 PM
>
> Hello,
>
> We run AFM using NFS as the transport between home and cache. Using
> IP-over-InfiniBand we see a throughput between 1 and 2 GB/s. This is not bad,
> but far from what a native IB link provides (6 GB/s). Does AFM's NFS client
> on the gateway nodes support NFS over RDMA? I would like to try it. Or should
> we tune NFS and the IP stack instead? I wonder whether anybody got throughput
> above 2 GB/s using IPoIB and FDR between two nodes.
>
> We can't use a native GPFS multicluster mount, as this couples home and cache
> too tightly: if home fails, cache will unmount the cache fileset, as far as I
> understand from the manuals.
>
> We run Spectrum Scale 4.2.2/4.2.3 on Red Hat 7.
>
> Thank you,
>
> Heiner Billich
>
> --
> Paul Scherrer Institut
> Heiner Billich
> WHGA 106
> CH 5232 Villigen
> 056 310 36 02
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
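[Editorial note on the RDMA question: the standard Linux NFS client can mount over RDMA with the `rdma` mount option on IANA port 20049, provided the RDMA transport modules are loaded on both ends. Whether AFM's gateway-side NFS mount honors these options is exactly the open question in the thread; the sketch below only shows what a plain NFSv3-over-RDMA mount looks like between two Linux hosts. The server name and export path are hypothetical.]

```shell
# --- NFS server (home) side ---
# Load the server RDMA transport and tell nfsd to listen on the NFS/RDMA port.
modprobe svcrdma
echo "rdma 20049" > /proc/fs/nfsd/portlist

# --- NFS client (cache gateway) side ---
# Load the client RDMA transport module.
modprobe xprtrdma

# Mount the (hypothetical) home export over RDMA; 20049 is the IANA
# port for NFS/RDMA. AFM uses NFSv3 for its home mounts, hence vers=3.
mount -t nfs -o rdma,port=20049,vers=3 home-server:/gpfs/home /mnt/home
```

If this works host-to-host on your fabric, the remaining question is whether the AFM gateway's automatic mount of home can be made to carry the same options.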
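[Editorial note on the tuning Andrew mentions: before reaching for RDMA, IPoIB mode and TCP buffer sizes are the usual first suspects for single-stream throughput stuck near 1–2 GB/s. The settings below are a generic starting sketch for a high-bandwidth, low-latency link, not AFM-specific guidance; the values need testing on your own fabric, and `ib0` is the assumed IPoIB interface name.]

```shell
# Connected-mode IPoIB allows a 64 KiB MTU; datagram mode caps the MTU
# at roughly 2044 bytes, which limits per-stream throughput.
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520

# Raise TCP buffer ceilings so a single stream can fill an FDR link
# (values are illustrative; size them to your bandwidth-delay product).
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"

# More concurrent RPC slots on the NFS client helps keep the pipe full
# (mainly relevant on older kernels with a static slot table).
echo "options sunrpc tcp_slot_table_entries=128" > /etc/modprobe.d/sunrpc.conf
```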
