On Wed, 05 Mar 2008 at 11:13:14PM -0500, Talpey, Thomas wrote:
> At 08:20 AM 3/3/2008, Pawel Dziekonski wrote:
> >both nfs server and client have Mellanox MT25204 HCAs. tests were
> >done by connecting them port to port (without switch) with DDR
> >cable. reported link was 20Gbps.
>
> What kernel base is your client running also? (uname -a) There are
> some known issues with cached write throughput over NFS above 1Gb,
> that we may be able to work around but it's kernel-dependent.
>
> >currently I have a Flextronix switch that reports itself as
> >"MT47396 Infiniscale-III Mellanox Technologies".
>
> Looking at your results from earlier this month, it's not at all
> clear that the nfs/rdma run was actually using nfs/rdma. The speeds
> and cpu loads were very similar to the ethernet results.
>
> Can we take this offline and look into it more? All on the client,
> I'll be interested in the exact mount command you ran, the output of
> "cat /proc/mounts", the contents of dmesg/kernel logs after a run,
> and the output of "nfsstat".
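[The client-side checks requested above amount to a handful of commands. A minimal sketch follows, assuming the RPC/RDMA mount options described in the kernel's nfs-rdma documentation (`-o rdma,port=20049`); the server name, export path, and mount point are placeholders, not values from this thread:]

```shell
# Sketch of the requested client-side diagnostics for an NFS/RDMA mount.
# "ib-server", "/export", and "/mnt/nfsrdma" are hypothetical placeholders.

# Build the mount invocation; "rdma" and port 20049 are the options the
# kernel nfs-rdma documentation uses for RPC/RDMA mounts.
nfs_rdma_mount_cmd() {
    server="$1"; export_path="$2"; mnt="$3"
    echo "mount.nfs ${server}:${export_path} ${mnt} -o rdma,port=20049"
}

nfs_rdma_mount_cmd ib-server /export /mnt/nfsrdma

# After a benchmark run, collect what was asked for:
echo "cat /proc/mounts     # confirm proto=rdma shows up on the mount"
echo "dmesg | tail -n 100  # kernel log after the run"
echo "nfsstat -c           # client-side RPC statistics"
```

[If the transport really is RDMA, the mount entry in /proc/mounts should carry `proto=rdma`; numbers that track the ethernet results usually mean the mount silently fell back to TCP.]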
Hi,

thanks for answering. I cannot provide any details now; I will repeat
the whole setup and benchmarks ASAP.

Pawel

--
Pawel Dziekonski <[EMAIL PROTECTED]>
Wroclaw Centre for Networking & Supercomputing, HPC Department
Politechnika Wr., pl. Grunwaldzki 9, bud. D2/101, 50-377 Wroclaw, POLAND
phone: +48 71 3202043, fax: +48 71 3225797, http://www.wcss.wroc.pl
_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
