At 08:20 AM 3/3/2008, Pawel Dziekonski wrote:

>both nfs server and client have Mellanox MT25204 HCAs. tests were done
>by connecting them port to port (without switch) with DDR cable.
>reported link was 20Gbps.
What kernel base is your client running, also? (uname -a) There are some
known issues with cached write throughput over NFS above 1 Gb/s that we
may be able to work around, but the workaround is kernel-dependent.

>currently I have a Flextronix switch that reports itself as "MT47396
>Infiniscale-III Mellanox Technologies".

Looking at your results from earlier this month, it's not at all clear
that the nfs/rdma run was actually using nfs/rdma. The speeds and CPU
loads were very similar to the Ethernet results. Can we take this
offline and look into it more?

All on the client, I'll be interested in the exact mount command you
ran, the output of "cat /proc/mounts", the contents of dmesg/kernel
logs after a run, and the output of "nfsstat".

Tom.

_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
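[As a point of reference for anyone hitting the same question: whether a mount is really using the RDMA transport shows up in the mount options recorded in /proc/mounts, where an NFS/RDMA mount carries proto=rdma. A minimal sketch of that check follows; the helper name and the sample mount line are illustrative, not taken from this thread.]

```shell
#!/bin/sh
# Hypothetical helper: given /proc/mounts-style text on stdin, report
# whether any NFS mount entry is using the RDMA transport (proto=rdma).
check_rdma_mount() {
    if grep -E '\bnfs\b.*proto=rdma' >/dev/null; then
        echo "NFS/RDMA transport in use"
    else
        echo "no NFS/RDMA mounts found"
    fi
}

# Example against a sample (made-up) /proc/mounts line; on a live client
# you would instead run:  check_rdma_mount < /proc/mounts
echo "192.168.0.1:/export /mnt nfs rw,vers=3,proto=rdma,port=20049 0 0" \
    | check_rdma_mount
```

[If the run was silently falling back to IPoIB, the same entry would show proto=tcp instead, which would also explain Ethernet-like throughput and CPU load.]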
