On Thu, 17 Jan 2008, Patrick J. LoPresti wrote:

> I need to copy large (> 100GB) files between machines on a fast
> network.  Both machines have reasonably fast disk subsystems, with
> read/write performance benchmarked at > 800 MB/sec. Using 10GigE cards
> and the usual tweaks to tcp_rmem etc., I am getting single-stream TCP
> throughput better than 600 MB/sec.
> 
> My question is how best to move the actual file.  NFS writes appear to
> max out at a little over 100 MB/sec on this configuration.

Did your "usual tweaks" include mounting with -o tcp,rsize=262144,wsize=262144?
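
For reference, a full invocation looks something like this (the server
name, export, and mount point are made up, and the sysctl values are
only examples of the kind of buffer tuning you mentioned):

    # NFS over TCP with 256KB read/write transfer sizes
    mount -t nfs -o tcp,rsize=262144,wsize=262144 server:/export /mnt/export

    # typical socket-buffer tweaks for 10GigE (example values only)
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"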

I should have kept better notes the last time I was experimenting with
this, but from memory, here's what I found:

- If I used three NFS clients and was reading from the page cache on
  the server, I hit 1.2 GB/s total throughput from the server.  The
  client NFS code was maxing out one CPU on each of the client machines.

- The disk subsystem (sw RAID 10, far2 layout) was capable of
  600 MB/s+ when read locally on the NFS server, but topped out around
  250 MB/s when read remotely, no matter how many clients (a quick way
  to reproduce the read numbers follows this list).
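
If anyone wants to reproduce the read numbers, a simple sequential-read
test with dd is enough; the file paths here are hypothetical:

    # on the server: drop the page cache, then read the file locally
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/export/bigfile of=/dev/null bs=1M

    # on a client: read the same file over the NFS mount
    dd if=/mnt/export/bigfile of=/dev/null bs=1M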

My workload was read-intensive, so I didn't experiment with writes...

-dean