On 19/02/10 22:41, Pádraig Brady wrote:
Note the Linux NFS client should be doing adaptive
readahead to insulate you from this latency issue.
Check the rsize NFS mount option, as it may limit
the size of the readahead done.
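
(A quick way to check what rsize was actually negotiated -- a rough
sketch, assuming the export is mounted on /mnt; the path is only an
example:)

  MNT=/mnt
  # rsize (in bytes) from the mount options in /proc/mounts
  awk -v m="$MNT" '$2 == m { print $4 }' /proc/mounts | tr ',' '\n' | grep '^rsize='
  # "nfsstat -m" also lists the mount options, including rsize.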

I just noticed the following message, which shows
some of the current Linux knobs in this area:

-------- Original Message --------
Subject: [RFC] nfs: use 2*rsize readahead size
Date: Wed, 24 Feb 2010 10:41:01 +0800
From: Wu Fengguang <[email protected]>
To: Trond Myklebust <[email protected]>
CC: [email protected], [email protected],
    Linux Memory Management List <[email protected]>,
    LKML <[email protected]>

With the default rsize=512k and NFS_MAX_READAHEAD=15, the current NFS
readahead size of 512k*15=7680k is much larger than necessary for typical
clients.

On an e1000e--e1000e connection, I got the following numbers:

        readahead size          throughput
                   16k           35.5 MB/s
                   32k           54.3 MB/s
                   64k           64.1 MB/s
                  128k           70.5 MB/s
                  256k           74.6 MB/s
rsize ==>         512k           77.4 MB/s
                 1024k           85.5 MB/s
                 2048k           86.8 MB/s
                 4096k           87.9 MB/s
                 8192k           89.0 MB/s
                16384k           87.7 MB/s

So it seems that readahead_size=2*rsize (i.e. keeping two RPC requests in
flight) already gets close to the full NFS bandwidth.
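
(A minimal sketch of applying that rule of thumb by hand -- untested,
and it assumes the export is mounted on /mnt; note that rsize in
/proc/mounts is in bytes while read_ahead_kb is in kilobytes:)

#!/bin/sh
# Set the client readahead for one NFS mount to 2*rsize.
MNT=/mnt
# negotiated rsize in bytes, from the mount options
rsize=$(awk -v m="$MNT" '$2 == m { print $4 }' /proc/mounts | tr ',' '\n' | sed -n 's/^rsize=//p')
# BDI id (major:minor) of the mount, via mountpoint(1) from util-linux
BDI=$(mountpoint -d "$MNT")
# read_ahead_kb is in KB, so convert from bytes
echo $(( 2 * rsize / 1024 )) > /sys/devices/virtual/bdi/$BDI/read_ahead_kb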

The test script is:

#!/bin/sh

file=/mnt/sparse        # large file on the NFS mount to read sequentially
BDI=0:15                # BDI id (major:minor) of the NFS mount

for rasize in 16 32 64 128 256 512 1024 2048 4096 8192 16384
do
        # drop the page cache so every run actually reads from the server
        echo 3 > /proc/sys/vm/drop_caches
        # set the readahead window for this mount, in KB
        echo $rasize > /sys/devices/virtual/bdi/$BDI/read_ahead_kb
        echo readahead_size=${rasize}k
        # sequentially read ~4GB and let dd report the throughput
        dd if=$file of=/dev/null bs=4k count=1024000
done
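
(The BDI=0:15 above is just the device id of the NFS mount on the test
box.  To adapt the script, the major:minor for your own mount point --
/mnt is only an example here -- can be read from /proc/self/mountinfo:)

  awk '$5 == "/mnt" { print $3 }' /proc/self/mountinfo

mountpoint -d /mnt, as in the sketch further up, reports the same value.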

