On 02/05/2013 05:04 PM, Sašo Kiselkov wrote:
> On 01/31/2013 11:16 PM, Albert Shih wrote:
>> Hi all,
>> I'm not sure if the problem is with FreeBSD or ZFS or both, so I'm
>> cross-posting (I know it's bad).
>> I have a server running FreeBSD 9.0 with a 36-disk ZFS pool (not
>> counting /, which is on different disks).
>> Performance on the server itself is very, very good.
>> I have one NFS client running FreeBSD 8.3, and its performance over
>> NFS is also very good.
>> For example, reading from the client and writing over NFS to ZFS:
>> [root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar
>> real 1m7.244s
>> user 0m0.921s
>> sys 0m8.990s
>> This client is on a 1 Gbit/s link and on the same network switch as the server.
>> I have a second NFS client running FreeBSD 9.1-STABLE, and on this
>> second client the performance is catastrophic: after one hour the tar
>> still hadn't finished.
>> Granted, this second client is connected at 100 Mbit/s and is not on
>> the same switch, but going from ~2 min to ~90 min... :-(
>> For this second client I tried, on the ZFS/NFS server,
>> zfs set sync=disabled
>> and that changed nothing.
>> On a third NFS client, a Linux box (recent Ubuntu), I get almost the
>> same (poor) performance, with or without sync=disabled.
>> All three NFS clients use TCP.
>> If I do a plain scp I get normal speed (~9-10 MBytes/s), so the raw
>> network is not the problem.
>> I also tried the following sysctl changes (found with Google):
>> net.inet.tcp.sendbuf_max: 2097152 -> 16777216
>> net.inet.tcp.recvbuf_max: 2097152 -> 16777216
>> net.inet.tcp.sendspace: 32768 -> 262144
>> net.inet.tcp.recvspace: 65536 -> 262144
>> net.inet.tcp.mssdflt: 536 -> 1452
>> net.inet.udp.recvspace: 42080 -> 65535
>> net.inet.udp.maxdgram: 9216 -> 65535
>> net.local.stream.recvspace: 8192 -> 65535
>> net.local.stream.sendspace: 8192 -> 65535
>> and that changed nothing either.
>> Does anyone have any idea?
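[For reference: sysctl values like those listed above are usually applied at runtime with sysctl(8), and persisted across reboots in /etc/sysctl.conf on FreeBSD. A config sketch, with the TCP values copied from the poster's list:]

```
# /etc/sysctl.conf -- persist the TCP buffer tuning across reboots
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```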
> What you describe sounds like a bad networking issue. Check your network
> with the usual tools (ping, mtr, netperf, etc.). Verify cabling and
> interface counters on your machines too, for things like CRC errors or
> jabbers - a few of those and the throughput of a TCP link goes down the
> drain.
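[To make the counter check concrete, here is a minimal sketch: on the real machines you would run `netstat -i` directly; a fabricated sample is embedded below so the filter itself is runnable, and `em0` is a placeholder interface name.]

```shell
#!/bin/sh
# Flag interfaces whose Ierrs/Oerrs counters are non-zero.
# On a live box:  netstat -i | awk 'NR > 1 && ($6 != 0 || $8 != 0)'
# Fabricated sample output (placeholder names and counts):
sample='Name    Mtu Network       Address            Ipkts Ierrs    Opkts Oerrs  Coll
em0    1500 <Link#1>      00:25:90:aa:bb:cc 123456   917   98765     0     0
lo0   16384 <Link#2>      localhost           4242     0    4242     0     0'

# Print any interface with a non-zero input or output error count.
printf '%s\n' "$sample" |
awk 'NR > 1 && ($6 != 0 || $8 != 0) { print $1 ": Ierrs=" $6 " Oerrs=" $8 }'
```

Run periodically, a counter that keeps climbing (rather than a fixed historical value) is the one to chase.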
Just one more thing: a plain scp run need not expose the problem. scp is
very unidirectional, and you may be hitting an issue in the opposite
direction (I've seen twisted-pair cables where one pair was fine and the
other was delivering bad data).
Also check for dropped packets on your source and target machines via
tools like DTrace.
zfs-discuss mailing list