Chris,

The data will be written twice when ZFS is serving NFS. On closing the
file, NFS internally issues an fsync to force the writes to be committed,
which causes the ZIL to immediately write the data to the intent log.
Later the data is written again as part of the pool's transaction group
commit, at which point the intent log blocks are freed.
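To make the accounting concrete, here is a tiny stand-alone model (not
ZFS code; the 60 MB/s figure and the two-way mirror are just assumptions
for illustration) of how a synchronous NFS write stream turns into
roughly twice the pool write bandwidth, before mirroring is even counted:

/*
 * Illustrative model only -- not ZFS code.  Shows where the "missing"
 * bandwidth goes when every NFS write is committed synchronously.
 */
#include <stdio.h>

int
main(void)
{
        double client_mb_s   = 60.0;        /* hypothetical rate reported by dd */
        double zil_writes    = client_mb_s; /* data written to the intent log */
        double txg_writes    = client_mb_s; /* same data written at txg commit */
        double mirror_factor = 2.0;         /* assumed two-way mirror */

        double pool_logical  = zil_writes + txg_writes;
        double disk_physical = pool_logical * mirror_factor;

        printf("client sees:     %6.1f MB/s\n", client_mb_s);
        printf("pool writes:     %6.1f MB/s (ZIL + txg commit)\n", pool_logical);
        printf("physical writes: %6.1f MB/s (after mirroring)\n", disk_physical);
        return (0);
}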

It does seem inefficient to write the data twice. In fact, for blocks
larger than zfs_immediate_write_sz (formerly 64K, now 32K after the fix
for 6440499) we write the data block just once, plus an intent log record
containing its block pointer; during txg commit we simply link that block
into the pool tree. We arrived at 32K as the (current) cutoff point by
experimentation. As the nfsds write at most 32K at a time, they do not
benefit from this path.
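For what it's worth, here is a rough stand-alone sketch of that cutoff
decision. It is not the real ZIL logging code; the record-type names are
only modeled on the ZIL's write record types, and the threshold is
hard-coded at the 32K mentioned above:

/*
 * Rough sketch of the write-policy decision described above.
 * Purely illustrative -- not the actual zfs_log_write() code.
 */
#include <stdio.h>
#include <stddef.h>

#define ZFS_IMMEDIATE_WRITE_SZ  (32 * 1024)     /* current cutoff */

typedef enum {
        WR_NEED_COPY,   /* data copied into the log record: written twice */
        WR_INDIRECT     /* data block written once; record holds its pointer */
} write_policy_t;

static write_policy_t
choose_write_policy(size_t write_size)
{
        return ((write_size > ZFS_IMMEDIATE_WRITE_SZ) ?
            WR_INDIRECT : WR_NEED_COPY);
}

int
main(void)
{
        size_t sizes[] = { 8 * 1024, 32 * 1024, 128 * 1024 };
        size_t i;

        for (i = 0; i < sizeof (sizes) / sizeof (sizes[0]); i++) {
                printf("%6zu bytes -> %s\n", sizes[i],
                    (choose_write_policy(sizes[i]) == WR_INDIRECT) ?
                    "WR_INDIRECT (written once)" :
                    "WR_NEED_COPY (written twice)");
        }
        return (0);
}

A 32K NFS write is not larger than the cutoff, so it takes the
WR_NEED_COPY path and is written both to the intent log and at txg commit.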

Anyway this is an area we are actively working on.

Neil.

Chris Csanady wrote on 06/23/06 23:45:
While dd'ing to an nfs filesystem, half of the bandwidth is unaccounted
for.  What dd reports amounts to almost exactly half of what zpool iostat
or iostat show; even after accounting for the overhead of the two mirrored
vdevs.  Would anyone care to guess where it may be going?

(This is measured over 10 second intervals.  For 1 second intervals,
the bandwidth to the disks jumps around from <40MB/s to >240MB/s)

With a local dd, everything adds up.  This is with a b41 server, and a
MacOS 10.4 nfs client.  I have verified that the bandwidth at the network
interface is approximately that reported by dd, so the issue would appear
to be within the server.

Any suggestions would be welcome.

Chris

--

Neil
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
