On Wed, 24 Mar 2010, Dan Naumov wrote:
> Has anyone done any extensive testing of the effects of tuning
> vfs.zfs.vdev.max_pending on this issue? Is there some universally
> recommended value beyond the default 35? Anything else I should be
> looking at?

The vdev.max_pending tunable exists primarily for SAN/HW-RAID LUNs: it dials down LUN service time (svc_t) by limiting the number of outstanding requests per device. It is not terribly useful for reducing stalls caused by zfs writes. To reduce the impact of zfs writes, you instead want to cap the maximum size of a zfs transaction group (TXG). I don't know what the FreeBSD tunable for this is, but under Solaris it is zfs:zfs_write_limit_override.
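For the Solaris case, the cap is set in /etc/system. A sketch (the 1 GB value is an assumption for illustration; tune it against your own workload and memory size):

```
* /etc/system -- cap each TXG at roughly 1 GB (value is in bytes)
* Hypothetical example value; pick a limit suited to your storage bandwidth.
set zfs:zfs_write_limit_override = 0x40000000
```

A reboot is needed for /etc/system changes to take effect.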

On a large-memory system, a properly working zfs should not saturate the write channel for more than five seconds at a time. Zfs tries to learn the write bandwidth so that it can size the TXG up to (at most) five seconds' worth of writes. If you have both large memory and fast storage, a huge amount of data can be written in five seconds. On my Solaris system I found that zfs was quite accurate with its rate estimation, but it still resulted in four gigabytes of data being written per TXG.
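The back-of-envelope arithmetic is simple: TXG size is roughly the learned write bandwidth times the five-second sync window. A small sketch (the 800 MB/s bandwidth figure is an assumption, not from my system):

```python
# Estimate worst-case TXG size: ZFS caps a transaction group at about
# five seconds' worth of writes at the bandwidth it has learned.
txg_window_s = 5          # ZFS TXG sync window, seconds
write_bw_mb_s = 800       # hypothetical learned write bandwidth, MB/s
txg_size_gb = txg_window_s * write_bw_mb_s / 1024
print(f"Worst-case TXG size: ~{txg_size_gb:.1f} GB")
```

At gigabyte-per-second class storage, the same arithmetic easily reaches the multi-gigabyte TXGs I observed.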

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"