On 2013-01-18 06:35, Thomas Nau wrote:
If almost all of the I/Os are 4K, maybe your ZVOLs should use a volblocksize of
4K? This seems like the most obvious improvement.
4k might be a little small. 8k will have less metadata overhead. In some cases
we've seen good performance on these workloads up through 32k. Real pain
is felt at 128k :-)
My only pain so far is the time a send/receive takes while barely loading the
network at all. VM performance is not something I worry about, as it's already
pretty good.
So the key question for me is whether going from 8k to 16k or even 32k would
help with that problem?
I would guess that increasing the block size would, on one hand, improve
your reads - more user data would be stored contiguously as part of one
ZFS block - so sending the backup streams should become more a matter of
reading and transmitting data and less one of random seeking.
On the other hand, this would likely be paid for with more read-modify-writes
(whenever a larger ZFS block is partially updated by the smaller clusters of
the VM's filesystem) while the overall system is running and serving its
primary purpose. For example, a 4k guest write landing in a 32k zvol block
means ZFS must read the whole 32k block, modify 4k of it, and write the full
32k back out. However, since the guest FS probably stores mostly files larger
than a single cluster, the whole larger backend block would often be rewritten
anyway...
So I think this is something an experiment can show you: whether the gain
for backup (and primary-job) reads outweighs the possible degradation of
primary-job writes.
As for the experiment, I guess you can always make a ZVOL with a different
volblocksize, dd data into it from the production dataset's snapshot, and
attach the VM (or a clone of it) to the newly created copy of its disk image.
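Roughly, a sketch of what I mean - the pool and dataset names and the size
are made up here, so adjust to your setup (and the /dev/zvol paths to your
platform):

  # snapshot the production zvol and clone it, so there is a stable device to copy from
  zfs snapshot tank/vm/disk0@blocktest
  zfs clone tank/vm/disk0@blocktest tank/vm/disk0-src

  # create a test zvol of the same size with the candidate block size
  # (volblocksize can only be set at creation time)
  zfs create -V 40G -o volblocksize=32k tank/vm/disk0-32k

  # copy the data over; a large dd block size keeps this reasonably quick
  dd if=/dev/zvol/rdsk/tank/vm/disk0-src of=/dev/zvol/rdsk/tank/vm/disk0-32k bs=1024k

  # attach the test VM (or its clone) to the new zvol, let it run its usual
  # workload for a while, then see how fast a send stream can be generated
  zfs snapshot tank/vm/disk0-32k@sendtest
  time zfs send tank/vm/disk0-32k@sendtest > /dev/null

Sending to /dev/null on the host takes the network out of the picture, so the
pure read/serialization speed of the 8k and 32k variants can be compared
directly.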
Good luck, and I hope I got Richard's logic right in that answer ;)
//Jim