Brandon High wrote:
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty <j...@netmusician.org> wrote:
things. I've also read this on a VMware forum, although I don't know if it's correct. This is in the context of my asking why I don't seem to have these same load average problems running VirtualBox:

The problem with the VirtualBox comparison is that caching is known to be broken in VirtualBox (it ignores cache flushes, which, by continuing to cache, can "speed up" I/O at the expense of data integrity or loss). This could be playing in your favor from a performance perspective, but it puts your data at risk. Disabling disk caching altogether would be a big hit on the VirtualBox side... Neither solution is ideal.

Check the link that I posted earlier, under "Responding to guest IDE/SATA flush requests". Setting IgnoreFlush to 0 will turn off the extra caching.
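For reference, the knob is a VBoxManage extradata setting; something like the following should do it, where "MyVM" and the controller/LUN path are placeholders that depend on how the virtual disk is attached (double-check against the manual section above for your VirtualBox version):

# VBoxManage setextradata "MyVM" "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0

With IgnoreFlush set to 0, VirtualBox passes the guest's flush requests through to the host instead of caching past them.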
 
Cool, so maybe this guy was going off of earlier information? Was there a time when there was no way to enable cache flushing in VirtualBox?


I've actually never seen much, if any, iowait (%w in iostat output, right?). I've run the zilstat script and am happy to share that output with you if you wouldn't mind taking a look. I'm not sure I'm understanding its output correctly...

You'll see iowait on the VM, not on the zfs server.
 
My mistake; yes, I see pretty significant iowait times on the host... Right now "iostat" is showing 9.30% wait times.
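(For anyone following along, the column I mean is %w from the extended per-device output, e.g.:

# iostat -xn 5

which shows per-device %w, the percent of time transactions are waiting in the queue, and %b, the percent of time the device is busy.)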


 
Will this tuning have an impact on my existing VMDK files? Can you kindly tell me more about this: how can I observe my current recordsize and play around with this setting if it will help? Will adjusting ZFS compression on the share hosting my VMDKs be of any help too? Compression is currently disabled on the ZFS share where my VMDKs are hosted.

No, your existing files will keep whatever recordsize they were created with. You can view or change the recordsize property the same as any other zfs property. You'll have to recreate the files to re-write them with a different recordsize (e.g. cp file.vmdk file.vmdk.foo && mv file.vmdk.foo file.vmdk).
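Roughly, the sequence would look something like this (an untested sketch; tank/vmdks stands in for whatever dataset holds the images, 8K is just an illustrative value rather than a recommendation, and the guest should be shut down while the file is re-copied):

# zfs set recordsize=8K tank/vmdks
# cp file.vmdk file.vmdk.foo && mv file.vmdk.foo file.vmdk

Only blocks written after the property change use the new recordsize, which is why the copy-and-rename step is needed.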
 
This ZFS host serves regular data shares in addition to the VMDKs. All user data on my VM guests that is subject to change is hosted on a ZFS share; only the OS and basic OS applications are saved to my VMDKs.

The property is per dataset. If the vmdk files are in separate datasets (which I recommend), you can adjust the properties or take snapshots of each VM's data separately.
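As a rough illustration (dataset names made up), with one dataset per VM you can do things like:

# zfs get recordsize tank/vms/web tank/vms/db
# zfs snapshot tank/vms/db@before-upgrade

i.e. each dataset reports and keeps its own property values, and you can snapshot or roll back one VM's data without touching the others.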
 


Ahhh! Yes, my VMDKs are on a separate dataset, and recordsizes are set to 128k:

# zfs get recordsize nm/myshare
NAME       PROPERTY    VALUE    SOURCE
nm/myshare  recordsize  128K     default

Do you have a recommendation for a good size to start with for the dataset hosting VMDKs? Half of 128K? A third?


In general, are large files better served with smaller recordsizes, whereas small files are better served with the 128K default?




--
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful, professional, custom designed, career-essential websites that are easy to maintain and to integrate with popular social networks.
www.netmusician.org
j...@netmusician.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
