On 2017-12-05 10:20, Dustin Wenz wrote:
> Thanks for linking that resource. The purpose of my posting was to increase 
> the body of knowledge available to people who are running bhyve on zfs. It's 
> a versatile way to deploy guests, but I haven't seen much practical advice 
> about doing it efficiently. 
> Allan's explanation yesterday of how allocations are padded is exactly the 
> sort of breakdown I could have used when I first started provisioning VMs. 
> I'm sure other people will find this conversation useful as well.
>       - Dustin

This subject is covered in detail in chapter 9 (Tuning) of "FreeBSD
Mastery: Advanced ZFS", available from http://www.zfsbook.com/ or any
finer bookstore.
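
For anyone without the book handy, the short version of the padding math
(back-of-the-envelope, under assumptions the thread doesn't state: 4-disk
raidz1, ashift=12, i.e. 4 KiB sectors): each block needs ceil(size / 4 KiB)
data sectors, plus one parity sector per row of up to three data sectors,
and the whole allocation is then rounded up to a multiple of parity + 1 = 2
sectors so the leftover gaps remain allocatable. An 8 KiB volblock is 2 data
+ 1 parity = 3 sectors, padded to 4, i.e. 16 KiB on disk (2x); a 128 KiB
volblock is 32 + 11 = 43 sectors, padded to 44, i.e. about 1.4x. A 512 B
volblock still consumes a full 4 KiB data sector plus a 4 KiB parity sector,
16x its logical size before compression claws anything back, which is why
the smallest volblocksize is so catastrophic.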

>> On Dec 4, 2017, at 9:37 PM, Adam Vande More <amvandem...@gmail.com> wrote:
>> On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz <dustinw...@ebureau.com> wrote:
>>> I'm starting a new thread based on the previous discussion in "bhyve uses 
>>> all available memory during IO-intensive operations" relating to size 
>>> inflation of bhyve data stored on zvols. I've done some experimenting with 
>>> this, and I think it will be useful for others.
>>> The zvols listed here were created with this command:
>>>         zfs create -o volmode=dev -o volblocksize=Xk -V 30g vm00/chyves/guests/myguest/diskY
>>> The zvols were created on a raidz1 pool of four disks. For each zvol, I 
>>> created a basic ZFS filesystem in the guest using all default tuning (128k 
>>> recordsize, etc.). I then copied the same 8.2GB dataset to each filesystem.
>>>         volblocksize    size amplification
>>>         512B            11.7x
>>>         4k              1.45x
>>>         8k              1.45x
>>>         16k             1.5x
>>>         32k             1.65x
>>>         64k             1x
>>>         128k            1x
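
For anyone reproducing these numbers: one rough way to read the amplification
straight off a zvol is to compare its logical and allocated byte counts (the
path is the example dataset from this thread):

        zfs get -p used,logicalused,volblocksize vm00/chyves/guests/myguest/diskY

Dividing "used" by "logicalused" approximates the factor; on raidz the parity
and padding overhead is reflected in "used".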
>>> The worst case is the 512B volblocksize, where the space used is more than 
>>> 11 times the size of the data stored within the guest. Efficiency does not 
>>> improve monotonically as the block size doubles from 4k upward; 32k is the 
>>> second-worst result after 512B. The amount of wasted space was minimized 
>>> by using 64k and 128k blocks.
>>> It would appear that 64k is a good choice for volblocksize if you are using 
>>> a zvol to back your VM, and the VM is using the virtual device for a zpool. 
>>> Incidentally, I believe this is the default when creating VMs in FreeNAS.
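
One caveat worth adding here: volblocksize is fixed at creation time, so
retuning an existing guest means creating a new zvol and migrating the data.
A minimal sketch, using the naming scheme from above (diskZ is a hypothetical
new device):

        # create the replacement volume with the new block size
        zfs create -o volmode=dev -o volblocksize=64k -V 30g vm00/chyves/guests/myguest/diskZ
        # raw copy; volmode=dev exposes the device nodes under /dev/zvol
        dd if=/dev/zvol/vm00/chyves/guests/myguest/diskY \
           of=/dev/zvol/vm00/chyves/guests/myguest/diskZ bs=1m

The writes land in 64k volblocks on the new volume, though a raw copy also
hauls over whatever free space the guest filesystem contains; a send/receive
from inside the guest avoids that.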
>> I'm not sure what your purpose is behind the posting, but if it's simply a 
>> "why this behavior" question, you can find more detail here, as well as some 
>> of the calculation legwork:
>> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
>> -- 
>> Adam
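
To save anyone the spreadsheet: here is a back-of-the-envelope calculator
following the allocation rules described in that Delphix post. It is only a
sketch; it assumes a 4-disk raidz1 with ashift=12 and ignores compression and
metadata, so it won't reproduce the measured numbers exactly, but it shows
the same shape:

        #!/bin/sh
        # Estimate raidz1 on-disk allocation per volblock.
        # Assumptions (not from this thread): 4 disks, 1 parity, ashift=12.
        SECTOR=4096; DISKS=4; PARITY=1
        for BS in 4096 8192 16384 32768 65536 131072; do
            data=$(( (BS + SECTOR - 1) / SECTOR ))                      # data sectors
            par=$(( (data + DISKS - PARITY - 1) / (DISKS - PARITY) ))   # one parity sector per row
            total=$(( data + par ))
            # raidz rounds each allocation up to a multiple of parity+1 sectors
            alloc=$(( (total + PARITY) / (PARITY + 1) * (PARITY + 1) ))
            echo "${BS}B volblock -> $(( alloc * SECTOR ))B allocated ($(( alloc * SECTOR * 100 / BS ))%)"
        done

Under those assumptions a 16k volblock comes out at 150%, matching the 1.5x
measured above; 64k and 128k come out around 137%, with the gap to the
measured 1x presumably absorbed by compression.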

Allan Jude