Re: NFS alternatives (was: Re: Storage overhead on zvols)

2017-12-06 Thread Rodney W. Grimes
> Hi all,
> 
> > On 05.12.2017, at 17:41, Rodney W. Grimes wrote:
> > In effect what you're asking for is what NFS does, so use NFS and get
> > over the fact that this is the way to get what you want.  Sure you
> > could implement a virt-vfs, but I wonder how close the spec of that
> > would be to the spec of NFS.
> 
> I figure it should be possible to implement something simpler
> than NFS that provides full local posix semantics under the
> constraint that only one "client" is allowed to mount the FS
> at a time.
> 
> I see quite a few applications for something like this, specifically
> in "hyperconvergent" environments. Or vagrant, of course.
> 
> *scratching head* isn't this what Sun's "network disk" protocol provided?

nd provided a 512-byte block device with no file system semantics at all;
I believe it did allow 1 writer and N readers, though.

Today you would use iSCSI in place of nd.
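
For reference, here is a minimal sketch of exporting a zvol over iSCSI
with ctld(8) and attaching it with iscsictl(8); the target name and
addresses are hypothetical, and the zvol path just reuses the example
from elsewhere in the thread:

# Hedged sketch, not from the thread: export a zvol as an iSCSI LUN.
cat >> /etc/ctl.conf <<'EOF'
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}
target iqn.2017-12.org.example:guest0 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/vm00/chyves/guests/myguest/disk0
        blocksize 4096
    }
}
EOF
sysrc ctld_enable=YES
service ctld start

# On the initiator (the VM, or any other host with network access):
sysrc iscsid_enable=YES
service iscsid start
iscsictl -A -p 10.0.0.1 -t iqn.2017-12.org.example:guest0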

-- 
Rod Grimes rgri...@freebsd.org


Re: NFS alternatives (was: Re: Storage overhead on zvols)

2017-12-06 Thread Adam Vande More
On Wed, Dec 6, 2017 at 2:45 AM, Patrick M. Hausen  wrote:

> Hi all,
>
> > On 05.12.2017, at 17:41, Rodney W. Grimes
> > <freebsd-...@pdx.rh.cn85.dnsmgr.net> wrote:
> > In effect what you're asking for is what NFS does, so use NFS and get
> > over the fact that this is the way to get what you want.  Sure you
> > could implement a virt-vfs, but I wonder how close the spec of that
> > would be to the spec of NFS.
>
> I figure it should be possible to implement something simpler
> than NFS that provides full local posix semantics under the
> constraint that only one "client" is allowed to mount the FS
> at a time.
>
> I see quite a few applications for something like this, specifically
> in "hyperconvergent" environments. Or vagrant, of course.
>
> *scratching head* isn't this what Sun's "network disk" protocol provided?
>
> Kind regards,
> Patrick


Like this?

https://www.freebsd.org/doc/handbook/geom-ggate.html
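
A minimal sketch of that ggate approach, with hypothetical addresses and
the zvol path reused from elsewhere in the thread (like nd and iSCSI,
ggate exports a block device, so the file system still lives in the
single client):

# On the server: export the device to one client network, then start ggated.
echo "10.0.0.0/24 RW /dev/zvol/vm00/chyves/guests/myguest/disk0" > /etc/gg.exports
ggated

# On the (single) client: attach it, then treat it like a local disk.
ggatec create -o rw 10.0.0.1 /dev/zvol/vm00/chyves/guests/myguest/disk0
# ggatec prints the new device name, e.g. ggate0
newfs /dev/ggate0
mount /dev/ggate0 /mnt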

-- 
Adam


Re: NFS alternatives (was: Re: Storage overhead on zvols)

2017-12-06 Thread P Vix


On December 6, 2017 5:45:47 PM GMT+09:00, "Patrick M. Hausen"  
wrote:
>Hi all,
>
>I see quite a few applications for something like this, specifically
>in "hyperconvergent" environments. Or vagrant, of course.

+1.

>*scratching head* isn't this what Sun's "network disk" protocol
>provided?

No. nd was like iSCSI.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


NFS alternatives (was: Re: Storage overhead on zvols)

2017-12-06 Thread Patrick M. Hausen
Hi all,

> On 05.12.2017, at 17:41, Rodney W. Grimes wrote:
> In effect what you're asking for is what NFS does, so use NFS and get
> over the fact that this is the way to get what you want.  Sure you
> could implement a virt-vfs, but I wonder how close the spec of that
> would be to the spec of NFS.

I figure it should be possible to implement something simpler
than NFS that provides full local posix semantics under the
constraint that only one "client" is allowed to mount the FS
at a time.

I see quite a few applications for something like this, specifically
in "hyperconvergent" environments. Or vagrant, of course.

*scratching head* isn't this what Sun's "network disk" protocol provided?

Kind regards,
Patrick
-- 
punkt.de GmbH   Internet - Dienstleistungen - Beratung
Kaiserallee 13a Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe i...@punkt.de   http://punkt.de
AG Mannheim 108285  Gf: Juergen Egeling



Re: Storage overhead on zvols

2017-12-05 Thread Dustin Wenz

> On Dec 5, 2017, at 10:41 AM, Rodney W. Grimes 
>  wrote:
> 
>> 
>> 
>> Dustin Wenz wrote:
>>> I'm not using ZFS in my VMs for data integrity (the host already
>>> provides that); it's mainly for the easy creation and management of
>>> filesystems, and the ability to do snapshots for rollback and
>>> replication.
>> 
>> snapshot and replication works fine on the host, acting on the zvol.
> 
> I suspect he is snapshotting and doing send/recvs of something
> much less than the zvol, probably some datasets, maybe boot
> environments. A snapshot of the whole zvol is OK if you're managing
> data at the VM level, but not so good if you've got lots of stuff
> going on inside the VM.

Exactly, it's useful to have control of each filesystem discretely.

>>> Some of my deployments have hundreds of filesystems in
>>> an organized hierarchy, with delegated permissions and automated
>>> snapshots, send/recvs, and clones for various operations.
>> 
>> what kind of zpool do you use in the guest, to avoid unwanted additional 
>> redundancy?
> 
> Just a simple stripe of 1 device would be my guess, though you're
> still gonna have metadata redundancy.

Also correct; just using the zvol virtual device as a single-disk pool.


>> 
>> did you benchmark the space or time efficiency of ZFS vs. UFS?
>> 
>> in some bsd related meeting this year i asked allan jude for a bhyve 
>> level null mount, so that we could access at / inside the guest some 
>> subtree of the host, and avoid block devices and file systems 
>> altogether. right now i have to use nfs for that, which is irritating.
> 
> This is not as simple as it seems; remember, bhyve is just presenting
> a hardware environment, and hardware environments don't have a file
> system concept per se, unlike jails, which provide a software environment.
> 
> In effect what you're asking for is what NFS does, so use NFS and get
> over the fact that this is the way to get what you want.  Sure you
> could implement a virt-vfs, but I wonder how close the spec of that
> would be to the spec of NFS.
> 
> Or maybe that's the answer: implement virt-vfs as a more efficient way
> to transport NFS calls in and out of the guest.


I've not done any deliberate comparisons for latency or throughput. What I've 
decided to virtualize does not have any exceptional performance requirements. 
If I need the best possible IO, I would lean toward using jails instead of a 
hypervisor.

- .Dustin






Re: Storage overhead on zvols

2017-12-05 Thread Rodney W. Grimes
> 
> 
> Dustin Wenz wrote:
> > I'm not using ZFS in my VMs for data integrity (the host already
> > provides that); it's mainly for the easy creation and management of
> > filesystems, and the ability to do snapshots for rollback and
> > replication.
> 
> snapshot and replication works fine on the host, acting on the zvol.

I suspect he is snapshotting and doing send/recvs of something
much less than the zvol, probably some datasets, maybe boot
environments. A snapshot of the whole zvol is OK if you're managing
data at the VM level, but not so good if you've got lots of stuff
going on inside the VM.

> > Some of my deployments have hundreds of filesystems in
> > an organized hierarchy, with delegated permissions and automated
> > snapshots, send/recvs, and clones for various operations.
> 
> what kind of zpool do you use in the guest, to avoid unwanted additional 
> redundancy?

Just a simple stripe of 1 device would be my guess, though you're
still gonna have metadata redundancy.

> 
> did you benchmark the space or time efficiency of ZFS vs. UFS?
> 
> in some bsd related meeting this year i asked allan jude for a bhyve 
> level null mount, so that we could access at / inside the guest some 
> subtree of the host, and avoid block devices and file systems 
> altogether. right now i have to use nfs for that, which is irritating.

This is not as simple as it seems; remember, bhyve is just presenting
a hardware environment, and hardware environments don't have a file
system concept per se, unlike jails, which provide a software environment.

In effect what you're asking for is what NFS does, so use NFS and get
over the fact that this is the way to get what you want.  Sure you
could implement a virt-vfs, but I wonder how close the spec of that
would be to the spec of NFS.

Or maybe that's the answer: implement virt-vfs as a more efficient way
to transport NFS calls in and out of the guest.

-- 
Rod Grimes rgri...@freebsd.org


Re: Storage overhead on zvols

2017-12-05 Thread Allan Jude
On 2017-12-05 10:20, Dustin Wenz wrote:
> Thanks for linking that resource. The purpose of my posting was to increase 
> the body of knowledge available to people who are running bhyve on ZFS. It's 
> a versatile way to deploy guests, but I haven't seen much practical advice 
> about doing it efficiently. 
> 
> Allan's explanation yesterday of how allocations are padded is exactly the 
> sort of breakdown I could have used when I first started provisioning VMs. 
> I'm sure other people will find this conversation useful as well.
> 
>   - .Dustin
> 

This subject is covered in detail in chapter 9 (Tuning) of "FreeBSD
Mastery: Advanced ZFS", available from http://www.zfsbook.com/ or any
finer book store.

>> On Dec 4, 2017, at 9:37 PM, Adam Vande More  wrote:
>>
>> On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz  wrote:
>> I'm starting a new thread based on the previous discussion in "bhyve uses 
>> all available memory during IO-intensive operations" relating to size 
>> inflation of bhyve data stored on zvols. I've done some experimenting with 
>> this, and I think it will be useful for others.
>>
>> The zvols listed here were created with this command:
>>
>> zfs create -o volmode=dev -o volblocksize=Xk -V 30g 
>> vm00/chyves/guests/myguest/diskY
>>
>> The zvols were created on a raidz1 pool of four disks. For each zvol, I 
>> created a basic zfs filesystem in the guest using all default tuning (128k 
>> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
>>
>> volblocksize    size amplification
>>
>> 512B            11.7x
>> 4k              1.45x
>> 8k              1.45x
>> 16k             1.5x
>> 32k             1.65x
>> 64k             1x
>> 128k            1x
>>
>> The worst case is with a 512B volblocksize, where the space used is more 
>> than 11 times the size of the data stored within the guest. The size 
>> efficiency gains are non-linear as I continue from 4k and double the block 
>> sizes; 32k blocks being the second-worst. The amount of wasted space was 
>> minimized by using 64k and 128k blocks.
>>
>> It would appear that 64k is a good choice for volblocksize if you are using 
>> a zvol to back your VM, and the VM is using the virtual device for a zpool. 
>> Incidentally, I believe this is the default when creating VMs in FreeNAS.
>>
>> I'm not sure what your purpose is behind the posting, but if it's simply a 
>> "why this behavior?" question, you can find more detail here, as well as 
>> some of the calculation legwork:
>>
>> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
>>
>> -- 
>> Adam
> 


-- 
Allan Jude


Re: Storage overhead on zvols

2017-12-05 Thread Paul Vixie



Patrick M. Hausen wrote:

I'm not an FS developer, but from experience as an admin, that
feature - nullfs mounts into a hypervisor - while greatly desired,
looks quite nontrivial to implement.


i think what's called for is a vdd of some kind, similar to the virtual 
ethernet and virtual disk drivers. yes, it would appear in the guest at 
the vfs layer. i'm surprised that the qemu community doesn't already 
have it.


this is something virtualbox gets wrong, by the way. it offers something 
that sounds like what i want, but then implements it as SMB.


don't get me wrong -- UFS and NFS work for me, and i love bhyve as-is.

--
P Vixie



Re: Storage overhead on zvols

2017-12-05 Thread Patrick M. Hausen
Hi all,

> On 05.12.2017, at 16:41, Paul Vixie wrote:
> in some bsd related meeting this year i asked allan jude for a bhyve level 
> null mount,
> so that we could access at / inside the guest some subtree of the host, and 
> avoid block
> devices and file systems altogether. right now i have to use nfs for that, 
> which is irritating.

I'm not an FS developer, but from experience as an admin, that
feature - nullfs mounts into a hypervisor - while greatly desired,
looks quite nontrivial to implement.

Jordan went to 9Pfs for the now-discontinued FreeNAS Corral
at iX. If it had been easy to do at the VFS layer, I doubt they
would have gone that way.

Kind regards,
Patrick
-- 
punkt.de GmbH   Internet - Dienstleistungen - Beratung
Kaiserallee 13a Tel.: 0721 9109-0 Fax: -100
76133 Karlsruhe i...@punkt.de   http://punkt.de
AG Mannheim 108285  Gf: Juergen Egeling



Re: Storage overhead on zvols

2017-12-05 Thread Paul Vixie



Dustin Wenz wrote:

I'm not using ZFS in my VMs for data integrity (the host already
provides that); it's mainly for the easy creation and management of
filesystems, and the ability to do snapshots for rollback and
replication.


snapshot and replication works fine on the host, acting on the zvol.


Some of my deployments have hundreds of filesystems in
an organized hierarchy, with delegated permissions and automated
snapshots, send/recvs, and clones for various operations.


what kind of zpool do you use in the guest, to avoid unwanted additional 
redundancy?


did you benchmark the space or time efficiency of ZFS vs. UFS?

in some bsd related meeting this year i asked allan jude for a bhyve 
level null mount, so that we could access at / inside the guest some 
subtree of the host, and avoid block devices and file systems 
altogether. right now i have to use nfs for that, which is irritating.


--
P Vixie



Re: Storage overhead on zvols

2017-12-05 Thread Rodney W. Grimes
> I'm not using ZFS in my VMs for data integrity (the host already provides 
> that); it's mainly for the easy creation and management of filesystems, and 
> the ability to do snapshots for rollback and replication. Some of my 
> deployments have hundreds of filesystems in an organized hierarchy, with 
> delegated permissions and automated snapshots, send/recvs, and clones for 
> various operations.

I architect things in such a way that I have 1 VM used as a NAS that
runs ZFS and provides all that nice functionality; all the other
VMs then run with UFS + NFS mounts.  I can actually run a VM with
no local disk by netbooting over iPXE.

I find this very flexible, with minimal impact.  It does create a
single point of failure, though that could be cured with some
redundancy.
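
For the record, a minimal sketch of that layout, with hypothetical
dataset, network and host names (the NAS VM shares a ZFS dataset over
NFS; each UFS worker VM just mounts it):

# On the ZFS NAS VM:
zfs create tank/exports/www
zfs set sharenfs="-maproot=root -network=10.0.0.0/24" tank/exports/www
sysrc rpcbind_enable=YES mountd_enable=YES nfs_server_enable=YES
service rpcbind start
service mountd start
service nfsd start

# On a UFS worker VM (or put the equivalent line in /etc/fstab):
mkdir -p /data
mount -t nfs nas.internal:/tank/exports/www /data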

-- 
Rod Grimes rgri...@freebsd.org


Re: Storage overhead on zvols

2017-12-05 Thread Dustin Wenz
I'm not using ZFS in my VMs for data integrity (the host already provides 
that); it's mainly for the easy creation and management of filesystems, and the 
ability to do snapshots for rollback and replication. Some of my deployments 
have hundreds of filesystems in an organized hierarchy, with delegated 
permissions and automated snapshots, send/recvs, and clones for various 
operations.

- .Dustin

> On Dec 5, 2017, at 9:22 AM, Paul Vixie  wrote:
> 
> the surprising fact that came up in recent threads is that some of you run 
> zfs in your guests. that's quite a bit of unnec'y redundancy and other 
> overheads. i am using UFS in my guests.





Re: Storage overhead on zvols

2017-12-05 Thread Paul Vixie
the surprising fact that came up in recent threads is that some of you 
run zfs in your guests. that's quite a bit of unnec'y redundancy and 
other overheads. i am using UFS in my guests.



Re: Storage overhead on zvols

2017-12-05 Thread Dustin Wenz
Thanks for linking that resource. The purpose of my posting was to increase the 
body of knowledge available to people who are running bhyve on ZFS. It's a 
versatile way to deploy guests, but I haven't seen much practical advice about 
doing it efficiently. 

Allan's explanation yesterday of how allocations are padded is exactly the sort 
of breakdown I could have used when I first started provisioning VMs. I'm sure 
other people will find this conversation useful as well.

- .Dustin

> On Dec 4, 2017, at 9:37 PM, Adam Vande More  wrote:
> 
> On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz  wrote:
> I'm starting a new thread based on the previous discussion in "bhyve uses all 
> available memory during IO-intensive operations" relating to size inflation 
> of bhyve data stored on zvols. I've done some experimenting with this, and I 
> think it will be useful for others.
> 
> The zvols listed here were created with this command:
> 
> zfs create -o volmode=dev -o volblocksize=Xk -V 30g 
> vm00/chyves/guests/myguest/diskY
> 
> The zvols were created on a raidz1 pool of four disks. For each zvol, I 
> created a basic zfs filesystem in the guest using all default tuning (128k 
> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
> 
> volblocksize    size amplification
> 
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
> 
> The worst case is with a 512B volblocksize, where the space used is more than 
> 11 times the size of the data stored within the guest. The size efficiency 
> gains are non-linear as I continue from 4k and double the block sizes; 32k 
> blocks being the second-worst. The amount of wasted space was minimized by 
> using 64k and 128k blocks.
> 
> It would appear that 64k is a good choice for volblocksize if you are using a 
> zvol to back your VM, and the VM is using the virtual device for a zpool. 
> Incidentally, I believe this is the default when creating VMs in FreeNAS.
> 
> I'm not sure what your purpose is behind the posting, but if it's simply a 
> "why this behavior?" question, you can find more detail here, as well as some 
> of the calculation legwork:
> 
> https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
> 
> -- 
> Adam





Re: Storage overhead on zvols

2017-12-04 Thread Adam Vande More
On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz  wrote:

> I'm starting a new thread based on the previous discussion in "bhyve uses
> all available memory during IO-intensive operations" relating to size
> inflation of bhyve data stored on zvols. I've done some experimenting with
> this, and I think it will be useful for others.
>
> The zvols listed here were created with this command:
>
> zfs create -o volmode=dev -o volblocksize=Xk -V 30g
> vm00/chyves/guests/myguest/diskY
>
> The zvols were created on a raidz1 pool of four disks. For each zvol, I
> created a basic zfs filesystem in the guest using all default tuning (128k
> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
>
> volblocksize    size amplification
>
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
>
> The worst case is with a 512B volblocksize, where the space used is more
> than 11 times the size of the data stored within the guest. The size
> efficiency gains are non-linear as I continue from 4k and double the block
> sizes; 32k blocks being the second-worst. The amount of wasted space was
> minimized by using 64k and 128k blocks.
>
> It would appear that 64k is a good choice for volblocksize if you are
> using a zvol to back your VM, and the VM is using the virtual device for a
> zpool. Incidentally, I believe this is the default when creating VMs in
> FreeNAS.
>

I'm not sure what your purpose is behind the posting, but if it's simply a
"why this behavior?" question, you can find more detail here, as well as
some of the calculation legwork:

https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz

-- 
Adam


Re: Storage overhead on zvols

2017-12-04 Thread Allan Jude
On 12/04/2017 18:19, Dustin Wenz wrote:
> I'm starting a new thread based on the previous discussion in "bhyve uses all 
> available memory during IO-intensive operations" relating to size inflation 
> of bhyve data stored on zvols. I've done some experimenting with this, and I 
> think it will be useful for others.
> 
> The zvols listed here were created with this command:
> 
>   zfs create -o volmode=dev -o volblocksize=Xk -V 30g 
> vm00/chyves/guests/myguest/diskY
> 
> The zvols were created on a raidz1 pool of four disks. For each zvol, I 
> created a basic zfs filesystem in the guest using all default tuning (128k 
> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
> 
>   volblocksize    size amplification
> 
>   512B            11.7x
>   4k              1.45x
>   8k              1.45x
>   16k             1.5x
>   32k             1.65x
>   64k             1x
>   128k            1x
> 
> The worst case is with a 512B volblocksize, where the space used is more than 
> 11 times the size of the data stored within the guest. The size efficiency 
> gains are non-linear as I continue from 4k and double the block sizes; 32k 
> blocks being the second-worst. The amount of wasted space was minimized by 
> using 64k and 128k blocks.
> 
> It would appear that 64k is a good choice for volblocksize if you are using a 
> zvol to back your VM, and the VM is using the virtual device for a zpool. 
> Incidentally, I believe this is the default when creating VMs in FreeNAS.
> 
>   - .Dustin
> 

As I explained a bit in the other thread, this depends a lot on your
VDEV configuration.

Allocations on RAID-Z* must be padded out to a multiple of 1+p (where p
is the parity level).

So on RAID-Z1, all allocations must be divisible by 2.

Of course, any record size less than 4k on drives with 4k sectors would
be rounded up as well.

So, with volblocksize=512, you would end up using 4k for data and 4k for
parity, a waste factor of almost 16x.

4k is a bit better:
Z1: 1 data + 1 parity + 0 padding = 2x
Z2: 1 data + 2 parity + 0 padding = 3x
Z3: 1 data + 3 parity + 0 padding = 4x

8k can be worse, where the RAID-Z padding comes into play:
Z1: 2 data + 1 parity + 1 padding = 2x (expect 1.5x)
Z2: 2 data + 2 parity + 2 padding = 3x (expect 2x)
Z3: 2 data + 3 parity + 3 padding = 4x (expect 2.5x)
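
The same arithmetic as a small sh sketch, for plugging in other
geometries; the numbers below (4 disks, RAID-Z1, 4k sectors, 8k blocks)
are illustrative, and compression is ignored:

#!/bin/sh
disks=4 parity=1 sector=4096 volblocksize=8192

data=$(( (volblocksize + sector - 1) / sector ))      # data sectors
stripe=$(( disks - parity ))                          # data columns per row
par=$(( ((data + stripe - 1) / stripe) * parity ))    # parity sectors
mult=$(( parity + 1 ))                                # pad to a multiple of 1+p
alloc=$(( ((data + par + mult - 1) / mult) * mult ))  # sectors actually allocated

echo "$data data + $par parity + $((alloc - data - par)) padding = $alloc sectors"

For the illustrative values it prints "2 data + 1 parity + 1 padding = 4
sectors", i.e. the 2x case above.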

Finally, all of these nice even numbers can be thrown out once you
enable compression, since some blocks will compress better than others:
an 8k record that fits into one 4k sector, etc.

Also consider that 'zfs' commands show sizes after calculating the
expected RAID-Z parity space consumption, but do not account for losses
to padding, whereas the numbers given by the 'zpool' command are raw
actual storage.
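
A quick sketch of how to see both views, using the zvol naming from
earlier in the thread (the dataset name is whatever you created):

# Dataset view: per the note above, parity is estimated, padding is not.
zfs get -p volsize,volblocksize,used,referenced,logicalreferenced \
    vm00/chyves/guests/myguest/disk0

# Pool view: raw space actually allocated on the vdevs.
zpool list -v vm00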

-- 
Allan Jude


Re: Storage overhead on zvols

2017-12-04 Thread Dustin Marquess
I doubt it's best practice, and I'm sure I'm just crazy for doing it,
but personally I try to match the ZVOL blocksize to whatever the
underlying filesystem's blocksize is.  To me that just makes the most
logical sense.
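
If anyone wants to do the same, a rough sketch (device and dataset names
are hypothetical): check the block size the guest file system actually
uses, then create the backing zvol to match.

# Inside the guest: UFS block size (look for "bsize"), or ZFS recordsize.
dumpfs /dev/vtbd0p2 | grep -w bsize
zfs get recordsize zroot

# On the host: create the zvol with a matching volblocksize
# (32k here is only an example value).
zfs create -o volmode=dev -o volblocksize=32k -V 30g \
    vm00/chyves/guests/myguest/disk1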

-Dustin

On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz  wrote:
> I'm starting a new thread based on the previous discussion in "bhyve uses all 
> available memory during IO-intensive operations" relating to size inflation 
> of bhyve data stored on zvols. I've done some experimenting with this, and I 
> think it will be useful for others.
>
> The zvols listed here were created with this command:
>
> zfs create -o volmode=dev -o volblocksize=Xk -V 30g 
> vm00/chyves/guests/myguest/diskY
>
> The zvols were created on a raidz1 pool of four disks. For each zvol, I 
> created a basic zfs filesystem in the guest using all default tuning (128k 
> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
>
> volblocksize    size amplification
>
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
>
> The worst case is with a 512B volblocksize, where the space used is more than 
> 11 times the size of the data stored within the guest. The size efficiency 
> gains are non-linear as I continue from 4k and double the block sizes; 32k 
> blocks being the second-worst. The amount of wasted space was minimized by 
> using 64k and 128k blocks.
>
> It would appear that 64k is a good choice for volblocksize if you are using a 
> zvol to back your VM, and the VM is using the virtual device for a zpool. 
> Incidentally, I believe this is the default when creating VMs in FreeNAS.
>
> - .Dustin
>


Storage overhead on zvols

2017-12-04 Thread Dustin Wenz
I'm starting a new thread based on the previous discussion in "bhyve uses all 
available memory during IO-intensive operations" relating to size inflation of 
bhyve data stored on zvols. I've done some experimenting with this, and I think 
it will be useful for others.

The zvols listed here were created with this command:

zfs create -o volmode=dev -o volblocksize=Xk -V 30g 
vm00/chyves/guests/myguest/diskY

The zvols were created on a raidz1 pool of four disks. For each zvol, I created 
a basic zfs filesystem in the guest using all default tuning (128k recordsize, 
etc). I then copied the same 8.2GB dataset to each filesystem.

volblocksize    size amplification

512B            11.7x
4k              1.45x
8k              1.45x
16k             1.5x
32k             1.65x
64k             1x
128k            1x

The worst case is with a 512B volblocksize, where the space used is more than 
11 times the size of the data stored within the guest. The size efficiency 
gains are non-linear as I continue from 4k and double the block sizes; 32k 
blocks being the second-worst. The amount of wasted space was minimized by 
using 64k and 128k blocks.
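
For anyone who wants to repeat the test, a sketch of the host-side half
(the newfs and copy steps inside each guest are still manual, and the
names follow the command above):

#!/bin/sh
# Create one 30g zvol per volblocksize under test.
for bs in 512 4k 8k 16k 32k 64k 128k; do
    zfs create -o volmode=dev -o volblocksize=$bs -V 30g \
        "vm00/chyves/guests/myguest/disk_$bs"
done

# After copying the same data into each guest filesystem, compare space
# charged on the host with the logical size the guest wrote.  (Padding
# losses only show up at the zpool level, per Allan's note elsewhere in
# the thread.)
zfs get -p -r referenced,logicalreferenced vm00/chyves/guests/myguest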

It would appear that 64k is a good choice for volblocksize if you are using a 
zvol to back your VM, and the VM is using the virtual device for a zpool. 
Incidentally, I believe this is the default when creating VMs in FreeNAS.

- .Dustin


