Hi folks,

some extra thoughts:

1. Don't question why. :) I'm playing and observing, so I ultimately know and understand the best way to do things! heh.

2. In fairness, asking why is entirely valid. ;) I'm not doing things to best practice just yet; I wanted the best performance for my VMs, which are all testing/training/playing VMs. I got *great* performance from the first raw partition I gave to VirtualBox. I wanted to do the same for the second instance, but because of the way VirtualBox wraps partitions, Solaris complains that there is more than one Solaris2 partition on the disk when I try to install it, so I thought I'd give zvols a go.

3. The device I wrap as a VMDK is the raw zvol device. Sigh. Of course all writes will go through the ZIL, and of course we'll have to write twice as much. I should have seen that straight away, but I was lacking sleep.

4. Note: I don't have a separate ZIL. The first partition I made was given directly to VirtualBox; the second was used to create the zpool.
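For anyone wanting to reproduce the raw-zvol-as-VMDK setup described above, it goes roughly like this (a sketch only; the pool, zvol, and file names are placeholders, and on Solaris the raw character device for a zvol lives under /dev/zvol/rdsk/):

```shell
# Create a zvol on an existing pool (names here are hypothetical)
zfs create -V 60G tank/vbox/sol11

# Wrap the raw zvol device in a VMDK descriptor so VirtualBox
# can use it as a virtual disk. Use the raw (rdsk) device path.
VBoxManage internalcommands createrawvmdk \
    -filename /vms/sol11.vmdk \
    -rawdisk /dev/zvol/rdsk/tank/vbox/sol11
```

The resulting .vmdk is just a small descriptor file pointing at the device; attach it to the VM as you would any other disk image.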

I'm going to have a play with using LVM md devices instead and see how that goes as well.

Overall, the doubling of write bandwidth seems like a big downer for *my* configuration, since I have just the one SSD, but I'll persist and see what I can get out of it.
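One knob that may soften the double-write penalty on a single SSD is the zvol's synchronous-write behaviour. A hedged sketch (the dataset name is hypothetical, and sync=disabled trades guest crash safety for speed, so it's only defensible for throwaway test VMs like these):

```shell
# Inspect the current synchronous-write settings on the zvol
zfs get sync,logbias tank/vbox/sol11

# Option 1: bias log traffic toward the main pool rather than
# dedicated log handling (helps large streaming sync writes)
zfs set logbias=throughput tank/vbox/sol11

# Option 2 (risky): skip the ZIL entirely; recent guest writes
# can be lost on host power failure
zfs set sync=disabled tank/vbox/sol11
```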

Thanks for the thoughts thus far!



On 21/11/2012 8:33 AM, Fajar A. Nugraha wrote:
> On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
> <opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
>> Why are you partitioning, then creating a zpool,
> In the common case it's because the disk is used for something
> else as well (e.g. the OS), not only for ZFS.
>
>> and then creating a zvol?
> Because it lets you do other stuff more easily and quickly (e.g.
> copying files from the host) compared to using plain disk image files.
>
>> I think you should make the whole disk a zpool unto itself, and then carve out
>> the 128G zvol and 60G zvol.  For that matter, why are you carving out multiple
>> zvols?  Does your guest VM really want multiple virtual disks for some reason?
>>
>> Side note:  Assuming you *really* just want a single guest to occupy the whole
>> disk and run as fast as possible...  If you want to snapshot your guest, you
>> should make the whole disk one zpool, and then carve out a zvol which is
>> significantly smaller than 50%; say, perhaps 40% or 45% might do the trick.
> ... or use sparse zvols, e.g. "zfs create -V 10G -s tank/vol1"
>
> Of course, that's assuming you KNOW that you never max out storage use
> on that zvol. If you don't have control over that, then using a smaller
> zvol size is indeed preferable.
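For what it's worth, the difference behind Fajar's sparse-zvol suggestion shows up in the refreservation property: a regular zvol reserves its full volsize up front (which is why a zvol near 50% of the pool leaves no room for snapshots), while -s leaves it unreserved. A quick sketch with placeholder names:

```shell
# Regular zvol: refreservation is set to (roughly) the volsize
zfs create -V 10G tank/vol0
zfs get volsize,refreservation tank/vol0

# Sparse zvol: same apparent size, but no space reserved up front
zfs create -V 10G -s tank/vol1
zfs get volsize,refreservation tank/vol1
```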

zfs-discuss mailing list