> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
> I chopped it into a few slices - p0 (partition table), p1 128GB, p2 60GB.
> As part of my work, I have used it both as a RAW device (cxtxdxp1) and
> wrapped partition 1 with a virtualbox created VMDK linkage, and it works
> like a champ. :) Very happy with that.
> I then tried creating a new zpool using partition 2 of the disk (zpool
> create c2d0p2) and then carved a zvol out of that (30GB), and wrapped
> *that* in a vmdk.
Why are you partitioning, then creating a zpool, and then creating a zvol?
I think you should make the whole disk a zpool unto itself, and then carve the
128G zvol and 60G zvol out of that. For that matter, why are you carving out
multiple zvols? Does your guest VM really want multiple virtual disks for some
reason?
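That whole-disk layout might look like this (a sketch only; the pool name
ssdpool and the device name c2d0 are assumptions based on the quoted post -
substitute your own):

```shell
# Give ZFS the whole disk rather than a slice; it will label it
# and manage the write cache itself:
zpool create ssdpool c2d0

# Carve the two zvols out of the pool instead of using raw partitions:
zfs create -V 128G ssdpool/vm-disk1
zfs create -V 60G  ssdpool/vm-disk2

# The block devices appear under /dev/zvol/dsk/ssdpool/ and can be
# wrapped in a VMDK the same way the raw partition was.
```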
Side note: assuming you *really* just want a single guest to occupy the whole
disk and run as fast as possible... If you want to snapshot your guest, you
should make the whole disk one zpool, and then carve out a zvol that is
significantly smaller than 50% of the pool; 40% or 45% might do the trick. The
zvol immediately reserves all the space it needs, and if there isn't enough
space left over to completely replicate the zvol, you won't be able to create
the snapshot. And if your pool ever gets over 90% used, your performance will
degrade, so a 40% zvol is what I would recommend.
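The 40% arithmetic can be done in the shell before creating the zvol (a
sketch; the 200 GiB pool size and the pool/zvol names are made-up examples):

```shell
# Size a zvol at 40% of the pool, leaving headroom for a full
# snapshot rewrite plus the ~10% free space ZFS wants.
pool_bytes=$((200 * 1024 * 1024 * 1024))   # assumed 200 GiB pool
zvol_bytes=$((pool_bytes * 40 / 100))      # 40% of the pool
echo "$zvol_bytes"                         # 85899345920

# Then, for example:
#   zfs create -V ${zvol_bytes} ssdpool/vm-disk
```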
Back to the topic:
Given that you're on an SSD, there is no faster nonvolatile storage you can
use as a ZIL log device, so you should leave the default ZIL inside the pool.
Don't try adding a separate slice or anything else as a log device. But as you
said, sync writes will hit the disk twice. My guess is that it's a good idea
for you to tune ZFS to immediately flush transactions whenever there's a sync
write. I forget exactly how this is done - there's some tunable that says any
sync write over a certain size should be flushed immediately...
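The tunable being half-remembered here is most likely `zfs_immediate_write_sz`
(an assumption - verify against your release's Evil Tuning Guide): sync writes
larger than this threshold are written directly to their final on-disk
location instead of being copied through the in-pool ZIL first. On
Solaris/OpenSolaris it can be set in /etc/system, e.g.:

```
* /etc/system fragment (sketch; value is in bytes, default is 32K).
* Lowering it makes more sync writes bypass the ZIL copy:
set zfs:zfs_immediate_write_sz = 0x1000
```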
zfs-discuss mailing list