On 2013-01-22 23:03, Sašo Kiselkov wrote:
On 01/22/2013 10:45 PM, Jim Klimov wrote:
On 2013-01-22 14:29, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.


Or is it also supported to disable COW for such datasets, so that
the preallocated swap/dump zvols might remain contiguous on the
faster tracks of the drive (i.e. like a dedicated partition, but
with benefits of ZFS checksums and maybe compression)?

I highly doubt it, as it breaks one of the fundamental design principles
behind ZFS (always maintain transactional consistency). Also,
contiguousness and compression are fundamentally at odds (contiguousness
requires each block to remain the same length regardless of contents,
compression varies block length depending on the entropy of the contents).

Well, dump and swap devices are kind of special: they need verifiable
storage (i.e. detectable as free of bit errors) but not really
consistency in the sense of sudden-power-off transaction protection.
Both have a lifetime of a single system uptime - like the L2ARC,
for example - and are simply reused anew afterwards, whether after
a clean reboot, a power surge, or a kernel panic.

So while the metadata used to address the swap ZVOL contents may and
should be subject to the usual ZFS transactions and COW, and jump
around the disk as blocks are rewritten, the ZVOL userdata itself
might as well occupy fixed positions on the disk, I think, overwriting
the older contents in place. With mirroring likely in place as well
as checksums, there are ways other than COW to ensure that the swap
(at least some component thereof) contains what it should, even with
intermittent errors on some of the component devices.

Likewise, the swap/dump breed of zvols shouldn't really have snapshots,
especially not automatic ones (and the installer should take care
of this at least for the two zvols it creates) ;)
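For illustration, the installer's job could be sketched with commands
like these (an untested sketch - pool and dataset names are examples,
and the auto-snapshot property assumes the Time Slider /
zfs-auto-snapshot SMF services):

```shell
# Swap: a fixed volsize gives the preallocation guarantee (a
# refreservation is set automatically for non-sparse zvols); a block
# size matched to the page size avoids read-modify-write on swap I/O.
zfs create -V 4G -b 4k rpool/swap

# Dump: configured through dumpadm rather than used directly.
zfs create -V 2G rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump

# Opt both out of the automatic snapshot service (property consumed
# by Time Slider / zfs-auto-snapshot):
zfs set com.sun:auto-snapshot=false rpool/swap
zfs set com.sun:auto-snapshot=false rpool/dump
```

Setting the property per-zvol rather than disabling the service keeps
automatic snapshots working for the rest of the pool.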

Compression for swap is an interesting matter... for example, how
should it be accounted for? As dynamic expansion and/or shrinking of
the available swap space (or just of the space needed to store it)?

If the latter, and we still intend to preallocate and guarantee
that the swap has its administratively predefined amount of
gigabytes, compressed blocks can be aligned at the same starting
locations as if they were not compressed. In effect this would
just decrease the bandwidth requirements, maybe.
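That is roughly how the existing accounting already behaves, if I
understand it right - a sketch (names hypothetical, compression=zle
being the cheap choice often suggested for swap since it collapses
zero-filled pages):

```shell
# With compression enabled on a swap zvol, the refreservation still
# reserves roughly the full administratively defined volsize, so the
# guarantee holds; compression can only reduce the space actually
# written and the bandwidth used, not the reservation.
zfs create -V 4G -o compression=zle rpool/swap
zfs get volsize,refreservation,used,compressratio rpool/swap
```

The refreservation should stay pinned near the 4G volsize even if the
swapped-out pages compress well, matching the "preallocate and
guarantee" intent above.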

For dump this might be just one bulky compressed write from the start
up to however much it needs, within the preallocated psize limits...

//Jim

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
