On 7/13/06, Darren Reed <[EMAIL PROTECTED]> wrote:
> When ZFS compression is enabled, although the man page doesn't
> explicitly say so, my guess is that only new data that gets
> written out is compressed - in keeping with the COW policy.
>
> [ ... ]
>
> Hmm, well, I suppose the same problem might apply to encrypting
> data too... so maybe what I need is a zfs command that will walk
> the filesystem's data tree, read in the data, and write it back
> out according to the current data policy.


It seems this could be made a function of 'zpool scrub' -- instead of
simply verifying the data, it could rewrite the data as it goes.
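
In the meantime, the only way I know of to get already-written data
compressed is to rewrite it yourself.  A rough sketch (the dataset
and file names are made up):

    # Only blocks written from this point on are compressed:
    zfs set compression=on tank/data

    # Rewriting a file pushes its blocks back through the write
    # path, so the new copies get compressed:
    cp -p /tank/data/bigfile /tank/data/bigfile.tmp
    mv /tank/data/bigfile.tmp /tank/data/bigfile

    # See how well the rewritten data compressed:
    zfs get compressratio tank/data

That isn't safe for files that are actively being written, and any
snapshots keep holding the old uncompressed blocks, which is exactly
why having scrub do it in place would be nicer.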

This would come in handy in other situations as well.  For example,
as things stand today, if you add disks to a pool that contains
mostly static data, you don't get the benefit of the additional
spindles when reading the old data.  Rewriting the data would give
you that benefit, and it would also keep the new disks from becoming
the hot spot for all new writes (assuming the old disks were very
full).
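
To make that concrete, here's roughly what I mean (pool and device
names are made up):

    # Add a second mirror's worth of spindles to an existing pool:
    zpool add tank mirror c2t0d0 c2t1d0

    # The old blocks stay where they are, and new writes favor the
    # new, emptier vdev.  The imbalance shows up per-vdev in:
    zpool iostat -v tank

A rewrite pass would redistribute the old blocks across both mirrors.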

Theoretically this could also be useful in a live data migration
situation, where you have both new and old storage connected to a
server.  But this assumes there would be some way to tell ZFS to treat
a subset of disks as read-only.
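
Short of that, the closest thing today is a copy-and-cutover with
send/receive rather than an in-place rewrite (pool and dataset names
are made up):

    zfs snapshot tank/fs@migrate
    zfs send tank/fs@migrate | zfs receive newpool/fs

followed by an incremental send ('zfs send -i') at cutover time to
pick up whatever changed in the meantime.  But that needs an outage
window for the switch, which is what the read-only trick would avoid.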

Chad Mynhier
http://cmynhier.blogspot.com/
