On 2012-Nov-19 21:10:56 +0100, Jim Klimov <jimkli...@cos.ru> wrote:
>On 2012-11-19 20:28, Peter Jeremy wrote:
>> Yep - that's the fallback solution.  With 1874 snapshots spread over 54
>> filesystems (including a couple of clones), that's a major undertaking.
>> (And it loses timestamp information).
>Well, as long as you have and know the base snapshots for the clones,
>you can recreate them at the same branching point on the new copy too.

Yes, it's just painful.
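For the record, here's a dry-run sketch of what recreating one clone at the same branch point looks like. All pool, dataset, and snapshot names below are hypothetical; the function only prints the commands, so pipe its output to sh if you actually want to run them:

```shell
# Hypothetical names: tank (old pool), newtank (new pool), data (origin
# dataset), data-clone (its clone), @branch (the clone's origin snapshot).
# plan_clone_copy prints the commands instead of executing them.
plan_clone_copy() {
  # Replicate the origin dataset up to and including the branch-point
  # snapshot; -R carries snapshots and properties along.
  echo "zfs send -R tank/data@branch | zfs recv -d newtank"
  # An incremental send from the origin snapshot to a snapshot of the
  # clone makes the receiving side create the clone at that same
  # branch point.
  echo "zfs send -i tank/data@branch tank/data-clone@now | zfs recv newtank/data-clone"
}
plan_clone_copy
```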

>Also, while you are at it, you can use different settings on the new
>pool, based on your achieved knowledge of your data

This pool has a rebuild in its future anyway so I have this planned.
> - perhaps using
>better compression (IMHO stale old data that became mostly read-only
>is a good candidate for gzip-9), setting proper block sizes for files
>of databases and disk images, maybe setting better checksums, and if
>your RAM vastness and data similarity permit - perhaps employing dedup

After reading the horror stories and studying how dedup works,
this is definitely not on the list.

>(run "zdb -S" on source pool to simulate dedup and see if you get any
>better than 3x savings - then it may become worthwhile).

Not without lots more RAM - and that would mean a whole new box.

>Perhaps, if the "zfs diff" does perform reasonably for you, you can
>feed its output as the list of objects to replicate in rsync's input
>and save many cycles this way.

The starting point of this saga was that "zfs diff" failed, so that
isn't an option.
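For anyone whose "zfs diff" does work, the plumbing Jim suggests is simple enough. A sketch (dataset names are hypothetical, and this only handles created/modified entries - renames and deletions need extra care):

```shell
# Turn tab-separated "zfs diff -H" output into a path list that rsync
# can consume via --files-from.  Column 1 is the change type (+, M, -,
# R); column 2 is the path.  Only creations and modifications are kept.
diff_to_filelist() {
  awk -F'\t' '$1 == "+" || $1 == "M" { print $2 }'
}

# Demo on canned zfs diff -H style output:
printf 'M\t/tank/data/db1\n+\t/tank/data/new.img\n-\t/tank/data/old\n' \
  | diff_to_filelist
```

Real usage would be along the lines of `zfs diff -H tank/data@old tank/data@new | diff_to_filelist | rsync -a --files-from=- / remote:/backup/` (after stripping the mountpoint prefix to suit rsync's source directory).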

On 2012-Nov-19 21:24:19 +0100, Jim Klimov <jimkli...@cos.ru> wrote:
>fatally difficult scripting (I don't know if it is possible to fetch
>the older attribute values from snapshots - which were in force at
>that past moment of time; if somebody knows anything on this - plz

The best way to identify past attributes is probably to parse
"zfs history", though that won't help for "received" attributes.
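Something along these lines would do it - "zfs history" prints one timestamped command per line, and property changes show up as "zfs set prop=value dataset". Dataset and property names below are hypothetical:

```shell
# Find the most recent "zfs set" of a given property on a given dataset
# in pool command history.  Locally-set properties only; "received"
# properties never appear as a "zfs set" line.
last_set() {
  prop=$1 ds=$2
  grep -E "zfs set ${prop}=[^ ]+ ${ds}\$" | tail -n 1
}

# Demo on canned history lines (real input: zfs history tank | ...):
printf '%s\n' \
  '2012-11-01.10:00:00 zfs set compression=lzjb tank/data' \
  '2012-11-19.20:15:01 zfs set compression=gzip-9 tank/data' \
  | last_set compression tank/data
```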

Peter Jeremy


zfs-discuss mailing list
