Miles Nordin wrote:
>>>>>> "mm" == Michael McKnight <michael_mcknigh...@yahoo.com> writes:
>>>>>>             
>
>     mm> #split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
>     mm> #cat mypictures.zfssnap.split.a[a-g] > testjoin
>
>     mm> But when I compare the checksum of the original snapshot to
>     mm> that of the rejoined snapshot, I get a different result:
>
> sounds fine.  I'm not sure why it's failing.
>
>     mm> And when I try to restore the filesystem, I get the following
>     mm> failure: #zfs recv pool_01/test < ./testjoin cannot receive
>     mm> new filesystem stream: invalid stream (checksum mismatch)
>
> however, aside from this problem you're immediately having, I think
> you should never archive the output of 'zfs send'.  I think the
> current warning on the wiki is not sufficiently drastic, but when I
> asked for an account to update the wiki I got no answer.  Here are the
> problems, again, with archiving 'zfs send' output:
>
>  * no way to test the stream's integrity without receiving it.
>    (meaning, to test a stream, you need enough space to store the
>    stream being tested, plus that much space again.  not practical.)
>    A test could possibly be hacked up, but because the whole ZFS
>    software stack is involved in receiving, and is full of assertions
>    itself, any test short of actual extraction wouldn't be a thorough
>    test, so this is unlikely to change soon.
>
>  * stream format is not guaranteed to be forward compatible with new
>    kernels.  and versioning may be pickier than zfs/zpool versions.
>   

Backward compatibility is maintained: a newer kernel can receive a
stream generated by an older one.

>  * stream is expanded _by the kernel_, so even if tar had a
>    forward-compatibility problem, which it won't, you could
>    hypothetically work around it by getting an old 'tar'.  For 'zfs
>    send' streams you have to get an entire old kernel, and boot it on
>    modern hardware, to get at your old stream.
>   

An enterprising community member could easily put together a
utility to do a verification.  All of the necessary code is readily
available.
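For what it's worth, a verification pass can be done by hand today by
receiving the stream into a throwaway pool backed by a file vdev.  A
sketch follows; the pool, file, and stream names are hypothetical, it
needs root and roughly as much free space as the data occupies, and on
hosts without ZFS it only echoes the commands into verify.log so the
procedure can still be read.

```shell
#!/bin/sh
# Sketch: verify a 'zfs send' stream by actually receiving it into a
# scratch pool built on a file vdev, then destroying the pool.
# STREAM and the scratch paths are hypothetical names for this example.
set -e
STREAM=${1:-./mypictures.zfssnap}

# Dry-run on hosts without ZFS so the recipe is still inspectable.
if command -v zpool >/dev/null 2>&1; then run() { "$@"; }
else run() { echo "would run: $*"; }; fi

{
  run mkfile 10g /var/tmp/scratch.img
  run zpool create scratch /var/tmp/scratch.img
  # The real test happens here: a corrupt stream fails the receive
  # with "invalid stream (checksum mismatch)".
  run sh -c "zfs recv scratch/verify < $STREAM"
  run zpool destroy scratch
  run rm -f /var/tmp/scratch.img
} > verify.log
echo "done; see verify.log"
```

This is exactly the "enough space to store the stream, plus that much
again" cost noted above, which is why a lighter-weight checker would
still be welcome.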

>  * supposed to be endian-independent, but isn't.
>   

CR 6764193 was fixed in b105:
http://bugs.opensolaris.org/view_bug.do?bug_id=6764193
Is there another?

>  * stream is ``protected'' from corruption in the following way: if a
>    single bit is flipped anywhere in the stream, the entire stream and
>    all incrementals descended from it become worthless.  It is
>    EXTREMELY corruption-sensitive.  'tar' and zpool images both
>    detect, report, and work around flipped bits.  The 'zfs send' idea is
>    different: if there's corruption, the designers assume you can just
>    restart the 'zfs send | zfs recv' until you get a clean go---what
>    you most need is ability to atomically roll back the failed recv,
>    which you do get.  You are not supposed to be archiving it!
>   

This is not completely accurate.  Snapshots which have already been
received are committed; a corrupted stream only loses the receive in
progress, not the snapshots that completed before it.
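If one does archive a stream anyway, keeping a checksum alongside it at
least allows corruption to be detected before 'zfs recv' is attempted,
and would have caught the split/rejoin mismatch in the original post.  A
sketch, with hypothetical file names; 'dd' stands in for 'zfs send'
output, and POSIX 'cksum' is used for portability (on Solaris,
'digest -a sha256' is a stronger choice):

```shell
#!/bin/sh
# Sketch: record a checksum of the stream at archive time, split it for
# the media, then rejoin and verify BEFORE feeding it to 'zfs recv'.
set -e

# In real use:  zfs send pool_01/mypictures@snap > mypictures.zfssnap
dd if=/dev/urandom of=mypictures.zfssnap bs=1024 count=512 2>/dev/null

# Checksum of the stream, saved next to it.
cksum mypictures.zfssnap > mypictures.zfssnap.cksum

# Split for media that cannot hold the whole stream (as in the post).
split -b 102400 mypictures.zfssnap mypictures.zfssnap.split.

# Later: rejoin and compare checksums before receiving.
cat mypictures.zfssnap.split.a? > testjoin
set -- $(cksum mypictures.zfssnap) ; orig=$1
set -- $(cksum testjoin)           ; new=$1
if [ "$orig" = "$new" ]; then
    echo "stream intact"
else
    echo "stream corrupt -- do not recv" >&2
    exit 1
fi
```

Detection is all this buys; unlike tar or a pool, a flipped bit in the
stream still cannot be repaired or worked around.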

>  * unresolved bugs.  ``poisonous streams'' causing kernel panics when
>    you receive them, 
> http://www.opensolaris.org/jive/thread.jspa?threadID=81613&tstart=0
>
> The following things do not have these problems:
>
>  * ZFS filesystems inside file vdev's (except maybe the endian
>    problem.  and also the needs-whole-kernel problem, but mitigated by
>    better forward-compatibility guarantees.)
>   

Indeed, but perhaps you'll find the grace to file an appropriate RFE?
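For readers who want to try the file-vdev alternative now, a sketch of
the procedure follows.  The image path, pool name, and sizes are made
up for the example; it needs root on a ZFS-capable host, and on hosts
without ZFS it only echoes the commands into archive.log.

```shell
#!/bin/sh
# Sketch: a pool inside an ordinary file, with compression on, as an
# archive target.  Unlike a raw 'zfs send' stream, the pool detects and
# reports flipped bits on every read (and on scrub).
set -e
IMG=/backup/archive.img     # hypothetical path for this example

if command -v zpool >/dev/null 2>&1; then run() { "$@"; }
else run() { echo "would run: $*"; }; fi

{
  run mkfile 50g "$IMG"
  run zpool create archive "$IMG"
  run zfs set compression=on archive
  # Copy data in with rsync (or 'zfs send | zfs recv' into the pool).
  run rsync -a /pool_01/mypictures/ /archive/mypictures/
  run zpool scrub archive     # checks every block's checksum
  run zpool export archive    # the file is now a portable archive
} > archive.log
echo "done; see archive.log"
```

The exported image file can then be moved to backup media and imported
again later with 'zpool import -d'.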

>  * tar files
>
> In both alternatives you probably shouldn't use gzip on the resulting
> file.  If you must gzip, it would be better to make a bunch of tar.gz
> files, ex., one per user, and tar the result.  Maybe I'm missing some
> magic flag, but I've not gotten gzip to be too bitflip-resilient.
>
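To make the per-user tar.gz suggestion above concrete, here is a sketch
with made-up directory names.  The point of the layout is that a flipped
bit in the outer tar ruins at most one member, instead of everything
after the first bad bit in a single big .tar.gz:

```shell
#!/bin/sh
# Sketch: one tar.gz per user, bundled into an uncompressed outer tar.
set -e
rm -rf home users.tar alice.tar.gz bob.tar.gz
mkdir -p home/alice home/bob
echo "alice data" > home/alice/file.txt
echo "bob data"   > home/bob/file.txt

# Compress each user's tree separately.
for u in home/*; do
    tar cf - "$u" | gzip > "$(basename "$u").tar.gz"
done

# Bundle the per-user archives into one UNcompressed outer tar; 'tar t'
# can still list the surviving members even if one is damaged.
tar cf users.tar alice.tar.gz bob.tar.gz
tar tf users.tar
```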
> The wiki cop-out is a nebulous ``enterprise backup Solution''.
>   

Perhaps it would satisfy you to have the market's Enterprise Backup
Solutions enumerated?  Solaris itself does not include such software,
at least by my definition of Solaris.  The wiki section "Using ZFS
With Enterprise Backup Solutions" does in fact enumerate them, and I
see no benefit in repeating that enumeration here.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Using_ZFS_With_Enterprise_Backup_Solutions

> Short of that you might make a zpool in a file with zfs compression
> turned on and rsync or cpio or zfs send | zfs recv the data into it.
>
> Or just use gtar like in the old days.  With some care you may even be
> able to convince tar to write directly to the medium.  And when you're
> done you can do a 'tar t' directly from medium also, to check it.  I'm
> not sure what to do about incrementals.  There is a sort of halfass
> incremental feature in gtar, but not like what ZFS gives.
>   
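The gtar approach above can be sketched as follows.  A plain file
stands in for the tape device here, and the file names are made up;
GNU tar's --listed-incremental is the "halfass incremental feature"
mentioned:

```shell
#!/bin/sh
# Sketch: write tar directly to the medium, verify with 'tar t' read
# back from the medium, then take an incremental (GNU tar).
set -e
rm -rf data data.snar medium.tar medium.1.tar
MEDIUM=./medium.tar         # real use: MEDIUM=/dev/rmt/0n (tape device)
mkdir -p data && echo hello > data/a.txt

# Level-0 dump; data.snar records state for later incrementals.
tar --listed-incremental=data.snar -cf "$MEDIUM" data

# Verify directly from the medium: read errors or truncation show here.
tar tf "$MEDIUM" >/dev/null && echo "archive readable"

# Later: a level-1 dump picks up only what changed since the snapshot.
echo world > data/b.txt
tar --listed-incremental=data.snar -cf medium.1.tar data
```

Restoring a point in time means replaying the level-0 archive and each
incremental in order, which is clumsier than ZFS snapshots but keeps
every archive independently listable and testable.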

I suggest you consider an Enterprise Backup Solution.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
