> You can validate a stream stored as a file at any
> time using the "zfs receive -n" option. 

Interesting.  Maybe it's just a documentation issue, but the man page doesn't 
make it clear that this command verifies much more than the names in the 
stream, and suggests that the rest of the data could just be skipped over.

If indeed this command does thoroughly process and validate the stream, without 
actually writing anything to disk, that would be very useful and should be 
advertised clearly.
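For concreteness, the dry-run check I'm talking about would look something like this (the pool, snapshot, and file names here are made up for illustration; whether -n actually checksums the whole stream is exactly the open question above):

```sh
# Save a send stream to a file (placeholder names throughout).
zfs send tank/home@2009-06-01 > /backup/home-20090601.zfs

# Dry-run receive: -n writes nothing to the pool,
# -v prints what would have been received.
zfs receive -n -v tank/restored < /backup/home-20090601.zfs
```

If that second command really does walk and validate the entire stream rather than just reading the headers, it's a one-line verify step.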

> Personally, I prefer to use -n and -u,
> but -u is a relatively new option.

I don't get how they combine, from the descriptions.  It seems to me that with 
-n there's no filesystem being created for -u to then not mount.  Again, maybe 
this is the result of misleading descriptions.
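For what it's worth, here's how I read the two flags separately, from the man page (same placeholder names as above; I may well be missing how they interact):

```sh
# -u alone: receive the stream for real, but leave the new
# filesystem unmounted afterward.
zfs receive -u tank/restored < /backup/home-20090601.zfs

# -n alone: dry run; parse the stream but create nothing.
zfs receive -n tank/restored < /backup/home-20090601.zfs
```

With -n creating nothing, -u would seem to have nothing to act on, which is why the combination puzzles me.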

> Therefore, the procedures we've used for decades
> still works:
>       1. make backup
>       2. verify backup
>       3. breathe easier

That's what I want, of course.  The best/only way I have found is to store the 
backup recv'd in a pool.  This gives me:
 * validation of correct transfer, which I haven't found any other way to 
get so far.
 * version upgrade compatibility guarantees.  The zfs on-disk format is the 
only format I know of for which this is presently true and which also preserves 
properties, metadata, etc.  I actually like this: one well-tested historical 
compatibility path is arguably better than maintaining multiple formats, each 
with its own compatibility quirks.
 * redundancy, compression, and other zfs goodness for backup media
 * the ability to manage backup cycles and fit the space used to the size of 
the destination, thus detecting problems before the time-consuming step of 
writing out media.
 * the ability to browse and explore content, or restore individual files if 
needed, though this is of less immediate concern (that's what snapshots are 
for, at least in the common case)
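Concretely, the cycle looks roughly like this (pool and snapshot names are illustrative; the receive itself validates the stream's checksums as it writes):

```sh
# Take the new snapshot and send it incrementally into a
# dedicated backup pool; a failed/corrupt stream aborts the receive.
zfs snapshot tank/home@2009-06-01
zfs send -i tank/home@2009-05-01 tank/home@2009-06-01 | \
    zfs receive -d backpool

# Optionally scrub the backup pool to re-verify everything on disk.
zpool scrub backpool
```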

However, I do get the attraction of storing backups as files.  I just use a 
different file format:  

I have taken to making backup pools out of files the size of whatever removable 
media I plan on storing the backup on.  When the backup pool is ready, I can 
export it, and gpg the files as they're written out as an archive copy of the 
backup pool.  Then I reimport the pool and keep sending backups to it.   This 
is for home, and this scheme lets me separate the "making a second copy" from 
the "making an offsite archive" parts of the cycle, to suit my available time.
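In case it helps anyone, the file-backed-pool part looks roughly like this (paths and sizes are made up; I'm assuming DVD-sized vdev files here):

```sh
# Create vdev files sized to the removable media (~4.3 GB for DVD).
mkfile 4300m /backup/vdev0 /backup/vdev1 /backup/vdev2

# Build a pool across the files; send backups into it over time.
zpool create backpool /backup/vdev0 /backup/vdev1 /backup/vdev2

# ... zfs send | zfs receive -d backpool, as often as needed ...

# When it's time to cut an offsite copy: export, encrypt, burn.
zpool export backpool
for f in /backup/vdev*; do gpg -o "$f.gpg" -e "$f"; done

# Then bring the pool back for the next cycle.
zpool import -d /backup backpool
```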

*Then* I breathe easier.  :-)

I got burnt (thankfully only in testing) by a previous attempt to use mirrors 
and resilvering with such files: they're essentially useless once detached.  The 
downside of the current scheme is the need to completely re-write the offsite 
copies (no smart resilver, though that's irrelevant for DVD or tape), and the 
need to read all the files back in before restoring.  I only expect to need 
that for a full post-disaster rebuild, so no biggie there.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
