Scenario: 4.5/i386, dumping filesystems as follows (via daily.local):
level 0 dump on Monday, level 1 dump on Tuesday, etc., level 6 dump on Saturday.
Then level 0 again on Monday, erasing the old level >0 dumps.
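(For concreteness, the rotation could be sketched like this in daily.local;
the weekday-to-level mapping follows the description above, while the
dump_level helper, target paths, and device name are purely illustrative,
not my actual script:)

```shell
#!/bin/sh
# Illustrative sketch of a daily.local dump rotation:
# map the weekday name to a dump level (Mon=0 ... Sat=6, per the
# schedule described above; no dump on Sunday is assumed here).
dump_level() {
  case "$1" in
    Mon) echo 0 ;;
    Tue) echo 1 ;;
    Wed) echo 2 ;;
    Thu) echo 3 ;;
    Fri) echo 4 ;;
    Sat) echo 6 ;;
    *)   echo "" ;;   # Sunday: no dump
  esac
}

# daily.local would then do something like (paths/device hypothetical):
# level=$(dump_level "$(date +%a)")
# [ -n "$level" ] && dump -"$level"au -f /backup/sd0f.dump /dev/rsd0f
```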
Now, I changed one of the dumped filesystems in the following way:
> --- /var/backups/disklabel.sd0.current Mon May 25 08:14:43 2009
> +++ /var/backups/disklabel.sd0 Wed Nov 25 08:10:09 2009
> @@ -25,5 +25,4 @@
> c: 625142448 0 unused
> d: 20980890 1060290 4.2BSD 2048 16384 1
> e: 41945715 22041180 4.2BSD 2048 16384 1
> - f: 251674290 63986895 4.2BSD 2048 16384 1
> - g: 309476160 315661185 4.2BSD 2048 16384 1
> + f: 561150450 63986895 4.2BSD 2048 16384 1
That is, what used to be two separate filesystems (sd0f and sd0g)
is now sd0f (one filesystem the size of the sum). I restored the
original content of sd0f on the (now bigger) sd0f partition from
a level 0 dump. (sd0g was moved elsewhere, and is sd1a now.)
Now, when dumping /dev/sd0f during the next daily dump,
which happens to be a level 2 dump, the _whole_ filesystem
seems to get dumped (as if it were level 0).
I just want to make sure this is to be expected: dump simply dumps
everything that has changed since the last lower-level dump, which in
this case is everything, _because_ the whole filesystem was re-created.
Right? I don't suppose "dump ; newfs ; restore" preserves inode
numbers, for instance, so every single inode has 'changed', right?
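(My understanding of the selection logic, sketched below; this is an
illustrative model, not dump's actual code, and include_in_dump is a
hypothetical helper:)

```shell
#!/bin/sh
# Illustrative model of dump's incremental selection: an inode is
# included when its ctime/mtime is newer than the timestamp of the
# last lower-level dump recorded in /etc/dumpdates.
include_in_dump() {
  # args: inode_ctime  last_lower_level_dump_time  (both epoch seconds)
  [ "$1" -gt "$2" ]
}

# After "dump | restore" onto a freshly newfs'd partition, every inode
# is newly created, so every ctime postdates the old level-0 timestamp
# and the entire filesystem is selected, whatever level is requested.
```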
Is there something I can do (edit /etc/dumpdates?) to be able to
"dump | restore" without 'breaking' the incremental dump cycle, or
should I just schedule any future "dump | restore" for my level 0
dump day?
Thanks for your time
Jan