Les Mikesell writes:

> On Tue, 2006-11-14 at 10:49 +0100, Klaas Vantournhout wrote:
> > Dear users and developers of the mighty backup program,
> > 
> > I was wondering if it is possible to put a certain full backup or
> > incremental in the attic.  That is, can we save a certain backup
> > permanently, disregarding changes in the config file and so on?
> 
> I'll second this as a feature request.  It could be implemented
> by duplicating the hardlinks of a backup tree into a different
> area, to be kept until explicitly deleted and browsed separately,
> so the copy could remain even if the host entry and pc/* directory
> were removed.  This would be very handy when decommissioning a box,
> or for holding a copy made before a major update or change until
> you are sure it is no longer needed.

If you just rename the directory and rename the host (eg: HOST_old)
then the right thing should happen.
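
For illustration, the rename amounts to something like this (a
throwaway sketch using a temp directory; real use would operate on
BackupPC's actual TOPDIR and also rename the host in the hosts file):

```shell
# Sketch only: a temp dir stands in for BackupPC's real TOPDIR, and
# "oldbox" is a made-up host name.
TOPDIR=$(mktemp -d)
mkdir -p "$TOPDIR/pc/oldbox/0"

# Rename the per-host backup tree to the retired name; BackupPC will
# treat it as belonging to host "oldbox_old" once the hosts file has
# a matching entry (with no backups scheduled for it).
mv "$TOPDIR/pc/oldbox" "$TOPDIR/pc/oldbox_old"
ls "$TOPDIR/pc"
```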

> Meanwhile the best approach is probably to make an archive copy
> as a tar image either with the "archive host" support or
> BackupPC_tarCreate.  These have the advantage of being able to
> survive the backuppc installation since all you need is tar to
> restore from them - and the disadvantage of not pooling space
> with other copies of the same files.

For 3.0.0 I wrote a script bin/BackupPC_tarPCCopy that, given one or
more pc paths (eg: TOPDIR/pc/HOST or TOPDIR/pc/HOST/nnn), creates
a tar archive with all the hardlinks pointing to ../cpool/....
Any files that are not hardlinked (eg: backups, LOG, etc.) are
included verbatim.

My plan was to provide this as a copy mechanism:

 - copy cpool using any technique (like cp or rsync) that doesn't
   need to preserve hardlinks

 - use bin/BackupPC_tarPCCopy to copy the PC directories, and
   the tar restore will re-establish the hardlinks into the
   existing cpool.
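
The key point is that restoring hardlink entries re-links against
whatever cpool already sits next to the extraction point.  Here is a
self-contained demonstration using plain tar (all paths are made up,
and plain tar stands in for BackupPC_tarPCCopy, which writes the
hardlink entries itself):

```shell
demo=$(mktemp -d) && cd "$demo"
mkdir -p cpool pc/host
echo data > cpool/abc123
ln cpool/abc123 pc/host/f1        # a "pooled" file: hardlink into cpool

# Archive both paths so tar records pc/host/f1 as a hard link to
# cpool/abc123 rather than as a second copy of the data:
tar cf pc.tar cpool/abc123 pc

# "Restore" the pc tree into a fresh area that already has its own
# copy of cpool; tar re-links f1 against that existing cpool:
mkdir restore && cp -r cpool restore/
tar xf pc.tar -C restore pc

# Same inode means pooling was re-established:
[ restore/pc/host/f1 -ef restore/cpool/abc123 ] && echo relinked
```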

You could also use it to clone a pc tree on the same host, eg:

    mkdir newdir
    cd newdir
    ln -s TOPDIR/cpool
    BINDIR/BackupPC_tarPCCopy TOPDIR/pc/HOST | tar xf -

Use tar tvf - instead of tar xf - to see what it is really doing.

Since it knows how files are hashed, it doesn't need to search
all of cpool to find the inodes.  That said, I haven't tested
it very much, and it is still quite slow because many (most?)
files require a significant disk seek.
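
For what it's worth, the direct lookup works because the pool path is
derived from the digest itself.  A rough sketch of the layout (hashing
the whole data with md5sum here purely for illustration; the real
BackupPC digest is computed from the file length plus portions of the
contents, not a straight md5 of everything):

```shell
# Illustration only: derive a cpool-style path from a digest.
digest=$(printf 'some file data' | md5sum | awk '{print $1}')

# Pool files live under cpool/X/Y/Z/ where X, Y, Z are the first
# three hex digits of the digest, so the location can be computed
# directly instead of searching the whole pool:
echo "cpool/${digest:0:1}/${digest:1:1}/${digest:2:1}/$digest"
```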

As it proceeds it caches inodes so that repeated occurrences of
the same file can be matched immediately.  The cache is cleared
for each path given on the command line.  The -c option turns off
caching in case the memory usage is too high.

Craig

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
