On Mon, 26 Feb 2007 06:55:59 +0100, Les Mikesell
<[EMAIL PROTECTED]> wrote:

> Gerhard Brauer wrote:
>
>> * Michael Pellegrino <[EMAIL PROTECTED]> wrote on [25.02.07 15:49]:
>>> Gerhard Brauer wrote:
>>>> I would like to set up an environment where I need an off-site backup.
>>>> My idea is to use two USB HDs in a daily rotation.

Last time I answered Gerhard Brauer's question in a private mail - my
fault, I thought the reply would go to the list..

I had suggested trying rsync because, in my experience, it keeps the
transfer volume low and is fast, so he could automate the backup. The
problem was that rsync does not seem to handle the huge number of
hardlinks in a BackupPC pool as well as it should.
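
For comparison, copying the pool directly while preserving hardlinks
needs rsync's -H option, which makes rsync track every linked inode in
memory - exactly what gets slow on a BackupPC pool. The paths here are
just placeholders:

  rsync -aH --delete /var/lib/backuppc/ /mnt/usb-disk/backuppc/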

So I thought about how to get an rsync-friendly version of BackupPC's
data pool. Here is my - so far theoretical - solution:

There are two simple scripts that do exactly and only these things (a
rough wrapper tying them together follows after this list):
- stop BackupPC
- hardlink every file into a directory separate from BackupPC's tree;
each new file is named after the inode number the original file points
to, and the original filename (including its directory) is saved into a
companion names file, one filename per line
- if the file for that inode already exists, just append another line to
its names file
- wait for rsync to finish, then remove the whole directory again (since
BackupPC reacts to the link count of its files, I do not know exactly
what would happen if the directory were kept)
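
As a rough, untested sketch (the init script path and the USB mount
point are my assumptions, adjust them to your setup):

-------- pool_backup_wrapper.sh (sketch)
#!/bin/bash
# untested sketch: stop BackupPC, build the rsync-friendly copy,
# push it to the USB disk, clean up, restart BackupPC
/etc/init.d/backuppc stop
./make_hardlinked_dir.sh               # fills "destination", see below
rsync -a --delete destination/ /mnt/usb-disk/destination/
rm -rf destination                     # see the link-count caveat above
/etc/init.d/backuppc start
-------- end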

Restoring is simple too:
first copy the files onto the disk (filesystem) where BackupPC is
supposed to find them, but NOT into BackupPC's directory.

Then start the script, which walks through all the inode files and uses
each one's names file to hardlink the file back to every filename stored
in it.
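
In commands, the restore could look like this (the mount point and the
target filesystem are my assumptions):

  rsync -a /mnt/usb-disk/destination/ /mnt/bigdisk/destination/
  cd /mnt/bigdisk
  ./restore_hardlinked_dir.sh    # recreates the tree under ./recovered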

The small shell scripts at the end of this mail show what I have in
mind. Extensive testing is needed before using them on real data ;) no
warranty of any kind! One pitfall I ran into: if grep treats the
filename as a regular expression, filenames containing characters like
"[" break the match, so the scripts must compare fixed strings (grep -F
-x). And since BackupPC mangles filenames in a special way, these
scripts may work - or not!
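
A quick demonstration of the "[" pitfall - with a regex the pattern
silently matches nothing, with a fixed-string whole-line match it works:

  $ echo 'f[backup]' | grep -- '^f[backup]$'   # no output: [..] is a character class
  $ echo 'f[backup]' | grep -Fx -- 'f[backup]'
  f[backup]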

They are more of a suggestion to be included in BackupPC's engine, which
could then provide a backup mechanism for itself: create the
rsync-friendly directory and start rsync on its own, so that backing up
the pool becomes just another job to define like any other BackupPC job.

-------- make_hardlinked_dir.sh
#!/bin/bash

# Collapse a heavily hardlinked tree into one data file per inode
# (i-<inode>.) plus a names file (i-<inode>.names) listing every path
# that pointed at that inode, one per line, so rsync has far fewer
# hardlinks to deal with. Filenames containing newlines are not
# supported, and $destination must be on the same filesystem as
# $source (hardlinks cannot cross filesystems).

source=source
destination=destination

mkdir -p "$destination"

# GNU find prints "<inode> <path>" for every regular file
find "$source" -type f -printf '%i %p\n' | while IFS= read -r line
  do
        inode="${line%% *}"     # everything before the first space
        filename="${line#* }"   # everything after the first space

        if [ -f "${destination}/i-${inode}." ]
          then
                # inode already exported: just record the extra name;
                # -F -x compares whole lines as fixed strings, so
                # characters like "[" in filenames are harmless
                grep -qFx -- "${filename}" "${destination}/i-${inode}.names" || \
                  echo "${filename}" >> "${destination}/i-${inode}.names"
          else
                # first occurrence of this inode: hardlink the data
                # file and start its names file
                ln "${filename}" "${destination}/i-${inode}."
                echo "${filename}" > "${destination}/i-${inode}.names"
        fi
done
-------- end
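
With the defaults above, the result should look roughly like this (the
inode number is made up for illustration):

  destination/i-123456.        <- the data, hardlinked to one original path
  destination/i-123456.names   <- all original paths, one per line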

-------- restore_hardlinked_dir.sh
#!/bin/bash

# Rebuild the original tree: for every i-<inode>. data file, hardlink
# it back to each path listed in its i-<inode>.names file.

origin=destination
new=recovered

for i in "$origin"/i-*.
  do
        while IFS= read -r line
          do
                dir="$(dirname "$line")"
                mkdir -p "$new/$dir"
                ln "$i" "$new/$line"
                echo "created from $i: $line"
        done < "${i}names"
done
-------- end
