I was just thinking about syncing servers. What if we synced the pool/cpool
directories and the config files directly, and then, for the rest of the
files under pc/, ran a script on the BackupPC side to discover all the hard
links? That list could then be pushed to the copy, which would recreate the
directories and hard links. Rsync's problem is the sheer file count, and
skipping the pc/ directories reduces it by an order of magnitude.
Any thoughts?
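
Roughly what I have in mind for the link-discovery step, as a purely
hypothetical sketch (the script and the manifest format are made up, nothing
that ships with BackupPC, and it assumes hard links can be detected from
st_dev/st_ino alone):

    #!/usr/bin/env python
    # Walk the pc/ tree and emit a manifest the destination can replay with
    # mkdir + link.  Assumes pool/cpool and the config files have already
    # been synced separately (e.g. with rsync).
    import os, sys

    def emit_manifest(pc_root, out):
        first_seen = {}                  # (st_dev, st_ino) -> first path seen
        for dirpath, dirnames, filenames in os.walk(pc_root):
            out.write("dir\t%s\n" % os.path.relpath(dirpath, pc_root))
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.lstat(path)
                rel = os.path.relpath(path, pc_root)
                key = (st.st_dev, st.st_ino)
                if key in first_seen:
                    # another name for an inode we've already listed
                    out.write("link\t%s\t%s\n" % (rel, first_seen[key]))
                else:
                    first_seen[key] = rel
                    out.write("new\t%s\n" % rel)

    if __name__ == "__main__":
        emit_manifest(sys.argv[1], sys.stdout)

The copy would replay "dir" with mkdir and "link" with link(), and resolve
the "new" entries against its already-synced pool (falling back to a plain
copy for the handful of unpooled files such as the per-host log files).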
On Sun, Aug 30, 2009 at 1:34 PM, Jeffrey J. Kosowsky
<backu...@kosowsky.org> wrote:
> Les Mikesell wrote at about 14:26:47 -0500 on Friday, August 28, 2009:
> > Jim Wilcoxson wrote:
> > > Michael - I have a new Linux/FreeBSD backup program, HashBackup, in
> > > beta that I believe will handle a large BackupPC server. In tests, it
> > > will back up a single directory containing 15M (empty) files/hard links,
> > > with 32000 hard links to each file, and can do the initial and
> > > incremental backups of this directory in about 45 minutes on a 2005 AMD
> > > box with 1GB of memory.
> > >
> > > HashBackup can also send backups offsite to an FTP server, an ssh
> > > account, or Amazon S3. I'd be very interested in feedback if anyone
> > > would like to try it on their BackupPC server.
> > >
> > > The beta site is:
> > >
> > > http://sites.google.com/site/hashbackup
> > >
> > > Of course, you're welcome to contact me via email with questions.
> >
> > What kind of speed would you expect from this on real files? I let it
> > run for about 20 hours and it had only made it halfway through a pool of
> > around 600 gigs (where an image copy of the partition takes a bit over 2
> > hours). Should incrementals be faster if it ever makes it through the
> > first run?
> >
>
> I would be interested in knowing more about how HashBackup works.
>
> Earlier, Holger and I (and perhaps others) had a thread about how to
> use the special structure of the BackupPC pool and pc directories to
> speed up such backups. In particular, we know that every relevant inode
> (other than zero-length files and directory entries) occurs exactly
> once in the pool tree. Similarly, every non-zero-length file in the pc
> directory (other than the log and info files at the top level)
> corresponds to exactly one entry in the pool directory. Also, for
> incrementals, we know that inodes generally don't change, apart from
> the limited case of chain renumbering, which could itself potentially
> be tracked.
>
> If HashBackup doesn't use this special structure to its advantage, then
> I would indeed expect it to be substantially slower than a simple
> low-level filesystem copy. On the other hand, if the structure is used
> to advantage, then a copy could conceivably be done with limited
> overhead, scaling roughly as O(n) or at worst O(n log n).
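
That structure is exactly what I'd want to lean on. To make the O(n) idea
concrete, here is a rough sketch (mine and purely hypothetical, not how
HashBackup actually works): one walk over pool/cpool builds an
inode-to-pool-path table, and a second walk over pc/ then resolves every
non-zero-length file with a single lstat() and a hash lookup, so nothing
ever has to read or compare file contents.

    # Hypothetical sketch of a structure-aware copy of a BackupPC store.
    # Pass 1: each pool file's inode appears exactly once under pool/ or
    # cpool/, so an inode -> relative-pool-path map covers all file data.
    # Pass 2: every non-zero-length pc/ file is a hard link to some pool
    # inode, so it resolves with one lstat() and one dict lookup: O(n).
    import os

    def build_pool_index(topdir):
        index = {}
        for pool in ("pool", "cpool"):
            for dirpath, dirnames, filenames in os.walk(os.path.join(topdir, pool)):
                for name in filenames:
                    p = os.path.join(dirpath, name)
                    st = os.lstat(p)
                    index[(st.st_dev, st.st_ino)] = os.path.relpath(p, topdir)
        return index

    def plan_pc_copy(topdir, index):
        ops = []                                   # operations to replay remotely
        for dirpath, dirnames, filenames in os.walk(os.path.join(topdir, "pc")):
            ops.append(("mkdir", os.path.relpath(dirpath, topdir)))
            for name in filenames:
                p = os.path.join(dirpath, name)
                st = os.lstat(p)
                rel = os.path.relpath(p, topdir)
                if st.st_size == 0:
                    ops.append(("touch", rel))     # zero-length files aren't pooled
                elif (st.st_dev, st.st_ino) in index:
                    ops.append(("link", rel, index[(st.st_dev, st.st_ino)]))
                else:
                    ops.append(("copy", rel))      # top-level log/info files, etc.
        return ops

The chain-renumbering case would presumably just show up as a changed link
target on the next incremental pass, which this kind of scan picks up for
free.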