On Fri, 2 Feb 2018, Iturriaga Woelfel, Markus wrote:



>> I used rsync to copy data including hard links:
>> rsync -avxHAX --progress /var/lib/backuppc /copy



> Because of all the hard links, rsync gets incredibly slow and uses a
> huge amount of space in /tmp when trying to copy BackupPC (v.3)

Yes, this is v3.3.0.

> directories. Since it looks like you're doing this on the same system,
> something like a tar pipe may work better for you.

I gave up on rsync because of the hard links - even though it does the right thing and was almost there. /tmp is fine, but the speed is 3 GB per day on the hard-linked files, which is terrible.
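To put a number on it, one can count how many multiply-linked files rsync -H has to track in its internal table (a quick check, assuming GNU find and the stock v3 layout under /var/lib/backuppc):

find /var/lib/backuppc -type f -links +1 | wc -l

With a pool this size that is a huge number of entries, which is presumably why it crawls.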

I tried the "dump | restore" way,

dump -0 -f - /var/lib/backuppc | restore -r -f -

after a few minutes:
restore: cannot write to file /tmp//rstdir1517604355: No space left on
device
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.

It seems restore first writes some temporary files to /tmp, so I used:
dump -0 -f - /var/lib/backuppc | restore -r -T /copy -f -
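(Side note: if I read the restore(8) man page right, those temporary files can also be redirected with the TMPDIR environment variable, so an equivalent - untested - variant would be:

mkdir -p /copy/tmp
dump -0 -f - /var/lib/backuppc | TMPDIR=/copy/tmp restore -r -f -

Either way the temp files just need to land on a filesystem with enough free space.)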

When it gets to
DUMP: dumping (Pass IV) [regular files]

it just sits there for 12 hours with almost nothing copied.
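To at least see whether any bytes are still moving during Pass IV, something like pv (pipe viewer) could be put in the middle of the pipe - assuming pv is installed:

dump -0 -f - /var/lib/backuppc | pv | restore -r -T /copy -f -

pv just passes the data through and prints the current throughput, so a completely stalled restore would show up immediately.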

> tar -C /var/lib/backuppc --one-file-system --acls --xattrs -cf - . | tar -C /copy -xvf -

How does this deal with hardlinks?
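As far as I can tell from the tar documentation, GNU tar notices files with a link count above one while archiving and stores later occurrences as hard-link entries, so the extracting tar should recreate the links instead of duplicating the data. A rough sanity check on the result (not authoritative, just a plausibility test):

find /copy -type f -links +1 | wc -l
du -s /var/lib/backuppc /copy

The target should contain multiply-linked files too, and its disk usage should be in the same ballpark as the source; a copy with broken hard links would be far bigger. (I guess the extracting tar may also need --acls --xattrs to actually restore those attributes, but I have not verified that.)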

I am running short on ideas for how to copy this.

This seems to be yet another reason why I want to move this to RAID1 now...
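If nothing else works, the remaining idea I have is to skip the file level entirely and copy the whole filesystem block by block, which avoids walking the hard links altogether. A rough sketch, assuming the pool sits on its own partition (/dev/sdX1 and /dev/sdY1 are only placeholders for the source and target partitions) and the target is at least as large; BackupPC would have to be stopped and the filesystem unmounted first so the copy is consistent:

umount /var/lib/backuppc
dd if=/dev/sdX1 of=/dev/sdY1 bs=4M status=progress
e2fsck -f /dev/sdY1
resize2fs /dev/sdY1

The e2fsck/resize2fs part obviously only applies if the pool is on ext3/ext4 and the new partition is bigger than the old one.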

> (double check that for sanity). You can achieve something similar with
> the dump/restore commands if t[...]

> I've had some success with this. PS - removing the "verbose" part will
> likely speed up your transfer [...]

> Markus


Thanks anyway

Adam Pribyl
