Reinhold Schoeb wrote:
> I set up a BackupPC server with several clients (1 MacPro, 2 MacBooks,
> 2 Linux PCs and 2 Linux laptops). All clients are connected via Gigabit
> LAN at full speed. My server is an AMD Sempron 2.2 GHz, 1 GB RAM, and
> 4 x 750 GB SATA hard disks combined with LVM, running Debian Lenny.
>
> Everything works perfectly, except that backup speed is very low: for
> all machines between 3 and 10 MB/s. I tried all the options - with and
> without compression, rsync, rsyncd, tar - without any improvement. All
> the basic performance figures are fine: disk speed, LAN speed.
>
> That brings me to my question: the full backup of my MacPro is about
> 1.6 TB. It took 2250 minutes, roughly 1.5 days at 9 MB/s, to make a
> full backup, and I could not switch off the MacPro during that time.
> An incremental backup took about 100 minutes - that was ok. But after
> 7 incremental backups BackupPC is now trying to make another full
> backup, and since the MacPro is only running during the daytime, it
> never finishes.
You didn't say which backup method you used when you timed those
backups. Generally speaking, rsync would be the best/quickest method in
my opinion, since you will transfer the least amount of data.

Basically, like any performance issue, you will need to find the
bottleneck and resolve it, then find the next bottleneck and resolve
it, and so on, until you get the performance you need (not the
performance you want, which is usually too expensive).

Main things to look at:

* Memory consumption on client and server
* Disk IO on client and server
* CPU utilisation on client and server

Some things that can reduce CPU load with SSH+rsync are (see the config
sketch at the end of this mail):

* Turning off encryption, or using a cheaper cipher, for SSH
* Turning off compression in SSH and in the BackupPC storage pool

Helping with IO:

* On the server use RAID10 instead of RAID5/6 (you didn't tell us how
  your 4 x 750 GB drives are configured).
* Check which filesystem you are using on the server, and perhaps add
  some mount flags such as noatime (if appropriate).

Also, look at how many backups are happening at the same time, and
consider reducing this to a single backup, at least while you are
tuning, so you can work out whether each change makes things better or
worse (see the second sketch below).

Of course, also check the BackupPC wiki for more information and detail
on performance tuning.

> What is the right way to deal with that? Is it possible, or does it
> make sense, to avoid additional full backups?

You also haven't told us which version of BackupPC you are using. In
any case, it is important to do regular full backups; how regular
depends on your requirements. I increased the frequency to every 3 days
to improve performance (using BackupPC 2.1.2pl1 and doing remote
backups).

Hope that helps. For more assistance, review the wiki, and then come
back with some more information and more measurements.

Regards,
Adam
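P.S. Here is a rough sketch of the SSH/compression changes mentioned
above. It assumes a BackupPC 3.x style config.pl, an OpenSSH client
that still offers the arcfour cipher, and a trusted LAN where weak (or
no) encryption is acceptable - adjust the cipher and paths to whatever
your setup actually has:

    # config.pl (or the per-host override) - rsync over ssh with a
    # cheap cipher and no ssh-level compression:
    $Conf{RsyncClientCmd} = '$sshPath -q -x -c arcfour -o Compression=no'
                          . ' -l root $host $rsyncPath $argList+';

    # Disable compression in the BackupPC pool while you measure raw
    # throughput (0 = off; turn it back on later if disk space matters):
    $Conf{CompressLevel} = 0;

If you restore over ssh as well, apply the same ssh options to
$Conf{RsyncClientRestoreCmd}. If your ssh no longer offers arcfour,
pick the cheapest cipher listed under Ciphers in man ssh_config.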

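P.P.S. And a similarly rough sketch for the server-side IO and
scheduling knobs. The LVM device name is made up, and the mount point
(/var/lib/backuppc) and ext3 are just the Debian defaults - substitute
your real pool location and filesystem:

    # /etc/fstab - skip atime updates on the pool filesystem:
    /dev/mapper/vg0-backuppc  /var/lib/backuppc  ext3  defaults,noatime  0  2

    # config.pl - one backup at a time while tuning, plus the knobs
    # that control how often fulls and incrementals are scheduled:
    $Conf{MaxBackups} = 1;      # default is 4
    $Conf{FullPeriod} = 6.97;   # days between full backups (default)
    $Conf{IncrPeriod} = 0.97;   # days between incrementals (default)

You can test noatime without a reboot with
"mount -o remount,noatime /var/lib/backuppc".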