On 6/30/2011 2:04 PM, C. Ronoz wrote:
> I ran Bacula and it backed up this ENTIRE client in like 7 minutes. After 40
> minutes, BackupPC has only backed up a measly 300MB... of a total of 1.5GB. I
> am not even sure how much was already in use on the partition after creating
> the new partition.
>
> root@backuppc:~# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda1              19G  1.4G   17G   8% /
> tmpfs                 502M     0  502M   0% /lib/init/rw
> udev                  497M  112K  497M   1% /dev
> tmpfs                 502M     0  502M   0% /dev/shm
> /dev/mapper/vg0-lvol0
>                       197G  512M  195G   1% /var/lib/backuppc
> root@backuppc:~# date
> Thu Jun 30 20:05:49 CEST 2011
> root@backuppc:~# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda1              19G  1.4G   17G   8% /
> tmpfs                 502M     0  502M   0% /lib/init/rw
> udev                  497M  112K  497M   1% /dev
> tmpfs                 502M     0  502M   0% /dev/shm
> /dev/mapper/vg0-lvol0
>                       197G  565M  195G   1% /var/lib/backuppc
> root@backuppc:~# date
> Thu Jun 30 20:24:42 CEST 2011
>
> How can I decrease the load? Can I disable deduplication or compression? The
> load is very high. This backup server (a virtual machine) has a powerful
> processor but only 1GB of memory (which is not fully used). Even running this
> single job is very, very slow. See http://images.codepad.eu/v-ISmSn6.png for
> the high CPU usage.
Running in a VM imposes a lot of overhead, and running LVM on top of a
file-based disk image pretty much guarantees that your disk block writes won't
be aligned with the physical disk, which makes things much slower. Can you at
least give the VM a real partition, if it doesn't have one already? And you
definitely need to be sure you aren't sharing that physical disk with anything
else. More RAM would probably help just by providing more filesystem
buffering, even if you don't see it being used otherwise. You can turn off
compression (see the config.pl sketch at the end of this message), but unless
CPU capacity is the bottleneck it won't help and might make things worse by
causing more physical disk activity.

> Last time this back-up ran successfully for the entire server (1.7GB) it took
> more than 12 hours (after which I manually canceled the back-up). This
> back-up job ran in 7 minutes(!) on Bacula. However, I would prefer to use
> BackupPC in the future and I hope people can help me get decent performance.

BackupPC will never be as fast as other systems, but the main situations where
the difference should be big are when you have a huge number of small files
(enough that the file list transferred at the start pushes the server into
swap), or when copying huge files with differences, where the server has to
uncompress its existing copy and reconstruct the file. After you have
completed two fulls, you may see a speed increase on unchanged files if you
are using rsync's --checksum-seed option (see the second sketch below).

-- 
Les Mikesell
lesmikes...@gmail.com
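If you do want to experiment with turning compression off, it is a single
setting in BackupPC's config.pl. This is just a minimal sketch, assuming a
stock 3.x install with the config at /etc/backuppc/config.pl (your path may
differ):

    # /etc/backuppc/config.pl -- pool compression level.
    # 0 disables compression for new backups; 3 is the usual default
    # when Compress::Zlib is available. Existing compressed pool files
    # stay compressed, so old and new copies of the same file will no
    # longer pool together.
    $Conf{CompressLevel} = 0;

You can also override the setting in that one host's config file if you only
want to test it against this client.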
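And for the checksum caching: assuming you are using the rsync or rsyncd
XferMethod on BackupPC 3.x, the flag goes into the rsync argument lists in
config.pl; 32761 is the fixed seed value that enables the cache. A sketch,
placed after the stock RsyncArgs/RsyncRestoreArgs definitions:

    # Enable rsync checksum caching so that, once a file's checksums
    # have been cached, the server can reuse them on later fulls
    # instead of uncompressing the pool copy every time.
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';

The checksums only get cached once a file has been seen unchanged, which is
why it takes a couple of fulls before the speedup shows up.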