On 7/26/11 12:24 AM, C. Ronoz wrote:
>> this comes up frequently. Usually, the problem is sitting in front of the
>> screen. Have you actually got a problem, or are the numbers just confusing
>> you? Probably the latter, else you would be quoting facts ("my backups are
>> taking ... for ... GB"), not statistics ("BackupPC says it has transferred
>> ... MB/s. Some other program required more bandwidth, so BackupPC's
>> performance must be poor").
> I don't know what to do with your answer to be honest as it seems mostly a
> personal insult from a person who hasn't read or understood something
> correctly. The BackupPC statistics are pretty obvious and I am wondering if
> BackupPC performance of 1.5MB per second is normal, as other alternatives such as
> Amanda, Bacula and rsync easily reach throughput rates of 15-20MB per second.
> But I'd prefer using BackupPC.
The main point is that if BackupPC can complete your backups within your
available time window, you can use it even if it uses less bandwidth.
> Everything is currently fine, except for a poor performance. The highest
> performing back-ups are done with 2.5MB per second and the slowest with 300KB
> per second. The most crowded server has 15.3GB worth of data, the smallest
> one has about 2GB of data.
BackupPC probably won't quite match the throughput of other systems because it
is doing more work: pooling and, possibly, compression. The activity isn't
directly comparable to other systems either, because full runs with rsync do a
block checksum verification and the server side of the rsync operation is
implemented in perl. On the other hand, you can run a few targets concurrently.
>> I'm considering a feature request to always display "GB/s" as unit on the web
>> page (and a number between 1 and 9 as value). Or to make it removable by
>> configuration (with the default being "off"). At the very least, the numbers
>> should be scaled up to correspond to what naive users might compare them to.
>> Exact figures of what is actually happening are only meaningful to people who
>> understand what is actually happening.
> Then your assumption is I have no idea what the statistics mean? MB per
> second seems pretty obvious here.
>
> This was the initial run of the (plesk01) server lasting 212(!) minutes for
> only 15GB of data. A simple calculation 15000/(212*60) shows a back-up
> performance of 1.5MB per second. With loads of <0.1 and 1Gbit network cards,
> it seems BackupPC should be able to do much better?
> 0 full yes 0 7/24 07:29 212.8 2.0
> /var/lib/BackupPC//pc/plesk01/0
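The quoted back-of-the-envelope calculation can be sketched as (figures taken
from the status line above; note the formula actually gives a little under
1.2MB/s, not 1.5):

```python
# Effective-throughput arithmetic for a completed BackupPC run,
# as done in the message above.
def throughput_mb_s(data_mb, elapsed_min):
    """Effective transfer rate in MB/s for a completed backup."""
    return data_mb / (elapsed_min * 60.0)

rate = throughput_mb_s(15000, 212.8)   # ~15 GB in 212.8 minutes
print(f"{rate:.2f} MB/s")              # -> 1.17 MB/s
```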
You'll only do an initial run once. If you are using rsync and have enabled
checksum caching, you should see an improvement in elapsed time on the 3rd and
subsequent full runs although the bytes/sec. measurement may be low because you
are only transferring changes. In normal use, you'll also probably stagger the
days on which fulls and incrementals run on different systems, so a mix of fast
and slow runs covers more systems in your nightly window.
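Checksum caching is enabled through the rsync arguments in config.pl; a hedged
sketch, again assuming the stock BackupPC 3.x option names (verify against your
installed config.pl, and note the rsync on the client must support
--checksum-seed):

```perl
# config.pl fragment (sketch): a fixed checksum seed turns on BackupPC's
# checksum caching, so 3rd and later fulls avoid re-reading unchanged
# pool files.
$Conf{RsyncArgs} = [
    # ... keep the stock arguments, and add:
    '--checksum-seed=32761',
];
$Conf{RsyncRestoreArgs} = [
    # ... keep the stock arguments, and add:
    '--checksum-seed=32761',   # must match RsyncArgs
];

# Fraction of cached checksums re-verified on each full, as a safety net
# against a corrupted cache:
$Conf{RsyncCsumCacheVerifyProb} = 0.01;
```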
--
Les Mikesell
[email protected]
_______________________________________________
BackupPC-users mailing list
[email protected]
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/