Holger Parplies wrote:
>
> that is correct *only for rsync type transport* (meaning rsync or rsyncd).
> Actually not "because they exist in the pool", but rather "because they were
> the same in the host's previous backup". If you rename a file, it will be
> transferred but not written to disk on the BackupPC server (not even
> temporarily!), but rather linked to the identical pool file.
>
>
> The BackupPC side does not store the files in plain native format - at least
> not for a compressed pool. Compression both makes checksum calculation more
> expensive and allows the checksums to be stored in the compressed file so they
> do not need to be recomputed. Checksum caching is enabled by adding the
> --checksum-seed=32761 option to $Conf{RsyncArgs}. See the comments in
> config.pl before $Conf{RsyncCsumCacheVerifyProb} and $Conf{RsyncArgs}.
> I'm not sure whether checksum caching works with uncompressed pools.
>
>
>
I'll play with that option. It sounds as if I really should have just
waited to see what happened the next time I did a full backup, but
given that I'm going on 96 hours and it's still backing up, I wasn't
keen on having that go on again. One thing I saw Craig Barratt
mention in a recent thread is that RsyncP uses whatever the last backup
was as the basis for its transfer. It almost sounds like it's using the
equivalent of rsync's --link-dest option, so that's a very good thing.
I used to back up this server with straight rsync, and computing the
transfer lists would take upwards of 30 minutes, but I can certainly
live with that if the backup itself only takes an hour or two.
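For the archives, here's roughly the change I'm planning to try in
config.pl, going by the comments Holger pointed to (just a sketch -- I
haven't checked the exact default argument list for my version, and the
1% verify probability is, as far as I can tell, the shipped default):

    $Conf{RsyncArgs} = [
        # ... keep the existing default rsync arguments here ...
        '--checksum-seed=32761',    # turn on checksum caching
    ];

    # Spot-check a fraction of the cached checksums against the full
    # file contents, as a safety net:
    $Conf{RsyncCsumCacheVerifyProb} = 0.01;

If I understand it right, the cached checksums only start paying off
from the third backup onward, which matches what Holger said above.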
>> [...]
>> The servers sit on a gigabit network, but the backup server cpu sits
>> at 100% while the backup is going on.
>>
>
> That (and thus backup performance) should improve with checksum
> caching - starting from the third backup - because decompression and
> checksum computation on the BackupPC server are no longer needed.
>
>
>> The server I'm backing up is significantly faster and only uses about 15%.
>> The backup server has a gig of memory. The backed-up server has 2 gigs.
>>
>
> As you're already using rsync for that setup, memory seems to be sufficient
> for the file lists (or are you experiencing thrashing? That would explain
> extremely long backup times ...).
>
>
No thrashing. Both servers have plenty of free memory. I'm waiting to
see what happens to memory when I start rsync'ing the entire server to
replicate it to an offsite backup. I have a feeling a gig might not be
enough once I start pulling in lists of every file in the pool, but
we'll see what happens...
> Quoting Craig Barratt:
>
> Backup speed is affected by many factors, including client speed
> (cpu and disk), server speed (cpu and disk) and network throughput
> and latency.
>
> With a really slow network, the client and server speed won't matter
> as much. With a fast network and fast client, the server speed will
> dominate. With a fast network and fast server, the client speed will
> dominate. In most cases it will be a combination of all three.
>
> If you're using rsync over ssh, changing the cipher to blowfish might make a
> difference, because it would reduce the amount of computing your BackupPC
> server has to do.
>
I've done that previously with other solutions, but I haven't played
with it yet in BackupPC. I'll give that a shot once the current
backup is done.
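If I'm reading config.pl correctly, that should just mean adding
'-c blowfish' to the ssh command BackupPC runs, along these lines
(untested, and the default command line may look a bit different on
other installs):

    $Conf{RsyncClientCmd} = '$sshPath -q -x -c blowfish -l root $host'
                          . ' $rsyncPath $argList+';

Presumably $Conf{RsyncClientRestoreCmd} would want the same change.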
Thanks!
--Jason