Re: [BackupPC-users] Compression

2019-01-12 Thread Jan Stransky
3. Yes, there is certainly some confusion in the client/host and host/server naming schemes :-) Actually, I could imagine that rsync compression could be a reason for writing the custom Perl version that BackupPC uses: you just don't uncompress, and store the already-compressed file... But I doubt

Re: [BackupPC-users] Compression

2019-01-12 Thread Robert Trevellyan
3. Sorry, I think of the machines being backed up as clients, but BackupPC does call them hosts. rsync supports compressed transfers but that's not the scheme used for storage by BackupPC. 4. You may be thinking of the tasks that check for unreferenced files and recalculate the total pool size,

Re: [BackupPC-users] Compression

2019-01-12 Thread Jan Stransky
Hi Robert, 1-2) This is what I would expect; I am curious if there is a way to gradually compress the files, not all at once. 3) By the host, I meant the host being backed up. And I am sure it is not used for compression, unless the compress option of rsync is used. But I guess this is

Re: [BackupPC-users] Compression

2019-01-12 Thread Robert Trevellyan
Hi Jan, I think this is correct, but there are other experts who might chime in to correct me. 1. Migration will not result in compression of existing backups. It just allows V4 to consume the V3 pool. 2. After compression is turned on, newly backed up files will be compressed. Existing backups
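For reference, turning compression on is a one-line change in config.pl (a minimal sketch; the path and the value 3 are common defaults, not taken from this thread):

    # /etc/BackupPC/config.pl (location varies by install)
    # 0 disables compression; 1-9 selects the zlib level.
    # Only files written after the change are compressed;
    # existing pool files keep their current form.
    $Conf{CompressLevel} = 3;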

Re: [BackupPC-users] Compression benchmark

2017-02-01 Thread Les Mikesell
On Wed, Feb 1, 2017 at 2:53 AM, Jan Stransky wrote: > 3) Full backup of each dataset as a separate host, then a second with an already-filled pool. Preferably from SSD to SSD so as not to be IO-limited. In practice, if you use the --checksum-seed option with rsync, the timing
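For context, checksum caching is enabled by adding the seed to the rsync argument list (a sketch in the BackupPC 3.x config style; 32761 is the seed value the documentation uses):

    # In config.pl: append to the existing rsync argument list.
    $Conf{RsyncArgs} = [
        # ... keep the default arguments here ...
        '--checksum-seed=32761',   # enables cached block/file checksums
    ];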

Re: [BackupPC-users] Compression Experiences?

2014-10-13 Thread John Rouillard
On Mon, Oct 13, 2014 at 06:24:10AM +0200, Christian Völker wrote: I remember having read that restoring single files from the command line needs some BackupPC-specific script or tricks to uncompress the files when using compression in BackupPC. I assume you mean using BackupPC_zcat. For a new
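A typical invocation looks like this (a sketch; the install path, host name, backup number, and file are hypothetical, and note that BackupPC mangles stored names by prefixing path components with "f" and encoding "/" as %2f):

    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat \
        /var/lib/backuppc/pc/myhost/123/f%2f/fetc/fhosts > hosts.restored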

Re: [BackupPC-users] compression on quad core

2010-01-13 Thread Tino Schwarze
On Wed, Jan 13, 2010 at 09:25:47AM +0100, Thomas Scholz wrote: we're using BackupPC on a quad-core system. Our backup process uses only one core for pool compression. Is there a way to get Compress::Zlib working multithreaded? You might want to run multiple backups in parallel... But AFAIK,
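Running backups in parallel is a config.pl setting (a sketch; 4 is an arbitrary value for a quad-core box):

    # Each backup compresses in its own process, so several backups
    # running at once spread the zlib work across cores.
    $Conf{MaxBackups} = 4;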

Re: [BackupPC-users] Compression Issue

2009-11-11 Thread Tino Schwarze
On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote: Excellent, it looks like that fixed it. That's kind of lame that you can't just change the TopDir. Well, it's a typical bootstrap problem: where are you supposed to find your configuration file if its location is relative to ${TopDir}? Therefore ${TopDir}
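The workaround usually suggested on this list for a v3 install is to move the data and leave a symlink at the compiled-in path (a sketch using the directory names from this thread; assumes the default TopDir):

    service backuppc stop
    mv /var/lib/backuppc /CLIENTBACKUPS
    ln -s /CLIENTBACKUPS /var/lib/backuppc    # old path keeps working
    service backuppc start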

Re: [BackupPC-users] Compression Issue

2009-11-11 Thread Les Mikesell
Tino Schwarze wrote: On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote: Excellent, it looks like that fixed it. That's kind of lame that you can't just change the TopDir. Well, it's a typical bootstrap problem: where are you supposed to find your configuration file if its location is relative to

Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Matthias Meyer
Heath Yob wrote: It appears that I'm not getting any compression on my backups, at least with my Windows clients. I think my Mac clients are being compressed, since a compression level is actually shown in the host summary. I have the compression level set to 9. I have the

Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Adam Goryachev
Matthias Meyer wrote: Heath Yob wrote: It appears that I'm not getting any compression on my backups, at least with my Windows clients. I think my Mac clients are being compressed, since a compression level is actually shown in the host

Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Heath Yob
According to my config.pl file:

    $Conf{CompressLevel} = '9';

So that's correct.

    ppo-backup:/CLIENTBACKUPS# du -sh cpool/
    12K     cpool/
    ppo-backup:/CLIENTBACKUPS# du -sm cpool/
    1       cpool/

There's nothing in my cpool directory. Thanks, Heath. On Nov 10, 2009, at 1:34 AM, Adam Goryachev

Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Les Mikesell
Heath Yob wrote: According to my config.pl file: $Conf{CompressLevel} = '9'; So that's correct.

    ppo-backup:/CLIENTBACKUPS# du -sh cpool/
    12K     cpool/
    ppo-backup:/CLIENTBACKUPS# du -sm cpool/
    1       cpool/

There's nothing in my cpool directory. Does that /CLIENTBACKUPS directory mean

Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Heath Yob
I've changed the TopDir to /CLIENTBACKUPS. The pc and cpool directories are in there now. I'm getting a bunch of errors like this on my PC clients: 2009-11-10 13:26:55 BackupPC_link got error -4 when calling MakeFileLink. Thanks, Heath. On Nov 10, 2009, at 8:55 AM, Les Mikesell wrote: Heath Yob
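MakeFileLink errors commonly mean the hardlink into the pool failed, for example because pc/ and cpool/ ended up on different filesystems (hardlinks cannot cross them) or have the wrong ownership. A quick check, using the paths from this thread:

    # Both should report the same filesystem/mount point,
    # and both should be owned by the backuppc user.
    df /CLIENTBACKUPS/pc /CLIENTBACKUPS/cpool
    ls -ld /CLIENTBACKUPS/pc /CLIENTBACKUPS/cpool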

Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Les Mikesell
Heath Yob wrote: I've changed the TopDir to /CLIENTBACKUPS. pc and cpool directories are in there now. I'm getting a bunch of errors like this on my PC clients: 2009-11-10 13:26:55 BackupPC_link got error -4 when calling MakeFileLink If you install from the tarball, there is a

Re: [BackupPC-users] Compression during Xfer

2008-10-28 Thread Adam Goryachev
Sebastien Sans wrote: Hello, the pool compression system in BackupPC is great, it saves a lot of space, but I couldn't find how to compress the transfers in order to save my bandwidth. I tried to modify the command line in the rsync and tar

Re: [BackupPC-users] Compression during Xfer

2008-10-13 Thread Carl Wilhelm Soderstrom
On 10/10 11:54, Sebastien Sans wrote: The pool compression system in BackupPC is great, it saves a lot of space, but I couldn't find how to compress the transfers in order to save my bandwidth. Use compression in your ssh transport. Here's an example I typically use:
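The example itself is cut off here, but Tomasz quotes it back later in the thread:

    $Conf{RsyncClientCmd} = '$sshPath -C -o CompressionLevel=9 -c blowfish-cbc -q -x -l rsyncbakup $host $rsyncPath $argList+';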

Re: [BackupPC-users] Compression during Xfer

2008-10-13 Thread Tomasz Chmielewski
Carl Wilhelm Soderstrom schrieb: On 10/10 11:54, Sebastien Sans wrote: The pool compression system in BackupPC is great, it saves a lot of space, but I couldn't find how to compress the transfers in order to save my bandwidth. Use compression in your ssh transport. Here's an example I

Re: [BackupPC-users] Compression during Xfer

2008-10-13 Thread Carl Wilhelm Soderstrom
On 10/13 02:56, Tomasz Chmielewski wrote:

    $Conf{RsyncClientCmd} = '$sshPath -C -o CompressionLevel=9 -c blowfish-cbc -q -x -l rsyncbakup $host $rsyncPath $argList+';

Unless you're using the obsolete SSH protocol version 1, setting CompressionLevel does not make any sense - SSH
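In other words, with protocol 2 the -C flag alone is what matters; a pared-down sketch of the same command without the protocol-1-only option:

    # CompressionLevel applies only to SSH protocol 1; -C is enough for v2.
    $Conf{RsyncClientCmd} = '$sshPath -C -x -l rsyncbakup $host $rsyncPath $argList+';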

Re: [BackupPC-users] Compression during Xfer

2008-10-10 Thread Tomasz Chmielewski
Sebastien Sans schrieb: Hello, the pool compression system in BackupPC is great, it saves a lot of space, but I couldn't find how to compress the transfers in order to save my bandwidth. I tried to modify the command line in the rsync and tar modes to activate compression (I added -z

Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Craig Barratt wrote: Rich writes: I don't think BackupPC will update the pool with the smaller file even though it knows the source was identical, and some tests I just did backing up /tmp seem to agree. Once compressed and copied into the pool, the file is not updated with future

Re: [BackupPC-users] Compression level

2007-12-05 Thread Rich Rauenzahn
John Pettitt wrote: What happens is that the newly transferred file is compared against candidates in the pool with the same hash value, and if one exists it's just linked; the new file is not compressed. It seems to me that if you want to change the compression in the pool, the way to go
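A rough sketch of the pooling decision being described (illustrative Perl pseudocode, not BackupPC's actual source; all helper names are made up):

    my $hash = pool_hash($new_file);           # BackupPC uses a partial-file digest
    if (my $pooled = find_pool_match($hash, $new_file)) {
        link($pooled, $backup_copy);           # reuse the existing (already compressed) pool file
    } else {
        write_compressed($new_file, $backup_copy, $Conf{CompressLevel});
        add_to_pool($backup_copy);             # future identical files link to this one
    }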

Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Rich Rauenzahn wrote: I know BackupPC will sometimes need to re-transfer a file (for instance, if it is a 2nd copy in another location). I assume it then re-compresses it on the re-transfer, as my understanding is that compression happens as the file is written to disk(?). Would it

Re: [BackupPC-users] Compression level

2007-12-04 Thread Rich Rauenzahn
[EMAIL PROTECTED] wrote: Hello, I would like some information about compression levels. I'm still doing several tests on compression and I would like your opinion about something: I think there is very little difference between level 1 and level 9. I thought that
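This is easy to test on your own data with plain gzip as a stand-in for BackupPC's zlib (a sketch; the file name is hypothetical):

    # Compare compressed sizes at both extremes; on mixed data the
    # difference is often only a few percent, while -9 costs far more CPU.
    gzip -1 -c sample.tar | wc -c
    gzip -9 -c sample.tar | wc -c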

Re: [BackupPC-users] Compression level

2007-12-04 Thread Craig Barratt
Rich writes: I don't think BackupPC will update the pool with the smaller file even though it knows the source was identical, and some tests I just did backing up /tmp seem to agree. Once compressed and copied into the pool, the file is not updated with future more highly compressed copies. Does

Re: [BackupPC-users] Compression level

2007-12-04 Thread romain . pichard
[Quoted reply headers trimmed; the message quotes Craig Barratt's reply above:] Rich writes: I don't think BackupPC will update the pool

Re: [BackupPC-users] Compression

2006-05-11 Thread David Rees
On 5/11/06, Lee A. Connell [EMAIL PROTECTED] wrote: I noticed while monitoring BackupPC that it doesn't seem to compress on the fly; is this true? I am backing up 40 GB worth of data on a server, and as it is backing up I monitor the disk space usage on the mount point, and by looking at that
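For this kind of check, watching the pool directory itself is more telling than the whole mount point (a sketch; the path assumes a default install):

    # Compression happens as files are written into the (c)pool,
    # so cpool/ growth is the number to watch during a run.
    watch -n 60 'du -sh /var/lib/backuppc/cpool'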