Re: [BackupPC-users] Compression

2019-01-12 Thread Jan Stransky
3. Yes, there is certainly some confusion in client/host or host/server
naming schemes :-) Actually, I could imagine that the rsync compression
could be a reason for writing the custom Perl version that BackupPC
uses: you would simply skip decompressing and store the already
compressed file... But I doubt this is the case :-)

4. I guess so... That does not seem to be the place to do the compression,
then...

Jan

On 12.01.2019 at 17:14, Robert Trevellyan wrote:
> 3. Sorry, I think of the machines being backed up as clients, but
> BackupPC does call them hosts. rsync supports compressed transfers but
> that's not the scheme used for storage by BackupPC.
>
> 4. You may be thinking of the tasks that check for unreferenced files
> and recalculate the total pool size, which can both be split over
> multiple nightly runs.
>
> Robert Trevellyan
>
>
> On Sat, Jan 12, 2019 at 11:04 AM Jan Stransky
> <jan.stransky.c...@gmail.com> wrote:
>
> Hi Robert,
>
> 1-2) This is what I would expect. I am curious whether there is a way
> to gradually compress the files, not all at once.
>
> 3) By the host, I meant the host being backed up. And I am sure it is
> not used for the compression, unless the compress option of rsync is
> used. But I guess this is decompressed and compressed again on
> the server side. The thing is that the Pentium N3700 is not a very
> efficient worker, whereas the i7 that is being backed up is a
> different story.
>
> 4) I think there is an option for how big a percentage of files is
> checked, or something like that; what about this one?
>
> Cheers,
>
> Jan
>
> On 12.01.2019 at 14:55, Robert Trevellyan wrote:
>> Hi Jan,
>>
>> I think this is correct, but there are other experts who might
>> chime in to correct me.
>>
>> 1. Migration will not result in compression of existing backups.
>> It just allows V4 to consume the V3 pool.
>>
>> 2. After compression is turned on, newly backed up files will be
>> compressed. Existing backups will remain as they were.
>>
>> 3. Host resources are always used for compression, either because
>> BackupPC is doing the work, or because the filesystem does it
>> natively. My BackupPC pools are on ZFS with LZ4 compression
>> enabled and BackupPC compression disabled.
>>
>> 4. The closest thing would be using rsync with --ignore-times for
>> full backups, which isn't quite the same thing, or having the
>> filesystem do it (e.g. ZFS scrub).
>>
>> Robert Trevellyan
>>
>>
>> On Sat, Jan 12, 2019 at 7:51 AM Jan Stransky
>> <jan.stransky.c...@gmail.com> wrote:
>>
>> Hi,
>>
>> I have a few questions related to compression.
>>
>> Currently, I have BackupPC 3 installed on an Intel NUC with a 4-core
>> Pentium, and since compression significantly decreased backup
>> speeds, I have turned it off. I am about to switch to v4, so it
>> might be worth reconsidering, since the increments are not big, the
>> bulk of the data is already in the pool, and v4 handles files
>> differently.
>>
>> 1) If I start the migration with compression on, when would
>> it happen? Would the whole pool be compressed at once?
>>
>> 2) If the pool is uncompressed and I turn compression on, when will the
>> uncompressed files be compressed? On a full backup, or never, unless
>> the file changes on the host?
>>
>> 3) Is there a way to compress the files using the host's resources?
>>
>> 4) Is the integrity of the compressed files checked at any time?
>>
>> Best regards,
>>
>> Jan
>>

Re: [BackupPC-users] Compression

2019-01-12 Thread Robert Trevellyan
3. Sorry, I think of the machines being backed up as clients, but BackupPC
does call them hosts. rsync supports compressed transfers but that's not
the scheme used for storage by BackupPC.

4. You may be thinking of the tasks that check for unreferenced files and
recalculate the total pool size, which can both be split over multiple
nightly runs.

Robert Trevellyan


On Sat, Jan 12, 2019 at 11:04 AM Jan Stransky 
wrote:

> Hi Robert,
>
> 1-2) This is what I would expect. I am curious whether there is a way to
> gradually compress the files, not all at once.
>
> 3) By the host, I meant the host being backed up. And I am sure it is not
> used for the compression, unless the compress option of rsync is used. But I
> guess this is decompressed and compressed again on the server side. The
> thing is that the Pentium N3700 is not a very efficient worker, whereas the
> i7 that is being backed up is a different story.
>
> 4) I think there is an option for how big a percentage of files is checked,
> or something like that; what about this one?
>
> Cheers,
>
> Jan
> On 12.01.2019 at 14:55, Robert Trevellyan wrote:
>
> Hi Jan,
>
> I think this is correct, but there are other experts who might chime in to
> correct me.
>
> 1. Migration will not result in compression of existing backups. It just
> allows V4 to consume the V3 pool.
>
> 2. After compression is turned on, newly backed up files will be
> compressed. Existing backups will remain as they were.
>
> 3. Host resources are always used for compression, either because BackupPC
> is doing the work, or because the filesystem does it natively. My BackupPC
> pools are on ZFS with LZ4 compression enabled and BackupPC compression
> disabled.
>
> 4. The closest thing would be using rsync with --ignore-times for full
> backups, which isn't quite the same thing, or having the filesystem do it
> (e.g. ZFS scrub).
>
> Robert Trevellyan
>
>
> On Sat, Jan 12, 2019 at 7:51 AM Jan Stransky 
> wrote:
>
>> Hi,
>>
>> I have a few questions related to compression.
>>
>> Currently, I have BackupPC 3 installed on an Intel NUC with a 4-core Pentium,
>> and since compression significantly decreased backup speeds, I have
>> turned it off. I am about to switch to v4, so it might be worth
>> reconsidering, since the increments are not big, the bulk of the data is
>> already in the pool, and v4 handles files differently.
>>
>> 1) If I start the migration with compression on, when would it happen?
>> Would the whole pool be compressed at once?
>>
>> 2) If the pool is uncompressed and I turn compression on, when will the
>> uncompressed files be compressed? On a full backup, or never, unless it
>> changes on the host?
>>
>> 3) Is there a way to compress the files using the host's resources?
>>
>> 4) Is the integrity of the compressed files checked at any time?
>>
>> Best regards,
>>
>> Jan
>>


Re: [BackupPC-users] Compression

2019-01-12 Thread Jan Stransky
Hi Robert,

1-2) This is what I would expect. I am curious whether there is a way to
gradually compress the files, not all at once.

3) By the host, I meant the host being backed up. And I am sure it is not
used for the compression, unless the compress option of rsync is used. But I
guess this is decompressed and compressed again on the server side. The
thing is that the Pentium N3700 is not a very efficient worker, whereas the
i7 that is being backed up is a different story.

4) I think there is an option for how big a percentage of files is being
checked, or something like that; what about this one?

Cheers,

Jan

On 12.01.2019 at 14:55, Robert Trevellyan wrote:
> Hi Jan,
>
> I think this is correct, but there are other experts who might chime
> in to correct me.
>
> 1. Migration will not result in compression of existing backups. It
> just allows V4 to consume the V3 pool.
>
> 2. After compression is turned on, newly backed up files will be
> compressed. Existing backups will remain as they were.
>
> 3. Host resources are always used for compression, either because
> BackupPC is doing the work, or because the filesystem does it
> natively. My BackupPC pools are on ZFS with LZ4 compression enabled
> and BackupPC compression disabled.
>
> 4. The closest thing would be using rsync with --ignore-times for full
> backups, which isn't quite the same thing, or having the filesystem do
> it (e.g. ZFS scrub).
>
> Robert Trevellyan
>
>
> On Sat, Jan 12, 2019 at 7:51 AM Jan Stransky
> <jan.stransky.c...@gmail.com> wrote:
>
> Hi,
>
> I have a few questions related to compression.
>
> Currently, I have BackupPC 3 installed on an Intel NUC with a 4-core
> Pentium, and since compression significantly decreased backup speeds,
> I have turned it off. I am about to switch to v4, so it might be worth
> reconsidering, since the increments are not big, the bulk of the data is
> already in the pool, and v4 handles files differently.
>
> 1) If I start the migration with compression on, when would it happen?
> Would the whole pool be compressed at once?
>
> 2) If the pool is uncompressed and I turn compression on, when will the
> uncompressed files be compressed? On a full backup, or never, unless it
> changes on the host?
>
> 3) Is there a way to compress the files using the host's resources?
>
> 4) Is the integrity of the compressed files checked at any time?
>
> Best regards,
>
> Jan
>


Re: [BackupPC-users] Compression

2019-01-12 Thread Robert Trevellyan
Hi Jan,

I think this is correct, but there are other experts who might chime in to
correct me.

1. Migration will not result in compression of existing backups. It just
allows V4 to consume the V3 pool.

2. After compression is turned on, newly backed up files will be
compressed. Existing backups will remain as they were.

3. Host resources are always used for compression, either because BackupPC
is doing the work, or because the filesystem does it natively. My BackupPC
pools are on ZFS with LZ4 compression enabled and BackupPC compression
disabled.
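
[Illustrative sketch, not part of the original message: the filesystem-level
setup Robert describes, assuming a ZFS dataset named tank/backuppc holding
the BackupPC data directory; the dataset name is a placeholder.]

  # let ZFS compress the pool transparently
  zfs set compression=lz4 tank/backuppc
  # check the achieved ratio after some backups have run
  zfs get compressratio tank/backuppc

With this in place BackupPC's own $Conf{CompressLevel} is left at 0, so the
server stores files uncompressed and the filesystem does the work.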

4. The closest thing would be using rsync with --ignore-times for full
backups, which isn't quite the same thing, or having the filesystem do it
(e.g. ZFS scrub).
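
[Illustrative sketch, not part of the original message: the two kinds of check
point 4 alludes to, shown with standard commands; paths and pool names are
placeholders.]

  # make rsync re-read file contents (block checksums) instead of trusting
  # size/mtime, i.e. an --ignore-times style full comparison
  rsync -av --ignore-times /data/ backupserver:/restore-test/
  # or let the filesystem verify every stored block, e.g. on ZFS
  zpool scrub tank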

Robert Trevellyan


On Sat, Jan 12, 2019 at 7:51 AM Jan Stransky 
wrote:

> Hi,
>
> I have a few questions related to compression.
>
> Currently, I have BackupPC 3 installed on an Intel NUC with a 4-core Pentium,
> and since compression significantly decreased backup speeds, I have
> turned it off. I am about to switch to v4, so it might be worth
> reconsidering, since the increments are not big, the bulk of the data is
> already in the pool, and v4 handles files differently.
>
> 1) If I start the migration with compression on, when would it happen?
> Would the whole pool be compressed at once?
>
> 2) If the pool is uncompressed and I turn compression on, when will the
> uncompressed files be compressed? On a full backup, or never, unless it
> changes on the host?
>
> 3) Is there a way to compress the files using the host's resources?
>
> 4) Is the integrity of the compressed files checked at any time?
>
> Best regards,
>
> Jan
>


Re: [BackupPC-users] Compression benchmark

2017-02-01 Thread Les Mikesell
On Wed, Feb 1, 2017 at 2:53 AM, Jan Stransky
 wrote:
>
> 3) Full backup of each dataset as a separate host, then a second one with
> an already filled pool. Preferably from SSD to SSD so as not to be IO limited.
>

In practice, if you use the --checksum-seed option with rsync, the
timing of the 3rd full is the one that matters, although it isn't a
good benchmark test because it will depend on how much the target
changes. The 2nd time an unchanged file is backed up the block
checksums are cached, so the 3rd and subsequent runs do not need to
uncompress the archived copy for the rsync comparison to work.
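
[Illustrative sketch, not part of the original message: how the option Les
mentions is typically wired into a BackupPC 3.x config.pl; treat it as a
sketch, since the exact argument lists depend on your version and transfer
setup.]

  # config.pl: append the seed so block/file checksums get cached server-side
  push @{$Conf{RsyncArgs}}, '--checksum-seed=32761';
  push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';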

-- 
   Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Compression Experiences?

2014-10-13 Thread John Rouillard
On Mon, Oct 13, 2014 at 06:24:10AM +0200, Christian Völker wrote:
 I remember having read about restoring single files from the command line
 needing some BackupPC-specific script or tricks to uncompress the files
 when using compression with BackupPC.

I assume you mean using BackupPC_zcat.
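
[Illustrative sketch, not part of the original message: the install path and
the example backup path below are placeholders and vary by distribution.]

  # stream one pooled, compressed file back out in its original form
  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat \
      /var/lib/backuppc/pc/myhost/123/f%2fetc/fhosts > hosts.restored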
 
 For a new instance I'm thinking of storing the files without compression
 to be able to easily restore them directly from the command line if needed.
 
 Is there anyone out here who has some experience about the average
 compression ratio in BackupPC? I know it depends on the type of data;
 most of them are office documents (Word and OpenOffice).

If you already have a working BackupPC instance, look at the page for
any host. The bottom table gives you the compression ratios. I have
some hosts that routinely get 80% compression (database
backups). Others get 0-2% (data directories with most of the data
already compressed).

-- 
-- rouilj

John Rouillard   System Administrator
Dyn Corporation  603-244-9084 (cell)



Re: [BackupPC-users] compression on quad core

2010-01-13 Thread Tino Schwarze
On Wed, Jan 13, 2010 at 09:25:47AM +0100, Thomas Scholz wrote:

 we are using BackupPC on a quad core system. Our backup process uses only
 one core for pool compression. Is there a way to get Compress::Zlib working
 multithreaded?

You might want to run multiple backups in parallel... But AFAIK, there
is no widespread support for multithreaded zipping yet. I've just found
pigz recently.
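
[Illustrative sketch, not part of the original message: pigz is a parallel
gzip for jobs outside BackupPC; BackupPC itself compresses through
Compress::Zlib, so it cannot call pigz directly.]

  # gzip a tar stream across 4 cores
  tar -cf - /srv/data | pigz -p 4 > /backup/data.tar.gz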

HTH,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Compression Issue

2009-11-11 Thread Tino Schwarze
On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote:

 Excellent it looks that fixed it.
 
 That's kinda lame you can't just change the TopDir.

Well it's a typical bootstrap problem. Where are you supposed to find
your configuration file if it's relative to ${TopDir}? Therefore
${TopDir} needs to be patched at certain places upon installation.

HTH,

Tino.

-- 
What we nourish flourishes. - Was wir nähren erblüht.

www.lichtkreis-chemnitz.de
www.tisc.de



Re: [BackupPC-users] Compression Issue

2009-11-11 Thread Les Mikesell
Tino Schwarze wrote:
 On Tue, Nov 10, 2009 at 03:42:53PM -0800, Heath Yob wrote:
 
 Excellent it looks that fixed it.

 That's kinda lame you can't just change the TopDir.
 
 Well it's a typical bootstrap problem. Where are you supposed to find
 your configuration file if it's relative to ${TopDir}? Therefore
 ${TopDir} needs to be patched at certain places upon installation.
 

And it is more of a packaging issue than a program problem. If you install
from the sourceforge code you get your choice of locations. If you use a
packaged version someone else has made this choice for you. Since you
normally want the archive on its own disk anyway, the simple approach would
be to mount it at the package TopDir location before installing the package,
but if it is your first install you probably won't know that yet.

-- 
Les Mikesell
 lesmikes...@gmail.com



Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Matthias Meyer
Heath Yob wrote:

 It appears that I'm not getting any compression on my backups at least
 with my Windows clients.
 I think my mac clients are being compressed since it's actually
 stating a compression level in the host summary.
 
 I have the compression level set to 9.
 
 I have the Compress::Zlib perl library installed.
 
  ppo-backup:/home/heathy# perl -MCompress::Zlib -e 'print "Module installed.\n";'
  Module installed.
 
 Is there a secret to SMB compression?
 
 Heath
 
I don't believe compression depends on the transport.
Do you have files in /var/lib/backuppc/pool?
Check your configuration:
grep CompressLevel /etc/backuppc/*.p

br
Matthias
-- 
Don't Panic




Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Adam Goryachev

Matthias Meyer wrote:
 Heath Yob wrote:
 
 It appears that I'm not getting any compression on my backups at least
 with my Windows clients.
 I think my mac clients are being compressed since it's actually
 stating a compression level in the host summary.

 I have the compression level set to 9.

 I have the Compress::Zlib perl library installed.

  ppo-backup:/home/heathy# perl -MCompress::Zlib -e 'print "Module installed.\n";'
  Module installed.

 Is there a secret to SMB compression?

 Heath

 I don't believe compression depends on the transport.
 Do you have files in /var/lib/backuppc/pool?
 Check your configuration:
 grep CompressLevel /etc/backuppc/*.p

Keep in mind two things:
1) As above, there is no compression on the network level
2) Only new files not already stored in the pool will be compressed on disk.

To see if it is working, just see if there are any files being added to
the cpool folder:
du -sm $TopDir/cpool

Regards,
Adam



Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Heath Yob
According to my config.pl file : $Conf{CompressLevel} = '9';

So that's correct.

ppo-backup:/CLIENTBACKUPS# du -sh cpool/
12K cpool/
ppo-backup:/CLIENTBACKUPS# du -sm cpool/
1   cpool/

There's nothing in my cpool directory.

Thanks,

Heath

On Nov 10, 2009, at 1:34 AM, Adam Goryachev wrote:


 Matthias Meyer wrote:
 Heath Yob wrote:

 It appears that I'm not getting any compression on my backups at  
 least
 with my Windows clients.
 I think my mac clients are being compressed since it's actually
 stating a compression level in the host summary.

 I have the compression level set to 9.

 I have the Compress::Zlib perl library installed.

  ppo-backup:/home/heathy# perl -MCompress::Zlib -e 'print "Module installed.\n";'
  Module installed.

 Is there a secret to SMB compression?

 Heath

  I don't believe compression depends on the transport.
 Do you have files in /var/lib/backuppc/pool?
 Check your configuration:
 grep CompressLevel /etc/backuppc/*.p

 Keep in mind two things:
 1) As above, there is no compression on the network level
 2) Only new files not already stored in the pool will be compressed  
 on disk.

 To see if it is working, just see if there are any files being added  
 to
 the cpool folder:
 du -sm $TopDir/cpool

 Regards,
 Adam






Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Les Mikesell
Heath Yob wrote:
 According to my config.pl file : $Conf{CompressLevel} = '9';
 
 So that's correct.
 
 ppo-backup:/CLIENTBACKUPS# du -sh cpool/
 12K   cpool/
 ppo-backup:/CLIENTBACKUPS# du -sm cpool/
 1 cpool/
 
 There's nothing in my cpool directory.

Does that /CLIENTBACKUPS directory mean that you've changed the location 
of the cpool or did you do an install-from-tarball there?  And are your 
logs full of can't link error messages?

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Heath Yob

I've changed the TopDir to /CLIENTBACKUPS.

pc and cpool directories are in there now.

I'm getting a bunch of errors like this on my PC clients:
2009-11-10 13:26:55 BackupPC_link got error -4 when calling MakeFileLink

Thanks,
Heath


On Nov 10, 2009, at 8:55 AM, Les Mikesell wrote:


Heath Yob wrote:

According to my config.pl file : $Conf{CompressLevel} = '9';

So that's correct.

ppo-backup:/CLIENTBACKUPS# du -sh cpool/
12K cpool/
ppo-backup:/CLIENTBACKUPS# du -sm cpool/
1   cpool/

There's nothing in my cpool directory.


Does that /CLIENTBACKUPS directory mean that you've changed the location
of the cpool or did you do an install-from-tarball there?  And are your
logs full of can't link error messages?

--
  Les Mikesell
   lesmikes...@gmail.com



Re: [BackupPC-users] Compression Issue

2009-11-10 Thread Les Mikesell
Heath Yob wrote:
 I've changed the TopDir to /CLIENTBACKUPS.
 
 pc and cpool directories are in there now.
 
 I'm getting a bunch of errors like this on my PC clients: 
 2009-11-10 13:26:55 BackupPC_link got error -4 when calling MakeFileLink

If you install from the tarball, there is a configuration step where you 
can set any location you want.  If you installed from a distribution 
package (RPM/deb), that step has already been done and you can't change 
the TopDir - but you can mount or symlink a replacement:

http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Change_archive_directory
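
[Illustrative sketch, not part of the original message: the symlink variant
of that wiki page, assuming a package install with TopDir at
/var/lib/backuppc and the new disk mounted at /CLIENTBACKUPS; pc/ and cpool/
must end up on one filesystem, or pool linking keeps failing with errors like
the -4 above.]

  /etc/init.d/backuppc stop
  rsync -aH /var/lib/backuppc/ /CLIENTBACKUPS/   # -H preserves hardlinks
  mv /var/lib/backuppc /var/lib/backuppc.old
  ln -s /CLIENTBACKUPS /var/lib/backuppc
  /etc/init.d/backuppc start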

-- 
   Les Mikesell
lesmikes...@gmail.com



Re: [BackupPC-users] Compression during Xfer

2008-10-28 Thread Adam Goryachev

Sebastien Sans wrote:
 Hello,
 
 The compression system of the pool in BackupPC is great, it saves a lot
 of space, but I didn't find how to compress the transfers in order to
 save my bandwidth.
 I tried to modify the command line in rsync and tar modes to
 activate compression (I added the -z option to use gzip compression); the
 transfer was shorter but the result wasn't recognized properly by
 BackupPC.
 I'm sure there is a simple way to compress the transfers. If someone
 has the solution, that would be great.
 
 Cordially,
 
 Ps: Sorry for my bad English, I tried to do my best...

With rsync the only way to do compression is by using ssh and getting
ssh to do the compression. (Or doing it over a VPN where the VPN
software compresses the data, or something similar).

You can't use rsync compression.

Regards,
Adam

- --
Adam Goryachev
Website Managers
Ph: +61 2 8304 [EMAIL PROTECTED]
Fax: +61 2 8304 0001www.websitemanagers.com.au


Re: [BackupPC-users] Compression during Xfer

2008-10-13 Thread Carl Wilhelm Soderstrom
On 10/10 11:54 , Sebastien Sans wrote:
 The compression system of the pool in BackupPC is great, it saves a lot
 of space, but I didn't find how to compress the transfers in order to
 save my bandwidth.

Use compression in your ssh transport.
Here's an example I typically use:

$Conf{RsyncClientCmd} = '$sshPath -C -o CompressionLevel=9 -c blowfish-cbc -q 
-x -l rsyncbakup $host $rsyncPath $argList+';

The blowfish-cbc cipher has sometimes been demonstrated to be more efficient
than the default. I've not done enough comparisons to be completely
confident that it's the best, though.

On a completely independent note, that example shows I'm logging in as
'rsyncbakup' rather than 'root'. (I have the ssh key set up to use sudo to
run the single allowed rsync command on the client side as root).

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Compression during Xfer

2008-10-13 Thread Tomasz Chmielewski
Carl Wilhelm Soderstrom schrieb:
 On 10/10 11:54 , Sebastien Sans wrote:
  The compression system of the pool in BackupPC is great, it saves a lot
  of space, but I didn't find how to compress the transfers in order to
  save my bandwidth.
 
 Use compression in your ssh transport.
 Here's an example I typically use:
 
 $Conf{RsyncClientCmd} = '$sshPath -C -o CompressionLevel=9 -c blowfish-cbc -q 
 -x -l rsyncbakup $host $rsyncPath $argList+';

Unless you're using the obsolete SSH protocol version 1, setting
CompressionLevel does not make any sense - SSH protocol 2 has the
compression level hardcoded to 6, and you can't change it.

And unless both machines (BackupPC server and the other side) are *very*
loaded, changing the cipher specification will not change anything, either.


-- 
Tomasz Chmielewski
http://wpkg.org



Re: [BackupPC-users] Compression during Xfer

2008-10-13 Thread Carl Wilhelm Soderstrom
On 10/13 02:56 , Tomasz Chmielewski wrote:
  $Conf{RsyncClientCmd} = '$sshPath -C -o CompressionLevel=9 -c blowfish-cbc 
  -q -x -l rsyncbakup $host $rsyncPath $argList+';
 
 Unless you're using an obsoleted SSH protocol in version 1, setting 
 CompressionLevel does not make any sense - SSH protocol 2 has 
 compression level hardcoded to 6, and you can't change it.

ok. I might have been using CompressionLevel since SSH v1 days; or perhaps I
never experimented fully with that option. Thanks for the information.

 And unless both machines (BackupPC server and the other side) are *very* 
 loaded, changing cipher specification will not change anything, either.

In some cases they are; but thanks for the input on this.

-- 
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
www.real-time.com



Re: [BackupPC-users] Compression during Xfer

2008-10-10 Thread Tomasz Chmielewski
Sebastien Sans schrieb:
 Hello,
 
 The compression system of the pool in BackupPC is great, it saves a lot
 of space, but I didn't find how to compress the transfers in order to
 save my bandwidth.
 I tried to modify the command line in rsync and tar modes to
 activate compression (I added the -z option to use gzip compression); the
 transfer was shorter but the result wasn't recognized properly by
 BackupPC.
 I'm sure there is a simple way to compress the transfers. If someone
 has the solution, that would be great.

Unfortunately, BackupPC does not support any sort of transfer compression.

The best you can do is:
- do rsync transfers over SSH (SSH provides compression)
- use a VPN with compression, e.g. OpenVPN
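
[Illustrative sketch, not part of the original message: the SSH route, shown
as the stock BackupPC 3.x rsync-over-ssh client command with only the -C
compression flag added; adjust the login user to your own setup.]

  $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';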


-- 
Tomasz Chmielewski
http://wpkg.org




Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Craig Barratt wrote:
 Rich writes:

   
 I don't think BackupPC will update the pool with the smaller file even
 though it knows the source was identical, and some tests I just did
 backing up /tmp seem to agree.  Once compressed and copied into the
 pool, the file is not updated with future higher compressed copies.
 Does anyone know something otherwise?
 

 You're right.

 Each file in the pool is only compressed once, at the current
 compression level.  Matching pool files is done by comparing
 uncompressed file contents, not compressed files.

 It's done this way because compression is typically a lot more
 expensive than uncompressing.  Changing the compression level
 will only apply to new additions to the pool.

 To benchmark compression ratios you could remove all the files
 in the pool between runs, but of course you should only do that
 on a test setup, not a production installation.

 Craig
   
The other point to keep in mind is that unless you actually need 
compression for disk space reasons leaving it off will often be faster 
on a CPU bound server.   Since there is a script provided 
(BackupPC_compressPool) to compress it later you can safely leave 
compression off until you need the disk space.
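
[Illustrative sketch, not part of the original message: the install path
below is a placeholder; the script runs as the backuppc user and can take a
long time on a large pool, so check the BackupPC documentation for the exact
invocation before relying on this.]

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_compressPool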

John



Re: [BackupPC-users] Compression level

2007-12-05 Thread Rich Rauenzahn

John Pettitt wrote:
  

What happens is the newly transfered file is compared against candidates 
in the pool with the same hash value and if one exists it's just 
linked,   The new file is not compressed.   It seems to me that if you 
want to change the compression in the pool the way to go is to modify 
the BackupPC_compressPool script which compresses an uncompressed pool 
to instead re-compress a compressed pool.   There is some juggling that 
goes on to maintain the correct inode in the pool so all the links 
remain valid and this script already does that. 

  
You're sure?  That isn't my observation.  At least with rsync, the files 
in the 'new' subdirectory of the backup are already compressed, and I 
vaguely recall reading the code and noticing it compresses them during 
the transfer (but on the server side as it receives the data).  After 
the whole rsync session is finished, then the NewFiles hash list is 
compared with the pool.  Identical files (determined by hash code of 
uncompressed data) are then linked to the pool.


If that is all true, then it seems like there is an opportunity to 
compare the size of the existing file in the pool with the new file, and 
keep the smaller one.


Rich


Re: [BackupPC-users] Compression level

2007-12-05 Thread John Pettitt
Rich Rauenzahn wrote:


 I know backuppc will sometimes need to re-transfer a file (for instance, 
 if it is a 2nd copy in another location.)  I assume it then 
 re-compresses it on the re-transfer, as my understanding is the 
 compression happens as the file is written to disk.(?)  

 Would it make sense to add to the enhancement request list the ability 
 to replace the existing file in the pool with the new file contents if 
 the newly compressed/transferred file is smaller?  I assume this could 
 be done during the pool check at the end of the backup... then if some 
 backups use a higher level of compression, the smallest version of the 
 file is always preferred (ok, usually preferred, because the transfer is 
 avoided with rsync if the file is in the same place as before.)

 Rich

   
What happens is the newly transferred file is compared against candidates
in the pool with the same hash value, and if one exists it's just
linked; the new file is not compressed. It seems to me that if you
want to change the compression in the pool, the way to go is to modify
the BackupPC_compressPool script, which compresses an uncompressed pool,
to instead re-compress a compressed pool. There is some juggling that
goes on to maintain the correct inode in the pool so all the links
remain valid, and this script already does that.

John




Re: [BackupPC-users] Compression level

2007-12-04 Thread Rich Rauenzahn
[EMAIL PROTECTED] wrote:

 Hello,

 I would like to have some information about the compression level.

 I'm still doing several tests about compression and I would like to
 have your opinion about something:
 I think that there is very little difference between level 1 and
 level 9.
 I thought it would be more.

 For example, with a directory (1 GB - 1308 files: Excel, Word, PDF,
 BMP, JPG, ZIP, ...) at compression level:

 9 I get the result: 54.4% compressed (original size: 1018.4 MB /
 compressed size: 464.5 MB)
 1 I get the result: 52.8% compressed (original size: 1018.4 MB /
 compressed size: 480.5 MB)

 Do you think that's correct / normal?
I'll ask this again:  How are you ensuring that each compression test 
isn't reusing the compressed files that are already in the pool?  What 
is your test methodology?

I don't think BackupPC will update the pool with the smaller file even 
though it knows the source was identical, and some tests I just did 
backing up /tmp seem to agree.  Once compressed and copied into the 
pool, the file is not updated with future higher compressed copies.  
Does anyone know something otherwise?

Rich



Re: [BackupPC-users] Compression level

2007-12-04 Thread Craig Barratt
Rich writes:

 I don't think BackupPC will update the pool with the smaller file even
 though it knows the source was identical, and some tests I just did
 backing up /tmp seem to agree.  Once compressed and copied into the
 pool, the file is not updated with future higher compressed copies.
 Does anyone know something otherwise?

You're right.

Each file in the pool is only compressed once, at the current
compression level.  Matching pool files is done by comparing
uncompressed file contents, not compressed files.

It's done this way because compression is typically a lot more
expensive than uncompressing.  Changing the compression level
will only apply to new additions to the pool.

To benchmark compression ratios you could remove all the files
in the pool between runs, but of course you should only do that
on a test setup, not a production installation.

Craig



Re: [BackupPC-users] Compression level

2007-12-04 Thread romain . pichard
Hello,

I'm sorry, I forgot to explain that I delete all files in the pool between
two tests with different compression levels,
and all links are cleaned up by BackupPC_nightly, etc...

It's on a test setup, of course.

Thanks a lot for your help.
Regards,

Romain




Craig Barratt [EMAIL PROTECTED] wrote on 05/12/2007 08:00
To: Rich Rauenzahn [EMAIL PROTECTED]
Cc: Romain PICHARD/Mondeville/VIC/[EMAIL PROTECTED],
backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Compression level

Rich writes:

 I don't think BackupPC will update the pool with the smaller file even
 though it knows the source was identical, and some tests I just did
 backing up /tmp seem to agree.  Once compressed and copied into the
 pool, the file is not updated with future higher compressed copies.
 Does anyone know something otherwise?

You're right.

Each file in the pool is only compressed once, at the current
compression level.  Matching pool files is done by comparing
uncompressed file contents, not compressed files.

It's done this way because compression is typically a lot more
expensive than uncompressing.  Changing the compression level
will only apply to new additions to the pool.

To benchmark compression ratios you could remove all the files
in the pool between runs, but of course you should only do that
on a test setup, not a production installation.

Craig



 


Re: [BackupPC-users] Compression

2006-05-11 Thread David Rees

On 5/11/06, Lee A. Connell [EMAIL PROTECTED] wrote:


I noticed while monitoring BackupPC that it doesn't seem to compress on the
fly; is this true? I am backing up 40 GB worth of data on a server, and as
the backup runs I monitor the disk space usage on the mount point, and from
that information it doesn't seem like compression is happening on the fly.

Does compression happen after the backup completes?


Whether or not compression is an option during the data transfer
depends on the transfer method. Currently, the only backup method
which supports compression over the network is ssh+rsync, and that
relies on ssh to do the compression. All others will send the data
over the network uncompressed.

BackupPC gets the data in uncompressed form, so it will compress the
data at that point if compression is enabled.
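
[Illustrative sketch, not part of the original message: the server-side
compression David describes is controlled by one setting in config.pl; 0
disables it and 1-9 trade CPU time for pool space, with 3 a commonly used
value.]

  $Conf{CompressLevel} = 3;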

-Dave


---
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnkkid0709bid3057dat1642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/