Hello,
I installed BackupPC 3.0.0 on a Fedora Core 6 server in order to back up a
Windows XP client.
For your information, I use rsyncd.
I'm testing the different compression levels to compare them.
The test is a full backup of 3.00 GB, comprising more than 400 files.
So far I have tested levels 1 through 6 and no compression (i.e. level 0),
and the difference between the levels is very small.
For example:

  Level 0 (none): time 57.4 min / size 3055.1 MB / compression 0.0 %
  Level 1: time 63.0 min / before 3055.1 MB / after 2997.4 MB / compression 1.9 %
  Level 2: time 65.9 min / before 3055.1 MB / after 2996.3 MB / compression 1.9 %
  Level 3: time 58.0 min / before 3055.1 MB / after 2995.2 MB / compression 2.0 %
  Level 4: time 59.2 min / before 3055.1 MB / after 2994.6 MB / compression 2.0 %
  Level 5: time 62.8 min / before 3055.1 MB / after 2993.6 MB / compression 2.0 %
  Level 6: time 59.3 min / before 3055.1 MB / after 2993.0 MB / compression 2.0 %
OK, the data is compressed slightly more as the level increases, but the
difference is very small.
Do you think these results are normal?
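For what it's worth, such flat ratios are typical when the files are already compressed (media, archives, etc.). A quick sketch with Python's zlib, using random bytes as a stand-in for incompressible data (not your actual files), shows the same flat curve across levels:

```python
import os
import zlib

# Random bytes are incompressible, much like photos or zip archives.
data = os.urandom(1_000_000)

for level in range(0, 7):
    out = zlib.compress(data, level)
    # For incompressible input, every level lands near 100% of the original size.
    print(f"level {level}: {len(out) / len(data):.4f} of original size")
```

On compressible data (text, logs, office documents) the same loop would show a much larger spread between levels.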
Thanks for your help.
Have a nice day.
Romain
Holger Parplies <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
15/11/2007 11:24
To: dan <[EMAIL PROTECTED]>
cc: [email protected]
Subject: Re: [BackupPC-users] adjusting compression level
Hi,
dan wrote on 14.11.2007 at 22:17:46 [Re: [BackupPC-users] adjusting
compression level]:
> actually, i would consider it some level of bug that SMB can beat
> rsync in certain tasks in backuppc.
I think the thing you're misunderstanding is that SMB has a higher
transfer
rate simply *because* it transfers more data. If the backup takes the same
time (needed for scanning files, disk seeks ...), but you transfer twice
the
amount of data over the link, you get twice the transfer rate, right? Now,
is that a good thing or a bad thing?
Aside from that, rsync *definitely* has a computational overhead that may
or
may not affect transfer speed. Though it has been stated otherwise, I
still
believe that for local type backups (where link speed is *not* a limiting
factor), tar/SMB should be faster than rsync. The only question is whether
the system call interface is not already a limiting factor (meaning "local
type backups" do not exist :). If rsync checksum calculations (done in
user
space) were, in fact, faster than passing the data to kernel space for
transfer (and back) (and examining the data on the receiving end), then
even
localhost transfers could profit from rsync. But, again, time would be
spent
for calculation rather than data transfer, thus lowering the data rate
value
(while processing the same amount of data in the same or slightly less
time!).
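To make the computation-versus-transfer trade-off concrete, here is a sketch of the kind of weak rolling checksum the rsync algorithm computes in user space over every byte of every file (a simplification of rsync's two 16-bit sums, not BackupPC's actual code):

```python
def weak_checksum(block: bytes) -> int:
    """rsync-style weak checksum: two 16-bit sums packed into 32 bits."""
    a = sum(block) & 0xFFFF
    # Each byte is weighted by its distance from the end of the block.
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) & 0xFFFF
    return a | (b << 16)

def roll(csum: int, old: int, new: int, blocklen: int) -> int:
    """Slide the window one byte in O(1) instead of recomputing the block."""
    a = csum & 0xFFFF
    b = (csum >> 16) & 0xFFFF
    a = (a - old + new) & 0xFFFF
    b = (b - blocklen * old + a) & 0xFFFF
    return a | (b << 16)

data = b"the quick quick brown fox jumps over the lazy dog"
L = 16
c = weak_checksum(data[:L])
c = roll(c, data[0], data[L], L)          # advance the window by one byte
assert c == weak_checksum(data[1:L + 1])  # matches a full recomputation
```

The rolling update is cheap per byte, but it is still CPU work that a plain tar/SMB stream simply does not do, which is the overhead referred to above.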
> I would assume that most of the time, the
> client/server based rsync/rsyncd system would outperform the 2-layer
> SMB system of a transfer protocol and a separate transfer program.
I believe you misunderstand SMB (or its usage within BackupPC).
> On Nov 14, 2007 8:56 PM, Gene Horodecki <[EMAIL PROTECTED]> wrote:
> > It would be nice if the compression, schedule, and transfer method could
> > be set per directory, wouldn't it? Then I'd use samba and no compression
> > for my bigger storage areas that probably won't be open, and
> > rsyncd/compression for other areas..
You can define multiple hosts with a ClientNameAlias pointing to a single
real host and use different configs. You can include one set of
directories
with SMB and no compression in one and a different set with rsync and
compression in the other, if you really want to do that. It probably won't
be a problem in this case, but should be noted anyway:
> > >> On Nov 14, 2007, at 10:19 AM, Gene Horodecki wrote:
> > >> [...]
> > >> I think I was wrong about the rewriting the files. I reread the docs
> > >> (what a concept!) and it says that it's OK to change the compression
> > >> value after things have already been written and it will do the right
> > >> thing -- that is the hash will work since it's taken of the
> > >> uncompressed version of the file.
you won't get the benefits of pooling between compressed and uncompressed
files (meaning if you've got the same file contents once in a compressed
backup
and once in an uncompressed one, it will be stored twice). This also means
that
if you *change* compression (i.e. switch it on or off), you will get new
copies of all files. Eventually, the backups done before the change will
expire, but until then you have duplicate data on your backup pool disk
(and
thus *much* higher storage requirements). If you only have one "test"
backup
done with compression and are now switching compression off, you will want
to
consider deleting the backup rather than waiting for it to expire.
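As a sketch of the ClientNameAlias idea, two per-host config files along these lines could split the shares (all host names, share names and paths here are invented; adapt them to your setup):

```perl
# pc/mypc-smb.pl -- hypothetical host entry "mypc-smb"
$Conf{ClientNameAlias} = 'mypc';        # both entries back up the real host "mypc"
$Conf{XferMethod}      = 'smb';
$Conf{SmbShareName}    = ['bigstore'];  # the large, rarely-opened share
$Conf{CompressLevel}   = 0;             # no compression for this set

# pc/mypc-rsync.pl -- hypothetical host entry "mypc-rsync"
$Conf{ClientNameAlias} = 'mypc';
$Conf{XferMethod}      = 'rsyncd';
$Conf{RsyncShareName}  = ['docs'];      # the compressible working set
$Conf{CompressLevel}   = 3;
```

Both entries then appear as separate "hosts" in BackupPC with their own schedules and transfer settings, while pointing at the same physical machine.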
Note that this is not true for changing *compression level*. If you have
compression *on* (i.e. level > 0), you can change the level (to another
value > 0) and the change will only affect newly compressed files. Ones
already in the pool will not be changed (or duplicated).
Finally, the compressed files already in your pool will not be compressed
again. With photos, I agree that compression has no benefit, unless you
want
to use rsync checksum caching. If you were talking about compressible and
largely static data, I'd point out that you pay the price for compression
mostly on the first backup. Subsequent backups - even full ones - should
be
significantly faster (and you could always lower the compression level to
1
for faster compression of new files).
I hope that's not too confusing, even though I left the flow of discussion
reversed.
Regards,
Holger
_______________________________________________
BackupPC-users mailing list
[email protected]
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/