Hi,

dan wrote on 14.11.2007 at 22:17:46 [Re: [BackupPC-users] adjusting compression level]:
> actually, i would consider it some level of bug that SMB can beat
> rsync in certain tasks in backuppc.
I think the thing you're misunderstanding is that SMB has a higher transfer rate simply *because* it transfers more data. If the backup takes the same time (needed for scanning files, disk seeks ...), but you transfer twice the amount of data over the link, you get twice the transfer rate, right? Now, is that a good thing or a bad thing?

Aside from that, rsync *definitely* has a computational overhead that may or may not affect transfer speed. Though it has been stated otherwise, I still believe that for local type backups (where link speed is *not* a limiting factor), tar/SMB should be faster than rsync. The only question is whether the system call interface is not already a limiting factor (meaning "local type backups" do not exist :). If rsync checksum calculations (done in user space) were, in fact, faster than passing the data to kernel space for transfer (and back) (and examining the data on the receiving end), then even localhost transfers could profit from rsync. But, again, time would be spent on calculation rather than data transfer, thus lowering the data rate value (while processing the same amount of data in the same or slightly less time!).

> I would assume that most of the time, the
> client/server based rsync/rsyncd system would outperform the 2 layer
> SMB system of a transfer protocol and a separate transfer program.

I believe you misunderstand SMB (or its usage within BackupPC).

> On Nov 14, 2007 8:56 PM, Gene Horodecki <[EMAIL PROTECTED]> wrote:
> > It would be nice if the compression, schedule, and transfer method could be
> > set per directory, wouldn't it? Then I'd use samba and no compression for
> > my bigger storage areas that probably won't be open, and rsyncd/compression
> > for other areas..

You can define multiple hosts with a ClientNameAlias pointing to a single real host and use different configs.
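For example, the two per-host config files might look something like this (a sketch only -- the host name, share names, and compression level are made up for illustration; both "hosts" would also need entries in the hosts file):

```perl
# pc/bigdata.pl -- back up the large storage areas via SMB, uncompressed
$Conf{ClientNameAlias} = 'realhost.example.com';  # the one real machine
$Conf{XferMethod}      = 'smb';
$Conf{SmbShareName}    = ['storage'];             # example share name
$Conf{CompressLevel}   = 0;                       # no compression

# pc/userdirs.pl -- back up the other areas via rsyncd, compressed
$Conf{ClientNameAlias} = 'realhost.example.com';  # same real machine
$Conf{XferMethod}      = 'rsyncd';
$Conf{RsyncShareName}  = ['home'];                # example rsyncd module
$Conf{CompressLevel}   = 3;
```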
You can include one set of directories with SMB and no compression in one and a different set with rsync and compression in the other, if you really want to do that.

It probably won't be a problem in this case, but it should be noted anyway:

> > >> On Nov 14, 2007, at 10:19 AM, Gene Horodecki wrote:
> > >> [...]
> > >> I think I was wrong about the rewriting the files. I reread the docs (what a
> > >> concept!) and it says that it's OK to change the compression value after
> > >> things have already been written and it will do the right thing -- that is
> > >> the hash will work since it's taken of the uncompressed version of the file.

You won't get the benefits of pooling between compressed and uncompressed files (meaning if you've got the same file contents once in a compressed backup and once in an uncompressed one, it will be stored twice). This also means that if you *change* compression (i.e. switch it on or off), you will get new copies of all files. Eventually, the backups done before the change will expire, but until then you have duplicate data on your backup pool disk (and thus *much* higher storage requirements). If you only have one "test" backup done with compression and are now switching compression off, you may want to consider deleting that backup rather than waiting for it to expire.

Note that this is not true for changing the *compression level*. If you have compression *on* (i.e. level > 0), you can change the level (to another value > 0) and the change will only affect newly compressed files. Files already in the pool will not be changed (or duplicated). Finally, the compressed files already in your pool will not be compressed again.

With photos, I agree that compression has no benefit, unless you want to use rsync checksum caching. If you were talking about compressible and largely static data, I'd point out that you pay the price for compression mostly on the first backup.
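The pooling behaviour described above can be illustrated with a toy model (this is *not* BackupPC's actual implementation -- the real pool hash and on-disk layout differ -- but the key point is the same: the compressed and uncompressed pools are separate, while the compression *level* is not part of a file's pool identity):

```python
import hashlib
import zlib

# Toy pool: maps (compressed?, digest of *uncompressed* content) -> stored bytes.
# Simplified stand-in for BackupPC's separate pool/ and cpool/ trees.
pool = {}

def store(data, compress):
    """Store one file; return True if a new pool file had to be written."""
    digest = hashlib.md5(data).hexdigest()  # hash of the uncompressed content
    key = (compress, digest)                # note: level is NOT part of the key
    if key in pool:
        return False                        # identical content already pooled
    pool[key] = zlib.compress(data) if compress else data
    return True

photo = b"JPEG data " * 1000
print(store(photo, compress=True))    # True:  first compressed backup writes a pool file
print(store(photo, compress=True))    # False: later compressed backups pool against it
print(store(photo, compress=False))   # True:  switching compression off stores it again
```

Since the compression level never enters the key, changing it from, say, 3 to 1 would not duplicate anything already pooled; switching between compressed and uncompressed does.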
Subsequent backups - even full ones - should be significantly faster (and you could always lower the compression level to 1 for faster compression of new files).

I hope that's not too confusing, even though I left the flow of discussion reversed.

Regards,
Holger

_______________________________________________
BackupPC-users mailing list
[email protected]
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
