RE: only 1.3 of 12 GB will transfer from OSX to Linux

2004-07-30 Thread Wallace Matthews
I have done a lot of testing with single files as large as 85 GB and haven't seen any significant issues. My testbeds are Debian Linux and Red Hat Linux. The tests run as root. I have had performance issues with large files, but nothing broke because of the size of the files. That is assuming that

possible writefd_unbuffered error; what am I screwing up this time

2004-06-18 Thread Wallace Matthews
I am trying to execute rsync manually at a remote server to test out --read-batch execution. I created the batch files on another server and then rcp'ed them to the remote server. I had some issues with not having the correct working directory on the remote system when I did an rsh remote rsync
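
For context, batch mode pairs --write-batch on the generating side with --read-batch on the applying side, and --read-batch must be run against the same destination tree the batch was computed for. A minimal sketch with invented paths (the 2.6.x batch run produces several xxx.* files that all need to travel together):

  # on the generating server: update a reference copy and record the batch
  rsync --write-batch=xxx -a /data/src/ /data/ref/
  # rcp the xxx.* batch files to the remote server, then apply them there
  # against its copy of the destination tree:
  rsync --read-batch=xxx -a /data/ref/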

RE: possible writefd_unbuffered error; what am I screwing up this time

2004-06-18 Thread Wallace Matthews
Thanks, Wayne. That resulted in a successful execution. I can now go work out my rsh issue. wally

RE: [Bug 1463] New: poor performance with large block size

2004-06-18 Thread Wallace Matthews
I applied your patch and it has resolved the problem. Thanks, Craig.

RE: Suggested change to --partial usage.

2004-06-17 Thread Wallace Matthews
I am working with individual files that can be as large as 100 GB, and I exclusively use the push model. When there is a broken pipe (usually a timeout or a temporary network problem), it would be nice if the local end could attempt to reopen the pipe and resume building the file. I know this
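
Until something like that exists, a wrapper can approximate it from the outside. A sketch only (retry interval and paths invented), relying on --partial to keep the partially built file between attempts:

  # re-invoke the push until it succeeds, keeping partial files between tries
  until rsync -a --partial --timeout=60 /data/bigfile remote:/data/; do
      echo "transfer interrupted; retrying" >&2
      sleep 30
  done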

RE: [Bug 1463] New: poor performance with large block size

2004-06-17 Thread Wallace Matthews
I apologize to Craig. Chris is correct. I had been reading so many of Chris's highly intelligent e-mails that for some reason my brain ascribed the comment to Chris. But the comment seems to have been right on. I have re-run the experiment with block sizes as small as 3000 (yes, it took a long

RE: rsync copies all files

2004-06-17 Thread Wallace Matthews
I don't agree that you always have to use -c. I have done extensive testing without it and then repeated the tests with it to see how much load it puts on the servers. In my tests, when I am not using -c, I send the resulting file back to the system that originated it, but to a different directory
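
Roughly, that round trip looks like the following (hostnames and paths invented); running it once without -c and once with -c on both legs shows the extra checksum load:

  # push the file out, pull it back into a different directory, and compare
  rsync -a bigfile remote:/scratch/
  rsync -a remote:/scratch/bigfile /tmp/check/
  cmp bigfile /tmp/check/bigfile && echo identical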

RE: how to exclude large files from backup list.

2004-06-16 Thread Wallace Matthews
It is the 55-function variety of Swiss Army knife already, and my boss is asking me to add yet another blade. :-) wally

question

2004-06-14 Thread Wallace Matthews
I have 2 servers. Both have my home directory on a common file server mounted as /home/wally. I have 2.6.2 in my home directory at /home/wally/rsync/rsync-2.6.2, and I am doing a push from one file server to another. My command line has /home/wally/rsync/rsync-2.6.2/rsync as its command. I have
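
When the remote rsync is a private build rather than whatever is on the default PATH, the usual approach is to point --rsync-path at it as well. A sketch with an invented server name:

  /home/wally/rsync/rsync-2.6.2/rsync -av --rsh=rsh \
      --rsync-path=/home/wally/rsync/rsync-2.6.2/rsync \
      /data/src/ server2:/data/dst/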

block check sum sizing

2004-06-14 Thread Wallace Matthews
When I don't specify --block-size but have --write-batch=xxx, I get a xxx.rsync_csum file that is 76 KB in size. The size of the file varies as the size of the reference file is varied. --stats showed matched data that is roughly 6 block lengths based on the square root of the newer file. I
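
For reference, the 2.6.x default appears to pick the block length from roughly the square root of the file size (this heuristic is an assumption inferred from the behavior above, not a quote from the source). A rough worked example:

  file size 100 MB -> block ~ sqrt(100,000,000) ~ 10,000 bytes
                   -> ~10,000 block-checksum entries in the csum file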

RE: question

2004-06-14 Thread Wallace Matthews
with a 2.5.7 remote system. I have updated all my servers to 2.6.2, and I will rerun my scripts. Hopefully I will see improved results. wally

stalling during delta processing

2004-06-14 Thread Wallace Matthews
I have a 29 GB full backup on a remote server (let's call it fedor) that is called Kbup_1. I have a 1.3 GB incremental backup on my local server. I have rsync 2.6.2 on both servers. Both are Red Hat Linux 9.1 on i686 hardware platforms. I issue the command time rsync -avv --rsh=rsh --stats
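
The full invocation is truncated above; a plausible shape, with invented paths, would be a push of the incremental against the remote full image:

  time rsync -avv --rsh=rsh --stats /backups/incr_1 fedor:/backups/Kbup_1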

reporting a bug

2004-06-11 Thread Wallace Matthews
Problem with --backup. I am doing this the old-fashioned way, via e-mail. I deal with Bugzilla too much and would prefer not to open yet another Bugzilla account that generates more e-mail that I don't need to sift through, especially when it looks very similar to in-house Bugzilla stuff. I am

what am I doing wrong

2004-06-11 Thread Wallace Matthews
I am seeing some rather strange behavior when syncing 2 directories on the same system using 2.6.2. The older file is the image of a full backup and is 29 GB in size. The new image is a slice of an incremental backup and is 101 MB in size. The command line is: time
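
The truncated command presumably looks something like the following local-to-local run (paths invented; the follow-up below mentions that --write-batch was in play to capture the deltas):

  time rsync -avv --stats --write-batch=slice /backups/incr_slice /backups/full_image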

RE: what am I doing wrong

2004-06-11 Thread Wallace Matthews
case. My only purpose for using it was to create delta files that I could then send to remote system(s) to create incrementals. wally

RE: Rsync for program loading on embedded systems

2004-06-02 Thread Wallace Matthews
I have been following this thread. I am working on rsync for an embedded application, but it has nothing to do with program loading. Donovan recently provided some formulas on figuring out the required checksum size relative to file size and acceptable failure rate. In the formulas, he
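
The usual birthday-style bound behind formulas of that kind (my paraphrase, not necessarily Donovan's exact form) is:

  P(false block match) ~ (blocks in file * checksums tested) / 2^bits

so doubling the file size roughly quadruples the odds of a spurious match unless about 2 bits are added to the strong checksum.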

re: bwlimit=

2004-05-26 Thread Wallace Matthews
Wayne replied to my original note, which said that in a special situation I was using to probe rsync and build a behavioral model, --bwlimit= resulted in bimodal behavior around a value of 4000 KB/s. He responded with a patch that I have tested in a limited way. I have a push scenario

question about --bwlimit=

2004-05-21 Thread Wallace Matthews
I am doing some benchmarking of rsync. I am using the --bwlimit= option to throttle rsync down to predict its operation over slow communications links. I am using rsync 2.6.2 from the release site without any patches. I downloaded the release rather than pulling from the CVS tree. I have 2

re: question about --bwlimit=

2004-05-21 Thread Wallace Matthews
Since --bwlimit depends upon sleep(1 second), I repeated the experiment with a file that was 383 MB, so that when I am running unthrottled it takes significantly longer than a second (i.e., ~50 seconds) to complete. I get the same bimodal behavior but with different values for 4000 and 4001
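
The one-second sleep granularity explains the cliff: the limiter writes a chunk and then sleeps a whole second whenever it is ahead of budget, so the effective rate snaps between never sleeping and sleeping on every write. For scale (paths invented), at --bwlimit=4000 (KB/s) the 383 MB file should take about 383,000 / 4,000 ~ 96 seconds if the budget were tracked exactly:

  time rsync -a --bwlimit=4000 /data/big.file remote:/data/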

How match.c hash_search works with multiple checksums that have identical tags

2004-01-26 Thread Wallace Matthews
I am trying to understand how match.c works. I am reading the code, and something doesn't look quite right. This is usually a sign that I am missing something obvious. Here is what I see. build_hash_table uses qsort to order targets in ascending order of (tag, index) into the array of
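
For anyone reading along, the scheme as described works roughly like the sketch below. This is a simplified illustration of the idea, not the actual match.c code; the tag hash in particular is an invented stand-in:

  /* Each target pairs a 16-bit tag (a hash of the 32-bit rolling checksum)
   * with the index of the block sum it came from; qsort puts all targets
   * sharing a tag next to each other. */
  #include <stdlib.h>
  #include <stdint.h>

  #define TABLESIZE (1 << 16)
  #define gettag(sum) ((uint16_t)(((sum) >> 16) ^ ((sum) & 0xffff)))

  struct target { uint16_t tag; int index; };

  static int compare_targets(const void *a, const void *b) {
      const struct target *t1 = a, *t2 = b;
      if (t1->tag != t2->tag) return (int)t1->tag - (int)t2->tag;
      return t1->index - t2->index;
  }

  /* tag_table[t] = index of the first sorted target carrying tag t, or -1 */
  static void build_hash_table(struct target *targets, int n, int *tag_table) {
      qsort(targets, n, sizeof *targets, compare_targets);
      for (int t = 0; t < TABLESIZE; t++) tag_table[t] = -1;
      for (int i = n - 1; i >= 0; i--)   /* backwards, so the first hit wins */
          tag_table[targets[i].tag] = i;
  }

hash_search then starts at tag_table[gettag(rolling_sum)] and walks forward while the tag still matches, checking the full rolling checksum and then the strong checksum for each candidate block.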