I have done a lot of testing with single files as large as 85 Gig and haven't seen
any significant issues. My testbeds are Debian Linux and Red Hat Linux. The tests
run as root. I have had performance issues with large files, but nothing broke because
of the size of the files. That is assuming that
I am trying to execute rsync manually on a remote server to test out --read-batch
execution.
I created the batch files on another server and then rcp'ed them to the remote
server. I had some issues with not having the correct working directory on the
remote system when I did an rsh remote rsync
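For reference, the workflow looks roughly like the sketch below. Host, path, and
batch names are placeholders, and the --read-batch syntax is my reading of the
2.6.2 usage:

    # build the batch files against a local reference tree
    rsync --write-batch=mybatch -a /src/dir/ /ref/dir/
    # ship them to the remote machine
    rcp mybatch.rsync_* remote:/tmp/
    # replay them there; the cd matters because the batch files are
    # opened relative to the current working directory
    rsh remote 'cd /tmp && rsync --read-batch=mybatch -a /dest/dir/'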
Thanks Wayne. That resulted in a successful execution.
I can now go work out my rsh issue.
wally
-----Original Message-----
From: Wayne Davison [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 18, 2004 1:55 PM
To: Wallace Matthews
Cc: [EMAIL PROTECTED]
Subject: Re: possible writefd_unbuffered error
I applied your patch and it has resolved the problem.
Thanks, Craig.
-----Original Message-----
From: Craig Barratt [mailto:[EMAIL PROTECTED]]
Sent: Thursday, June 17, 2004 11:48 PM
To: Wallace Matthews
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: [Bug 1463] New: poor performance
I am working with individual files that can be as large as 100 Gig, and I exclusively
use the push model. When there is a broken pipe (usually a timeout or a temporary
network problem), it would be nice if the local end could attempt to reopen the pipe
and resume building the file. I know this
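As an aside on that wish: there is no single option for it that I know of, but a
wrapper loop around --partial approximates it (a sketch, with placeholder names):

    # --partial keeps the partially transferred file on the receiver,
    # so each retry can use it as a basis and avoid resending matched data
    until rsync -av --partial --timeout=300 bigfile remote:/data/; do
        sleep 60    # back off before retrying
    done

It is not a true reopen-and-resume of the same connection, but it avoids restarting
the transfer from zero.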
I apologize to Craig. Chris is correct. I had been reading so many of Chris's highly
intelligent e-mails that for some reason my brain ascribed the comment to Chris.
But the comment seems to have been right on. I have re-run the experiment with block
sizes as small as 3000 (yes, it took a long
I don't agree that you always have to use -c. I have done extensive testing without it
and then repeated the tests with it to see how much load it puts on the servers. In my
tests, when I am not using -c, I send the resulting file back to the system that
originated it, but to a different directory
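The comparison runs look roughly like this (paths and host names are placeholders):

    # without -c: the quick check (size and mtime) decides what to send
    time rsync -av --block-size=3000 bigfile remote:/scratch/
    # with -c: whole-file checksums are computed on both ends first,
    # which is where the extra server load shows up
    time rsync -avc --block-size=3000 bigfile remote:/scratch/
    # verification pass: pull the result back into a different local
    # directory and compare byte-for-byte
    rsync -av remote:/scratch/bigfile /verify/
    cmp bigfile /verify/bigfile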
It is the 55-function variety of Swiss Army knife already, and my boss is asking me to
add yet another blade. :-)
wally
-----Original Message-----
From: Chris Shoemaker [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, June 15, 2004 1:51 PM
To: Wayne Davison
Cc: [EMAIL PROTECTED]
Subject: Re: how to
I have 2 servers. Both have my home directory on a common file server mounted as
/home/wally. I have 2.6.2 in my home directory in
/home/wally/rsync/rsync-2.6.2 and I am doing a push from one file server to another.
My command line has /home/wally/rsync/rsync-2.6.2/rsync as its command. I have
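Concretely, the invocation is shaped like the following (host names are
placeholders). Since /home/wally is mounted on both machines, the same path can be
handed to --rsync-path for the far end:

    /home/wally/rsync/rsync-2.6.2/rsync -av --rsh=rsh \
        --rsync-path=/home/wally/rsync/rsync-2.6.2/rsync \
        /local/data/ server2:/remote/data/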
When I don't specify --block-size but do have --write-batch=xxx, I get an xxx.rsync_csum
file that is 76 Kbytes in size.
The size of that file varies as the size of the reference file is varied. --stats
showed matched data that is roughly 6 block lengths,
based on the square root of the newer file.
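For reference, my understanding of the default from the source (generator.c) is that
when --block-size is absent, the block length is picked from the file length roughly
as:

    block_size ≈ sqrt(file_length), with a floor of 700 bytes
    e.g. a 100 Mbyte file → ~10 Kbyte blocks → ~10,000 checksum entries

so the csum file growing with the reference file is exactly what I would expect. (The
rounding details may differ; this is a sketch, not the exact code.)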
I
with a 2.5.7 remote system.
I have updated all my servers to 2.6.2 and I will rerun my scripts. Hopefully, I will
see improved results.
wally
-----Original Message-----
From: Wayne Davison [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 14, 2004 10:20 AM
To: Wallace Matthews
Cc: [EMAIL PROTECTED]
Subject
I have a 29 Gig full backup on a remote server (let's call it fedor) that is called
Kbup_1.
I have a 1.3 Gig incremental backup on my local server.
I have rsync 2.6.2 on both servers. Both are Red Hat Linux 9.1 on i686 hardware
platforms.
I issue the command: time rsync -avv --rsh=rsh --stats
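The rest of that command line is cut off above; for illustration, a run of that shape
with placeholder paths would be:

    # push the local incremental toward the remote full image (paths hypothetical)
    time rsync -avv --rsh=rsh --stats /local/backups/incr_1.3G fedor:/backups/Kbup_1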
problem with --backup
I am doing this the old-fashioned way, via e-mail.
I deal with Bugzilla too much and would prefer not to open yet another Bugzilla
account that generates more e-mail that I don't need to sift through, especially when
it looks very similar to in-house Bugzilla stuff.
I am
I am seeing some rather strange behavior with a sync of 2 directories on the same
system using 2.6.2.
The older file is the image of a full backup and is 29 Gig in size. The new image is a
slice of an incremental backup and is 101 Meg in size.
The command line is:
time
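For illustration, a local-to-local run of that shape (with placeholder paths) would
look like:

    time rsync -avv --stats /backups/new_slice_101M /backups/full_image_29G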
case. My only purpose for using it was to create delta files that I could then send to
remote system(s) to create incrementals.
wally
-----Original Message-----
From: Chris Shoemaker [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 11, 2004 10:31 AM
To: Wallace Matthews
Cc: [EMAIL PROTECTED]
I have been following this thread.
I am working on rsync for an embedded application, but it has nothing to do with
program loading.
Donovan recently provided some formulas on figuring out the required checksum size
relative to file size and acceptable failure rate.
In the formulas, he
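Without his exact formulas at hand, the general shape of such estimates is a
birthday-style bound (a sketch, not his actual derivation):

    expected false matches ≈ comparisons × 2^(-checksum_bits)
    comparisons ≈ (bytes scanned in the new file) × (blocks in the hash table)

so the checksum size only needs to grow with the logarithm of the file size to hold
the failure rate constant.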
Wayne replied to my original note, which said that in a special situation I was using
to probe rsync and build a behavioral model, --bwlimit= resulted in bimodal behavior
around a value of 4000 kbytes/sec.
He responded with a patch that I have tested in a limited way. I have a push scenario
I am doing some benchmarking of rsync. I am using the --bwlimit= option to throttle
rsync down to predict its operation over slow communications links. I am using rsync
2.6.2 from the release site without any patches. I downloaded the release rather than
pulling from the CVS tree.
I have 2
Since --bwlimit depends upon a one-second sleep, I repeated the experiment with a file
that was 383 Mbytes, so that when I am running unthrottled it takes significantly
longer than a second (i.e., ~50 seconds) to complete. I get the same bimodal behavior,
but with different values in place of 4000 and 4001
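The timing runs themselves are simple (file and host names are placeholders):

    # unthrottled baseline: ~50 seconds for the 383 Mbyte file
    time rsync -av --rsh=rsh bigfile remote:/scratch/
    # throttled runs on either side of the knee
    time rsync -av --rsh=rsh --bwlimit=4000 bigfile remote:/scratch/
    time rsync -av --rsh=rsh --bwlimit=4001 bigfile remote:/scratch/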
I am trying to understand how match.c works.
I am reading the code, and something doesn't look quite right. This is usually a sign
that I am missing something obvious.
Here is what I see:
build_hash_table uses qsort to order targets in ascending order of (tag, index) into
the array of
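For anyone following along, here is the layout as I read it from the 2.6.x source (a
sketch from my reading, not authoritative):

    targets[]  : one (tag, index) pair per block of the reference file,
                 qsort'ed ascending by tag, then by index
    tag_table[]: for each 16-bit tag, the offset of the first targets[]
                 entry carrying that tag (or a null marker if none)
    lookup     : compute the tag of the rolling checksum at the current
                 offset, jump to tag_table[tag], and scan targets[]
                 entries while the tag still matches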