I, too, was disappointed with rsync's performance when no changes were
required (23 minutes to verify that a system of about 3200 files was
identical). I wrote a little client/server python app which does the
verification, and then hands rsync the list of files to update. This reduced
the optimal
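The verification pass described here could be sketched along these lines. This is only a guess at the poster's unpublished app — the function names and manifest format are hypothetical, and MD5 stands in for whatever digest it actually used:

```python
import hashlib
import os

def build_manifest(root):
    """Map each path under root (relative) to an MD5 digest of its contents."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            manifest[rel] = h.hexdigest()
    return manifest

def files_to_update(local, remote):
    """Paths missing locally or whose checksums differ -- the list to hand rsync."""
    return sorted(p for p, digest in remote.items()
                  if local.get(p) != digest)
```

The resulting list can then be fed to rsync a file at a time, or via `--files-from` in rsync versions later than the 2.4.x discussed in this thread.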
I was at first, but then removed it. The results were still insufficiently
fast.
Were you using the -c option of rsync? It sounds like you were, and it's
extremely slow. I knew somebody who once went to extraordinary lengths to
avoid the overhead of -c, making a big patch to rsync to
On Thu, Nov 29, 2001 at 12:59:00PM -0600, Keating, Tim wrote:
It seems to me the new options --read-batch and --write-batch should go
a long way towards reducing any time spent in creation of checksums and
file lists, so you should definitely give 2.4.7pre4 a try. This is just
a guess since I haven't actually used those options myself, but seems
worth
Keating, Tim [[EMAIL PROTECTED]] writes:
- If there's a mismatch, the client sends over the entire .checksum
file. The server does the compare and sends back a list of files to
delete and a list of files to update. (And now I think of it, it
would probably be better if the server just sent
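The compare step described above amounts to a set difference over two {path: checksum} maps; a minimal sketch (the function name is hypothetical, not from the poster's app):

```python
def diff_checksums(client, server):
    """Given {path: checksum} maps from client and server, return
    (to_delete, to_update): paths the client has but the server doesn't
    are deleted; paths missing on the client or with a differing
    checksum are updated."""
    to_delete = sorted(p for p in client if p not in server)
    to_update = sorted(p for p, c in server.items() if client.get(p) != c)
    return to_delete, to_update
```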
Linux. Other files are ok, even files that are similar in size; I have a
700 MB file that seems fine.
I'm aware of the 2 GB limit, but these files aren't close to 2 GB.
Note that I get very bad behavior with 2.4.7pre4: it doesn't even attempt
to copy. 2.4.6 just seems to fail at the end.
In my particular case, it is reasonable to assume that the size and
timestamp will change when the file is updated. (We are looking at it as a
patching mechanism.)
Right now it's actually using update time only; I should modify it to check
the file size as well.
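Checking size alongside mtime costs nothing extra, since both come back from the same stat() call. A sketch of that quick check, mirroring rsync's default behavior (names are hypothetical):

```python
import os

def quick_check_differs(src_path, dst_path):
    """Approximate rsync's default quick check: a file counts as changed
    if its size or modification time differs; contents are never read."""
    try:
        dst = os.stat(dst_path)
    except FileNotFoundError:
        return True  # missing on the destination: always transfer
    src = os.stat(src_path)
    return src.st_size != dst.st_size or int(src.st_mtime) != int(dst.st_mtime)
```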
Keating, Tim [[EMAIL PROTECTED]] writes:
Is there a way you could query your database to tell you which
extents have data that has been modified within a certain timeframe?
Not in any practical way that I know of. It's not normally a major
hassle for us since rsync is used for a central
Hello Dave,
On Thu, 29 Nov 2001, Dave Dykstra wrote:
What version of rsync are you using? The "cannot create" message is coming
from receiver.c. Assuming you're using a released version of rsync and not
a development version, it appears that the mktemp library call is creating
the file and
On Thu, Nov 29, 2001 at 11:46:30PM +0100, Rok Krulec wrote:
What version of sources is that which had mkstemp at line 121 of
syscall.c? It's surprising that you could just replace one with the other,
as mkstemp is supposed to open the file and mktemp is not supposed to. It
sounds like you have some inconsistent version of the sources.
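The same distinction exists in Python's tempfile module, which makes it easy to demonstrate: mkstemp atomically creates and opens the file, while the deprecated mktemp merely invents a name.

```python
import os
import tempfile

# mkstemp creates AND opens the file in one atomic step: the returned fd
# already refers to the file, so no other process can claim the name.
fd, path = tempfile.mkstemp()
assert os.path.exists(path)   # the file exists as soon as the call returns
os.write(fd, b"data")
os.close(fd)
os.remove(path)

# mktemp only generates a name; nothing is created.  Opening it later
# leaves a race window, which is why the two calls are not interchangeable.
name = tempfile.mktemp()
assert not os.path.exists(name)
```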
On Thu, Nov 29, 2001 at 11:02:07AM -0500, Alberto Accomazzi wrote:
...
These numbers show that reading the filenames this way rather than using
the code in place to deal with the include/exclude list cuts the startup
time down to 0 (from 1hr). The actual sending of the filenames is down
from
On 29 Nov 2001, Jeremy Hansen [EMAIL PROTECTED] wrote:
Linux.
By the way, when reporting bugs like this it is good to give a more
specific description, like Red Hat 7.1 on x86.
Sorry. This is a Red Hat 6.2 machine. 2.2.19 kernel. Both ends are the same.
On 29 Nov 2001, Jeremy Hansen [EMAIL PROTECTED] wrote:
Sorry. This is a Red Hat 6.2 machine. 2.2.19 kernel. Both ends are the
same.
OK, thanks.
unexpected EOF in read_timeout
That error often means the ssh connection is failing.
The server is running rsyncd and client
On Fri, 30 Nov 2001, Martin Pool wrote:
On 29 Nov 2001, Jeremy Hansen [EMAIL PROTECTED] wrote:
rsync -avz --progress rsync://localhost/apache_logs/access_log .
tridge just reminded me that -a does *not* detect hardlinks. You need
-H too.
Hmm, there are no hard links involved here.
Actually, I did it this time without stunnel and got the same results:
Nov 29 16:49:49 rio rsyncd[24007]: transfer interrupted (code 20) at rsync.c(229)
It's literally right at the end of the transfer. Somewhere in renaming
the temp file or something??
-jeremy
On 29 Nov 2001, Dave Madole [EMAIL PROTECTED] wrote:
It also seems that at one point rsync wasn't recognizing that two of the
800M files were actually hard linked together, although in the same run it
did fine with smaller files (of the same name in different directories).
That's an
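For reference, two directory entries are hard links to the same file exactly when they share a device and inode number (st_dev, st_ino), which is the information rsync's -H handling relies on. A minimal check (function name hypothetical):

```python
import os

def are_hardlinked(path_a, path_b):
    """True if both paths refer to the same inode on the same device."""
    a, b = os.stat(path_a), os.stat(path_b)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)
```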
On 29 Nov 2001, Ian Kettleborough [EMAIL PROTECTED] wrote:
1. How much memory does each file to be copied need? Obviously I have too many
files.
Hard to say exactly. On the order of a hundred bytes per file.
2. Why does this command work:
rsync -ax /usr/xx /backup/usr/
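Taking the order-of-magnitude figure above at face value, the file-list memory footprint is easy to estimate (the 100-byte constant is a rough guess from this thread, not an exact number):

```python
def flist_memory_mb(file_count, bytes_per_file=100):
    """Rough rsync file-list memory estimate; 100 bytes per entry is the
    order-of-magnitude figure from the thread, not a measured constant."""
    return file_count * bytes_per_file / (1024 * 1024)
```

By this estimate a million files would need on the order of 95 MB, while the 3200-file system mentioned earlier in the thread would need well under 1 MB, so file-list memory was not the bottleneck there.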