Alex Janssen writes:
I've been using rsync to create backup copies of all my data files on my
Linux laptop to my Windows XP Home-based desktop for about 6 months
now. It's been working as it should, copying only files that changed since
the last backup. The first backup I ran after the time
Harry Putnam writes:
Yeah, nice write up. Am I correct in thinking that since I've gone
thru the long backup I'm now good till next time change?
Yes.
Further, if I converted the fs on the external drive to NTFS or created
an ext3 partition, this would never have happened?
Yes.
Craig
--
Harry Putnam writes:
I've rsynced two directory structures back and forth a few times.
[snip]
The file systems involved are NTFS (XP) on one end and FAT32 on the
external drive.
This is the DST problem with how FAT32 represents mtime.
FAT32 uses local time, so the Unix-derived (UTC) mtime
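The mismatch described above is what rsync's --modify-window option exists to tolerate. Here is a minimal sketch of that kind of timestamp comparison, with made-up values (illustrative only, not rsync's actual code):

```python
# Sketch of a tolerant mtime comparison in the spirit of rsync's
# --modify-window option (illustrative, not rsync's code). FAT32 stores
# mtimes in local time with 2-second resolution, so a DST change shifts
# every stored timestamp by one hour relative to UTC-based Unix mtimes.

def times_match(src_mtime: int, dst_mtime: int, modify_window: int = 0) -> bool:
    """Treat two mtimes as equal if they differ by at most modify_window seconds."""
    return abs(src_mtime - dst_mtime) <= modify_window

# With the default window of 0, a one-hour DST shift makes every file
# look changed:
assert not times_match(1_000_000_000, 1_000_003_600)
# --modify-window=1 absorbs FAT's 2-second rounding:
assert times_match(100, 101, modify_window=1)
# and a window of 3600 would absorb the DST shift itself:
assert times_match(1_000_000_000, 1_000_003_600, modify_window=3600)
```

Converting the external drive to NTFS or ext3, as suggested above, sidesteps the problem entirely, since those filesystems store timestamps independent of the local-time offset.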
[EMAIL PROTECTED] writes:
Gang, I've read the manual(s), surfed google, spent about 5 hours on this,
to no avail
I'm trying to run rsync in server mode and it appears to start normally,
but it refuses all connections (refuses connection when I tried telnetting
in on localhost 873!).
Interesting ideas.
I envision the VFS Change Logger as a (hopefully very thin) middle-ware
that sits between the kernel's VFS interfaces and a real filesystem, like
ext3, reiser, etc. The VFS Change Logger will pass VFS calls to the
underlying filesystem driver, but it will make note of
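As a hypothetical user-space illustration of that pass-through idea (a real implementation would interpose on the kernel's VFS; the class and method names here are invented):

```python
import os
import tempfile

class ChangeLogger:
    """Thin pass-through layer: forwards writes to the real filesystem
    while making note of which paths changed (illustrative sketch only)."""

    def __init__(self, root: str):
        self.root = root
        self.changed: set[str] = set()   # paths touched since last drain

    def write(self, path: str, data: bytes) -> None:
        with open(os.path.join(self.root, path), "wb") as f:
            f.write(data)                # forward to the underlying filesystem
        self.changed.add(path)           # ...and record the change

    def drain(self) -> set[str]:
        """Hand the accumulated change set to a consumer (e.g. a backup tool)."""
        out, self.changed = self.changed, set()
        return out

root = tempfile.mkdtemp()
log = ChangeLogger(root)
log.write("inbox.mbox", b"new mail")
assert log.drain() == {"inbox.mbox"}     # a backup tool need only look here
assert log.drain() == set()              # nothing changed since
```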
Lars Karlslund writes:
Also the numbers speak for themselves, as the --whole-file option is
*way* faster than the block-copy method on our setup.
At the risk of jumping into the middle of this thread without
remembering everything that was discussed...
Remember that by default rsync writes a
jim writes:
Thanks for the additional info.
I actually have tried the --no-blocking-io option, but the sync
still hung.
Since no one on Unix-like platforms is reporting an issue, do you
think it may be something in the Cygwin compatibility layer?
Yes, I think so. When I tried to debug
Jose Luis Poza writes:
I have a problem with cwRsync and a question. Does the cwRsync process
(rsync.exe) use close to 100% CPU on Windows 2000 Server, with a high
level of kernel usage?
I have synchronized 11 servers (Unix and Windows) with all their units'
files; that process during
Chris Heller writes:
I ran into a problem today when I tested the system for the first time.
I am rsyncing from a remote Linux host using the following options to
rsync: -avv --rsh=ssh stuff here --exclude-from=path to exclude
file --delete.
The problem is when the files are moved over to
GUZZI, ANTHONY writes:
Without an 'auth users' entry for a module, the sync goes fine. With an 'auth
users' entry, I'm getting the '@ERROR: auth failed on module ' error
message.
Make sure your RSYNC-USERS.TXT file ends in a newline.
Rsync prior to 2.6.2 ignores the last line in the file
if
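The failure mode described above can be sketched as follows (a hypothetical parser, not rsync's actual secrets-file code): a reader that only emits a line when it sees a newline silently drops an unterminated final line.

```python
def buggy_read_lines(text: str) -> list[str]:
    """Line reader with the pre-2.6.2 style bug: a final line lacking a
    trailing newline is silently dropped (illustrative, not rsync's code)."""
    lines, current = [], ""
    for ch in text:
        if ch == "\n":
            lines.append(current)
            current = ""
        else:
            current += ch
    # Bug: 'current' still holds the unterminated last line but is discarded.
    return lines

assert buggy_read_lines("alice\nbob\n") == ["alice", "bob"]
assert buggy_read_lines("alice\nbob") == ["alice"]   # "bob" never authenticates
```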
Steve Bonds writes:
This is what I would expect to see if the VXFS filesystem was not created
with the largefiles option-- but it was. (And I double-checked.) Other
utilities (e.g. dd) can create large files just fine.
I haven't seen anything obviously wrong with write_file or
Don Malloy writes:
I just tried the build from the nightly tar file:
rsync-HEAD-20040720-1929GMT.tar.gz
It failed at 2144075776 bytes each time I tried. I've attached the tail from
the tusc again. Here is the output of rsync:
I haven't been following this thread, so I might be way
Paul Arch writes:
does anyone know if File::RsyncP will operate under activeperl (windows?)
This module is maintained by Craig Barratt, who I noticed is also on this
list :)
I haven't tested it under activeperl, but it does work under
perl + cygwin on WinXX.
Craig
--
To unsubscribe
Chris Shoemaker writes:
Do you see any reason to keep FIXED_CHECKSUM_SEED around? It doesn't
hurt anything, but I don't see a use for it.
So long as the --checksum-seed=N option remains, I'm ok getting
rid of FIXED_CHECKSUM_SEED.
Craig
Wally writes:
I apologize to Craig. Chris is correct.
No problem.
I had been reading so many of Chris's highly intelligent e-mails...
Same here.
But, the comment seems to have been right on. I have re-run the
experiment with block sizes as small as 3000 (yes it took a long
time to
Wallace Matthews writes:
I copy the 29 Gig full backup back into fedor//test/Kibbutz and issue
the command time rsync -avv --rsh=rsh --stats --block-size=181272
/test/Kibbutz/Kbup_1 fedor://test/Kibbutz and it CRAWLS during delta
generation/transmittal at about 1 Megabyte per second.
I have
Wayne Davison writes:
On Sun, May 09, 2004 at 03:35:47AM -0700, Robert Helmer wrote:
If there is an error writing to the remote file due to a permission
denied error, rsync 2.6.1's client exits with an error code of 23, and
an informative error message.
... and no error message logged
[EMAIL PROTECTED] writes:
Have a look at
http://www.itefix.no/phpws/index.php?module=faq&FAQ_op=view&FAQ_id=12
In short :
Right-click My Computer, go to Properties, go to the Advanced tab, click
Environment Variables. In the bottom section (System variables), add the
new entry: CYGWIN, with
Wayne Davison writes:
On Sat, May 15, 2004 at 02:25:11PM -0700, Craig Barratt wrote:
Any feedback on this patch and the possibility of getting it
into CVS or the patches directory?
The file checksum-seed.diff was put into the patches dir on the 2nd of
May. Strangely, I don't seem
Any feedback on this patch and the possibility of getting it
into CVS or the patches directory?
Thanks,
Craig
-- Forwarded message --
To: jw schultz [EMAIL PROTECTED]
From: Craig Barratt [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Date: Sat, 01 May 2004 17:06:10 -0700
Subject: Re
jw schultz writes:
There was some talk last year about adding a --fixed-checksum-seed
option, but no consensus was reached. It shouldn't hurt to make the
seed value constant for certain applications, though, so you can feel
free to proceed in that direction for what you're doing for
Agostino Russo writes:
I have a problem with rsync 2.6 protocol 27 (both client and server)
running over XP via Cygwin and sshd (on the remote machine). It just hangs
almost randomly while transferring files after transferring a few
megabytes, not always on the same file. When the remote
Greger Cronquist writes:
I've used rsync successfully for several years, syncing between two
Windows 2000 servers using daemon mode, but today I stumbled across
something peculiar. I'm using cygwin with rsync 2.6.0 at both ends (the
latest available at this date) and I have a file that
I recently installed and setup cwRsync on a Windows 2000 Server -
http://www.itefix.no/cwrsync/ -, and I was very impressed. I just
followed the instructions on the website and got it working.
I am using it to mirror 30 GB of mailboxes every night (only grabbing
the changes to each
Mauricio writes:
I can't believe this! I am having the very same problem I
had before. For those who do not remember, I was trying to rsync a
file from a Solaris 9 box (kushana) to a NetBSD 1.6.1 box (the rsync
server, katri), without much luck:
[EMAIL PROTECTED] rsync -vz \
?
jw schultz writes:
1. Yes, you may contribute a patch. I favor the idea of
being able to supply a checksum seed.
2. Lets get the option name down to a more reasonable
length. --checksum-seed should be sufficient.
I submitted a patch against 2.5.6pre1 last January for
Jason M. Felice writes:
This patch adds the --link-by-hash=DIR option, which hard links received
files in a link farm arranged by MD4 file hash. The result is that the system
will only store one copy of the unique contents of each file, regardless of
the file's name.
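A minimal sketch of that link-farm idea (a hypothetical helper, not the patch itself; MD5 stands in for the MD4 hash the patch uses, since MD4 is not reliably available in Python's hashlib):

```python
import hashlib
import os
import tempfile

def link_by_hash(received_path: str, farm_dir: str) -> str:
    """Hard-link received_path into a farm keyed by content hash, so each
    unique file body is stored exactly once regardless of name (sketch)."""
    with open(received_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    farm_path = os.path.join(farm_dir, digest)
    if not os.path.exists(farm_path):
        os.link(received_path, farm_path)   # first copy seeds the farm
    else:
        os.unlink(received_path)            # duplicate body: point the name
        os.link(farm_path, received_path)   # at the existing farm copy
    return farm_path

# Demo: two received files with identical contents end up sharing one inode.
tmp = tempfile.mkdtemp()
farm = os.path.join(tmp, "farm")
os.mkdir(farm)
for name in ("a", "b"):
    with open(os.path.join(tmp, name), "wb") as f:
        f.write(b"same bytes")
link_by_hash(os.path.join(tmp, "a"), farm)
link_by_hash(os.path.join(tmp, "b"), farm)
assert os.stat(os.path.join(tmp, "a")).st_ino == os.stat(os.path.join(tmp, "b")).st_ino
assert len(os.listdir(farm)) == 1
```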
(rev 2)
* This
On Mon, Feb 09, 2004 at 09:14:06AM -0500, Jason M. Felice wrote:
I got the go-ahead from the client on my --link-by-hash proposal, and
the seed is making the hash unstable. I can't figure out why the seed
is there, so I don't know whether to circumvent it in my particular case
or calculate a
I just released version 2.0.0beta0 of BackupPC on SourceForge, see
http://backuppc.sourceforge.net/
What is BackupPC? It is an enterprise-grade open-source package for
backing up WinXX and *nix systems to disk. It supports transport via
SMB, tar and now rsync over rsh/ssh and rsyncd. The
I read in the archives that somebody has a faster binary version floating
around. How might I get ahold of it? (If you have it, would it be possible
to e-mail me a copy?)
Fetch 2.5.6 and apply the patch in patches/craigb-perf.diff before you
build it.
Craig
I wasn't aware that it had this. Was it there at the time of the
original discussion (Oct 2002)? The people involved in the discussion
then didn't seem to know this.
I wasn't aware of it in Oct 2002 during that discussion. I saw it in
the code a month or two after that. I haven't checked the
If I try to start rsync from the command line it simply does nothing:
$ rsync --daemon
Administrator@dm-w2ks /usr/bin
$ ps
  PID  PPID  PGID  WINPID  TTY  UID    STIME COMMAND
  480     1   480     480  con  500 04:15:03 /usr/bin/bash
 1428   480  1428
This problem may be discussed now, because in versions before
rsync-2.5.6, the algorithm for removing the so called duplicated files
was broken.
That's why we expect nobody used it anyway in earlier versions - but who
knows..
I agree it should be the last argument that wins, but as Wayne
Certanly, I tried --config
Could you tell me which rsync version you use?
rsync 2.5.5 and rsync 2.5.6 both work fine for me.
Is it possible that rsync is already running as a service?
It won't show up in cygwin's ps. For example, when rsync
is running via cygrunsrv, if I type:
Is it possible to tell rsync to update the blocks of the target file
'in-place' without creating the temp file (the 'dot file')? I can
guarantee that no other operations are being performed on the file at
the same time. The docs don't seem to indicate such an option.
No, it's
I am rsyncing 1tb of data each day. I am finding in my testing that
actually removing the target files each day then rsyncing is faster than
doing a compare of the source-target files then rsyncing over the delta
blocks. This is because we have a fast link between the two boxes, and
James Kilton wrote:
To follow up on this... I found the --stats option and
here's what I'm getting:
Number of files: 36
Number of files transferred: 36
Total file size: 10200816 bytes
Total transferred file size: 10200816 bytes
Literal data: 10200816 bytes
Matched data: 0 bytes
I have several patches that I'm planning to check in soon (I'm waiting
to see if we have any post-release tweaking and/or branching to do).
This list is off the top of my head, but I think it is complete:
And I have several things I would like to work on and submit:
- Fix the MD4 block
Is there any reason why caching programs would need to set the
value, rather than it just being a fixed value?
I think it is hard to describe what this is for and what it should be
set to. Maybe a --fixed-checksum-seed option would make some sense,
or for a caching mechanism to be built in
Block checksums come from the receiver so cached block
checksums are only useful when sending to a server which had
better know it has block checksums cached.
The first statement is true (block checksums come from the receiver),
but the second doesn't follow. I need to cover the case where
Has *anybody* been able to figure out a fix for this that really works?
Why does the receiving child wait in a loop to get killed, rather than
just exit()? I presume cygwin has some problem or race condition in the
wait loop, kill and wait_process().
The pipe to the parent will read 0 bytes
While the idea of rsyncing with compression is mildly
attractive i can't say i care for the new compression
format. It would be better just to use the standard gzip or
other format. If you are going to create a new file type
you could at least discuss storing the blocksums in it so
that
The following code in receiver.c around line 421 (2.5.6pre1) contains
some dead code:
/* we initially set the perms without the
setuid/setgid bits to ensure that there is no race
condition. They are then correctly updated after
the lchown. Thanks to [EMAIL
I have just released the first version 0.10 of File::RsyncP to SourceForge.
See:
http://perlrsync.sourceforge.net
File::RsyncP is a perl implementation of an Rsync client. It is
compatible with Rsync 2.5.5 (protocol version 26). It can send
or receive files, either by running rsync on the
Has anybody seen this? We want to separate the statistics out from the
file list, and we are using tail to grab the end of the file. The command
we run is:
rsync -r -a -z --partial --suffix=.backup --exclude=*.backup \
--stats -v /. 10.1.1.60::cds101/
Watch out for pagefile.sys (I think!)... it won't copy. (Let me know about
any others.)
Most important files won't copy. The registry files are locked and
can't be read by rsync/Cygwin (nor are they served by SMB).
Similarly, the outlook.pst file used by Outlook (which contains all
the
I've been studying the read and write buffering in rsync and it turns
out most I/O is done just a couple of bytes at a time. This means there
are lots of system calls, and also most network traffic comprises lots
of small packets. The behavior is most extreme when sending/receiving
file deltas
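The effect can be demonstrated with a small counting experiment (a Python illustration of the general buffering principle, not rsync's I/O code):

```python
import io

class CountingSink(io.RawIOBase):
    """Raw byte sink that counts write() calls; each call models one syscall."""
    def __init__(self):
        super().__init__()
        self.calls = 0
    def writable(self):
        return True
    def write(self, b):
        self.calls += 1
        return len(b)

# 10,000 two-byte writes straight to the sink: 10,000 "system calls".
direct = CountingSink()
for _ in range(10_000):
    direct.write(b"ab")

# The same 20,000 bytes through a 4 KB user-space buffer: a handful of
# large writes instead.
sink = CountingSink()
buffered = io.BufferedWriter(sink, buffer_size=4096)
for _ in range(10_000):
    buffered.write(b"ab")
buffered.flush()

assert direct.calls == 10_000
assert sink.calls < 10
```

The same arithmetic applies on the network side: coalescing tiny writes turns many small packets into a few large ones.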
1) have rsync understand that file names might have changed, maybe by
comparing files through their md5 signature instead of by their name,
that way rsync would see that /backup/syslog.198.gz is the same as
/var/log/syslog.197.gz and not retransfer it,
The best choice is to rename the syslog
Can anybody help? What does tag 90 mean?
It looks like the sender and receiver are getting out of sync while
the file list is being sent. The data is sent in blocks. Each block
starts with an 8 bit tag and a 24 bit length. The valid values of
the tag are 7,8,9,10. Any other value (eg: 90)
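Under the framing described above (an 8-bit tag followed by a 24-bit length; the exact byte layout is assumed here for illustration), a reader might validate headers like this:

```python
VALID_TAGS = {7, 8, 9, 10}

def parse_block_header(header: bytes) -> tuple[int, int]:
    """Split a 4-byte block header into (tag, length): an 8-bit tag plus a
    24-bit length. Byte layout assumed for illustration, not rsync's code."""
    tag = header[0]
    if tag not in VALID_TAGS:
        # A tag like 90 means the sender and receiver have lost sync.
        raise ValueError(f"unexpected tag {tag}: streams out of sync")
    length = int.from_bytes(header[1:4], "little")
    return tag, length

assert parse_block_header(bytes([7, 32, 0, 0])) == (7, 32)
try:
    parse_block_header(bytes([90, 0, 0, 0]))
except ValueError:
    pass                                  # out-of-sync stream detected
else:
    raise AssertionError("tag 90 should have been rejected")
```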
You haven't really provided enough data to even guess what
is limiting your performance.
How similar is the directory tree on the target (receiving)
machine? There are three general possibilities:
- It's empty.
- It's present, and substantially similar to the sending end.
- It's
Sun box, 2 GB RAM, hard drive space to spare. Rsync 2.5.5, Solaris 5.7
version 7.
Half moon, I think it only seems to work on full moon nights.
Here's the command I run as well .
/usr/local/bin/rsync --delete --partial -P -p -z -e /usr/local/bin/ssh /dir1
systemname:/storage
[snip]
has anyone seen this error:
ns1: /acct/peter rsync ns1.pad.com::acct
overflow: flags=0xe8 l1=3 l2=20709376 lastname=.
ERROR: buffer overflow in receive_file_entry
rsync error: error allocating core memory buffers (code 22) at util.c(238)
ns1: /acct/peter
Either something is wrong with
I tried --block-size=4096 and -c --block-size=4096 on 2 files (2.35 GB
and 2.71 GB); rsync still had the same problem - it still needed to do a
second pass to successfully complete. These tests were between a Solaris
client and an AIX server (both running rsync 2.5.5).
Yes, for 2.35GB there is a 92% chance,
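The order of magnitude behind a figure like that can be sketched with a birthday-style approximation (illustrative only: the block count, signature width, and comparison model here are assumptions, not the exact calculation from this thread):

```python
import math

def false_match_probability(file_size: int, block_size: int, sig_bits: int) -> float:
    """Rough P(at least one false block match) ~ 1 - exp(-pairs / 2**sig_bits),
    treating each block-vs-block signature comparison as independent."""
    n_blocks = file_size // block_size
    pairs = n_blocks * n_blocks
    return 1.0 - math.exp(-pairs / 2.0 ** sig_bits)

# For a fixed truncated signature width, bigger files push the chance of a
# false first-pass match -- and hence a forced second pass -- upward:
small = false_match_probability(100 * 2**20, 4096, 48)    # ~100 MB file
large = false_match_probability(2_350 * 2**20, 4096, 48)  # ~2.35 GB file
assert 0.0 < small < large < 1.0
```

This is why enlarging the first-pass checksum length (as in the csum_length patch discussed above) helps: each extra byte of signature divides the false-match rate by 256.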
Would you mind trying the following? Build a new rsync (on both
sides, of course) with the initial csum_length set to, say 4,
instead of 2? You will need to change it in two places in
checksum.c; an untested patch is below. Note that this test
version is not compatible with standard
terry I'm having a problem with large files being rsync'd twice
terry because of the checksum failing.
terry Is there a different checksum mechanism used on the second
terry pass (e.g., different length)? If so, perhaps there is an
terry issue with large files for what is used by default for
This is the first detailed description of the problem I've seen. I've heard
it mentioned several times before, and thought that the md4 code in librsync
was the same as in rsync. I've looked and tweaked the md4 code in librsync
and could never see the bug so I thought it was a myth. I also