[EMAIL PROTECTED] wrote on Wed, Dec 12, 2001 at 08:53:56AM +0100:
[...]
*
* 'Twas long ago that I did swap-on-a-file as you showed above. Anyway, I
* succeeded both in upgrading RAM and in adding an additional swapfile,
* and it was confirmed that memory was indeed the problem: the
I'm interested in the *best* way to set particular permissions on the
machine I am rsync'ing content to.
These are the specifics of our setup (a daemon-side sketch follows the list):
1. client pushes to remote
2. remote machine = AIX running rsync daemon as chroot
3. client machine = NT running Cygwin
4. directory
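One way to pin the permissions down is on the daemon side; a minimal rsyncd.conf sketch, where the module name, path, uid, and gid are illustrative assumptions:

    # /etc/rsyncd.conf on the AIX box
    [content]
        path = /export/content
        use chroot = yes
        read only = no
        uid = webuser     # received files end up owned by this user
        gid = webgroup    # and this group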
I have to post a correction to this:
* Why would symlinks eat up disk space?
Correction: of course they don't. I have to use hard links because every
a symbolic link does, in fact, consume disk space. Unlike a hard link
(which is just another pointer to the file's data), a symbolic link is a
On 12 Dec 2001, [EMAIL PROTECTED] wrote:
An additional hard link to an existing file takes only directory
space, which, if it's not enough of an addition to that directory's
existing data to cause the filesystem driver to add another
allocation to the directory's data space, takes up no more
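A quick way to see the difference on any Unix box; the file names are throwaway examples:

    echo data > mbox
    ln mbox mbox.hard      # hard link: a second directory entry for the same inode
    ln -s mbox mbox.soft   # symlink: a new inode whose contents are the target path
    ls -li mbox mbox.hard mbox.soft   # the hard link shares the inode number; the symlink doesn't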
[EMAIL PROTECTED] [[EMAIL PROTECTED]] writes:
It seems to me that this situation is common enough that the rsync
protocol should look for it as a special case. Once the protocol has
determined from differing timestamps and/or lengths that a file needs
to be synchronized, the receiver should
While potentially a useful option, you wouldn't want the protocol to
always check for it automatically, since it would preclude rsync on
This extension need not break any existing mechanism; if the hash of
the receiver's copy of the file doesn't match the start of the
sender's file, the protocol
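The proposed check can be approximated by hand; a sketch assuming GNU stat and head, with hypothetical paths, that tests whether the receiver's copy is an exact prefix of the sender's file:

    size=$(stat -c %s /localdir/mbox)   # length of the receiver's current copy
    local_sum=$(md5sum < /localdir/mbox | cut -d' ' -f1)
    remote_sum=$(ssh remote "head -c $size /path/mbox | md5sum" | cut -d' ' -f1)
    # if the hashes match, only the appended tail would need to move
    [ "$local_sum" = "$remote_sum" ] && echo "prefix intact: transfer the tail only"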
I just confirmed that data corruption can occasionally occur with
rsync 2.5.0 when the -z option is used. My command was the following:
rsync -vaz --partial --block-size=65536 --checksum remote:/path/ /localdir
The files consisted of a year's worth of email (262MB), broken into
one file for
Does rsync support a user-defined compression level option (similar to
gzip)? I have read through the mailing list archive and could not find any
specific thread which addresses this question.
I would like to specify a medium-level compression option (use more network
speed and less CPU
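For comparison, this is the gzip behaviour the question points at; the file name is illustrative, and each line is an alternative invocation, not a sequence:

    gzip -1 big.log   # fastest, least compression
    gzip -6 big.log   # the default: a medium speed/size trade-off
    gzip -9 big.log   # slowest, best compression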
Well, I'll be damned. I'd never run into that trick. My apologies.
Tim Conway
[EMAIL PROTECTED]
303.682.4917
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, n9hmg on AIM
On 12 Dec 2001, [EMAIL PROTECTED] wrote:
While potentially a useful option, you wouldn't want the protocol to
always check for it automatically, since it would preclude rsync on
This extension need not break any existing mechanism; if the hash of
the receiver's copy of the file doesn't
On 12 Dec 2001, [EMAIL PROTECTED] wrote:
I just confirmed that data corruption can occasionally occur with
rsync 2.5.0 when the -z option is used.
Please keep the two directories that caused the problems, if they have
not already been overwritten.
Are you sure you're running 2.5.0 at both
Please keep the two directories that caused the problems, if they have
not already been overwritten.
They've been overwritten, but the problem is easy to recreate.
I did diff the correct and incorrect versions of the file. A whole
bunch of instances of the word "for" were turned into "foF". Weird.
[EMAIL PROTECTED] [[EMAIL PROTECTED]] writes:
While potentially a useful option, you wouldn't want the protocol to
always check for it automatically, since it would preclude rsync on
This extension need not break any existing mechanism; if the hash of
the receiver's copy of the file doesn't
[EMAIL PROTECTED] [[EMAIL PROTECTED]] writes:
After I sent my note, I ran some more experiments and found the
problem goes away if I use the default checksum blocksize. So the
problem occurs *only* if I use a large blocksize (65536) *and* enable
compression.
Should have read ahead - this is
I just ran into the same corruption problem with 2.5.1pre3. Again, it
only happens when I use large checksum block sizes (65536) *and*
request compression (-z).
Because the corrupted file has a correct size and timestamp, I have to
re-run rsync with the --checksum option (and with either a
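Since the quick size-and-timestamp check passes, only a content comparison catches the corruption; a sketch of the verification re-run, with the source and destination paths taken from the earlier report:

    rsync -vaz --checksum remote:/path/ /localdir   # force full-file checksums despite matching size/mtime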
On 12 Dec 2001, [EMAIL PROTECTED] wrote:
I just ran into the same corruption problem with 2.5.1pre3. Again, it
only happens when I use large checksum block sizes (65536) *and*
request compression (-z).
My apologies, this fix went in after 2.5.1pre3. Would you please try
either using CVS
My apologies, this fix went in after 2.5.1pre3. Would you please try
either using CVS HEAD, or
./configure --disable-debug
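For completeness, a sketch of the suggested rebuild, assuming a source tree is already in place (the CVS checkout step is left out since it isn't shown here):

    ./configure --disable-debug   # the suggested configuration
    make
    make install                  # repeat on both the sending and receiving machine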
Just tried that (yes, I rebuilt both ends). Still broken. Here's the
diff between a corrupted and correct version of my rsync mailing list
mbox:
bash-2.03$ diff