How to skip deletion of files which are newer on the receiver?

2010-06-29 Thread Mike Reiche
Hi.

I am trying to create a duplex (two-way) sync between client and server via two rsync calls.

First step: Send new or updated files.

# rsync -a -c -r -z -u --progress --delete --files-from=rsync-list.txt $SOURCE \
    --filter='. rsync-filter.txt' $TARGET/


Second step: Receive new or updated files.

# rsync -p -g -c -r -z -u --progress $TARGET/* $SOURCE


That works fine in most circumstances. But the first command deletes all the 
files from the receiver, even if they are newer there.
Changing the content (to produce a different checksum) or the mtime of the 
receiver's file doesn't help. How can I prevent that behavior?
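
(For context: -u/--update only decides whether an existing destination file 
gets overwritten during the transfer; it does not affect the --delete pass, 
which removes anything that is missing from the sender's file list. One 
possible workaround, shown here only as a sketch with a made-up pattern, is 
a protect rule merged in via rsync-filter.txt:

# cat rsync-filter.txt
P /receiver-only-dir/***

Files matched by such a protect rule are exempt from deletion on the 
receiving side.)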


Regards,
Mike Reiche | Software Development

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: How to skip deletion of files which are newer on the receiver?

2010-06-29 Thread Matt McCutchen
On Tue, 2010-06-29 at 11:04 +0200, Mike Reiche wrote: 
> I am trying to create a duplex (two-way) sync between client and server via two rsync calls.

Don't do that.  Use unison (http://www.cis.upenn.edu/~bcpierce/unison/)
instead.
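
For example (a sketch; the roots and server name are made up), a single 
unison run reconciles both directions and treats a file that was updated on 
both sides as a conflict instead of overwriting or deleting it:

# unison /data ssh://backupserver//data -batch -auto

-batch asks no questions and -auto accepts unison's default actions for 
non-conflicting changes.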

-- 
Matt




...failed: too many links (31)

2010-06-29 Thread Andrew Gideon

We do backups using rsync --link-dest.  On one of our volumes, we just 
hit a limit in ext3 which generated the error:

rsync: link ... = ... failed: Too many links (31)

This appears to be related to a limit on the number of directory entries 
that may refer to a single inode.  In other words, it's a limit on the 
number of hard links that can exist to a given file.  On ext3 this limit is 
apparently 32000.
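
(A quick way to see how close a backup tree already is, sketched here with a 
hypothetical mount point, is to ask GNU find for files whose link count is 
near that ceiling:

# find /backups -xdev -type f -links +31000 -printf '%n %p\n' | sort -rn | head

where %n prints the current hard-link count of each match.)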

This isn't specifically an rsync problem, of course.  I can recreate it 
with judicious use of cp -Rl, for example.  But any site using --link-dest 
as heavily as we are - and ext3 - is vulnerable to this.  So I 
thought I'd share our experience.
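
(A reproduction outside of rsync, on a scratch ext3 mount with made-up 
paths, is just a matter of hard-linking one inode until the file system 
refuses:

# touch /mnt/scratch/seed
# i=0; while ln /mnt/scratch/seed /mnt/scratch/link-$i; do i=$((i+1)); done

The loop stops with "Too many links" once the inode reaches the per-inode 
limit.)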

This is admittedly an extreme case: We've a lot of snapshots preserved 
for this volume.  And the files failing are under /usr/lib/locale; there 
is a lot of hardlinking already occurring in there.

I've thought of two solutions: (1) deliberately breaking the linking (and 
therefore wasting disk space) or (2) using a different file system.
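
(Option (1) can at least be done selectively: rewriting the over-linked 
files in the newest snapshot gives them a fresh inode, at the cost of the 
duplicated space.  A sketch, with a hypothetical snapshot path:

# cd /backups/latest/usr/lib/locale
# for f in *; do [ -f "$f" ] || continue; cp -p -- "$f" "$f.tmp" && mv -- "$f.tmp" "$f"; done

Each mv replaces the old directory entry with the freshly copied inode, so 
the original inode drops one link.)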

This is running on CentOS 5, so xfs was there to be tried.  I've had 
positive experiences with xfs in the past, and from what I have read this 
limit does not exist in that file system.  I've tried it out, and - so 
far - the problem has been avoided.  There are inodes with up to 32868 
links at the moment on the xfs copy of this volume.
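
(For the curious, the highest link count on a volume can be found with 
something like the following; the mount point is made up:

# find /backups-xfs -xdev -type f -printf '%n\n' | sort -rn | head -1

which prints the largest hard-link count among regular files on that 
file system.)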

I'm curious, though, what thoughts others might have.

I did wonder, for example, whether rsync should, when faced with this 
error, fall back on creating a copy.  But should rsync include behavior 
that exists only to work around a file system limit?  Perhaps only as a 
command line option (i.e. definitely not the default behavior)?

Thanks...

- Andrew
