At 10:32 30.06.2015, Dirk van Deun wrote:
>I used to rsync a /home with thousands of home directories every
>night, although only a hundred or so would be used on a typical day,
>and many of them have not been used for ages.  This became too large a
>burden on the poor old destination server, so I switched to a script
>that uses "find -ctime -7" on the source to select recently used homes
>first, and then rsyncs only those.  (A week being a more than good
>enough safety margin in case something goes wrong occasionally.)

Doing it this way you can't propagate deletions: files that have disappeared or been renamed on the source will linger on the destination forever.

>Is there a smarter way to do this, using rsync only ?  I would like to
>use rsync with a cut-off time, saying "if a file is older than this,
>don't even bother checking it on the destination server (and the same
>for directories -- but without ending a recursive traversal)".  Now
>I am traversing some directories twice on the source server to lighten
>the burden on the destination server (first find, then rsync).

I would split up the tree into several subtrees and sync them
normally, like /home/a* etc. You can then distribute the calls
over several days. If that is still too much, then maybe keep the
find call, but sync each matching user's whole home instead of
just the found files.

bye  Fabi
