On Fri, Jun 01, 2001 at 12:02:07PM -0700, Adam McKenna wrote:
> On Fri, Jun 01, 2001 at 08:44:18AM -0500, Dave Dykstra wrote:
> > There is really no way around that problem with rsync. Many other people
> > have tried to do similar things and the wisdom on the mailing list has
> > always been that rsync via cron is the wrong tool for applications that
> > have data that changes frequently and needs to be replicated very soon
> > after it changes. You need something more like a distributed database or a
> > replicating filesystem, or at the very least call rsync in a synchronous
> > manner right after a file has been changed and before the file can change
> > again.
>
> Actually, that's not our situation. These files will not be changing -- they
> just need to get from the FTP machine over to our application server after
> they've been uploaded.
Oh, then that's easier.
> Has anyone tried running rsync under djb's svscan?
I don't know what that is.
> I'm envisioning a run
> script that looks something like this:
>
> #!/bin/sh
> PATH=...
> find path -type f -mmin +0 -print > file-list
> rsync --exclude=/* --include-from=file-list ...
> sleep 60
You should be able to do something like that.  Put the --exclude after the
includes, though; rsync acts on the first pattern that matches.  You could
also skip running rsync entirely if file-list is empty.  Also keep in mind
that if you use --exclude '*' you need to explicitly include all parent
directories (the not-yet-implemented --files-from option would avoid that
problem).
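Something like the following sketch, for instance -- the SRC and DEST
names are hypothetical placeholders, not paths from your setup, and it
assumes a supervisor that restarts the script each time it exits:

```shell
#!/bin/sh
# Hypothetical run script for a supervisor that restarts it on exit.
# SRC and DEST are placeholder names, not real paths.
PATH=/usr/local/bin:/usr/bin:/bin; export PATH
SRC=/ftp/incoming
DEST=appserver::uploads

cd "$SRC" 2>/dev/null || exit 0  # nothing to do if the upload dir is absent

# List the uploaded files relative to SRC, with a leading / so each
# entry anchors at the top of the transfer.
find . -type f -print | sed 's|^\.||' > /tmp/file-list

# Skip the transfer entirely when no files are waiting.
if [ -s /tmp/file-list ]; then
    # Order matters: rsync acts on the first matching pattern, so the
    # includes must come before the catch-all exclude.  --include='*/'
    # keeps parent directories visible, which --exclude='*' would
    # otherwise hide, and --timeout keeps a hung transfer from wedging.
    rsync -a --timeout=120 \
          --include-from=/tmp/file-list --include='*/' --exclude='*' \
          "$SRC/" "$DEST"
fi

sleep 60  # pause before the supervisor restarts us
```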
> The only problem in this scenario is if the rsync process somehow hangs, then
> it would never be restarted and would require manual intervention. Anyone
> have suggestions for this?
Use --timeout 120 or something like that to keep it from hanging.
- Dave Dykstra