On Tue, Jan 19, 2010 at 21:07, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
> I wrote:
>
>> Perhaps it is as simple, though, as using the client's time
>> instead of the CVS server's time -- that's one of the things I've
>> seen cause problems for this sort of thing using CVS before.
>
> I got a brief consult with a Ruby programmer here under the "if it's
> less than ten minutes you don't have to schedule it through a
> manager" rule. From what we can see, fromcvs scans for all entries
> *after* a "previous run" time, but it isn't setting an upper bound
> on time during the scan. I haven't found where it saves the time
> for the lower limit of the next run, but I rather suspect that it
> grabs the current time near the end of the scan. If this is an
> accurate assessment, to avoid a window for lost commits, we'd have
> to fix a time before we started the scan to use as the upper bound
> for CVS commits to handle, and use it for the "previous run" time.
>
> There's still the possible issue of *whose* clock we're using for
> this.
>
> Reality check: does the frequency of lost CVS commits within git
> seem consistent with this theory?
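(For what it's worth, a rough sketch of the bounded-window scan Kevin is
describing -- in Ruby, since fromcvs is Ruby. The repo/state helpers and
names below are made up for illustration, not the actual fromcvs API:)

    # Sketch of an incremental scan with an explicit upper bound.
    # `repo.commits_between`, `state` and `convert_to_git` are
    # hypothetical stand-ins, not the real fromcvs code.
    def incremental_sync(repo, state)
      lower = state.last_run_time   # upper bound of the previous run
      upper = Time.now              # fixed *before* the scan starts
                                    # (whose clock this is is the open question)

      # Only handle commits inside the window; anything committed
      # after `upper` is left for the next run to pick up.
      repo.commits_between(lower, upper).each do |commit|
        convert_to_git(commit)
      end

      # Persist the upper bound as the next run's lower bound, so no
      # commit can fall into the gap between "end of scan" and "now".
      state.last_run_time = upper
      state.save
    end

The point being that the upper bound is captured before scanning and then
becomes the next run's lower bound, instead of grabbing the current time
after the scan finishes.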
Well, supposedly all our servers are synced with NTP. I know the main cvs
server is, and the git server is, but it goes past the anoncvs server, which
is a hub.org server, so I don't know for sure there - but I think it is. So I
don't think it's a machines-out-of-sync issue. Or at least the window for
that is *really* small.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/