Jeff King <p...@peff.net> writes:

> I don't think there are. Most of Git's locks are predicated purely on
> the existence of the lockfile (with the intent that they'd work over
> systems like NFS). The gc lock is a weird one-off.
>
> And while it's not great for multiple gc's to run at the same time
> (because it wastes CPU), two of them running at the same time shouldn't
> cause a corruption. If you have a reproducible demonstration where that
> happens, I'd be very interested to see it.

Good point.

And come to think of it, the gc "lock" does not have to be a lock to
begin with.  It is not "I am forbidding all of you from running gc,
because that would break the result of the gc _I_ am running right
now", which is what we traditionally call a "lock".  It is merely an
advisory: "We need to do this every once in a while and I am doing
it now.  I am letting others know, so they do not have to start the
same thing right now."

And the code (i.e. lock_repo_for_gc()) allows the current process to
run when

 - "--force" option is given, or
 - the lockfile cannot be open()ed, or
 - the lockfile cannot be fstat()ed, or
 - the lockfile is older than 12 hours, or
 - the lockfile has malformed contents, or
 - the lockfile was taken on a host with the same name as ours,
   and no process with the recorded pid is running.

Following the """12 hour limit is very generous as gc should never
take that long. On the other hand we don't really need a strict
limit here, running gc --auto one day late is not a big
problem. --force can be used in manual gc after the user verifies
that no gc is running.""" reasoning, I suspect that it shouldn't be
too bad even if we dropped the last condition (i.e. "is the process
still running?")  from the set of these conditions.
