On Thu, Jun 20, 2013 at 11:09:49AM +0200, Michael Haggerty wrote:
> If the packed-refs file is already locked by another process (and there
> is no reason that cannot happen, since only one attempt is made to
> acquire the lock), then repack_without_ref() emits an error and returns
> with an error code. delete_ref() passes the error along, but doesn't
> restore the loose ref.
> I think this problem would also be fixed by the locking scheme that I
> proposed earlier: to acquire and hold the packed-refs lock across
> the modification of *both* files, and to rewrite the packed-refs file
> *before* deleting the loose-refs file (because rewriting the packed-refs
> file without the to-be-deleted reference is a logical NOP).
Yeah, I agree. You could also "roll back" the loose deletion, but I'd
rather just try to do it atomically.
I don't think this increases lock contention, since delete_ref would
need to lock the packed-refs file anyway. However, there is the related
change that we should probably lock the packed-refs file before checking
"is this ref in the packed-refs file?" in repack_without_ref. Which does
increase lock contention, but is more correct.
We should also consider deadlock issues. If the order is always "acquire
packed-refs lock, then acquire loose locks", we are fine. If this does
loose-then-packed, we are also fine with the current code, as
git-pack-refs does not prune the loose refs while under the packed-refs
lock. But I seem to recall discussion of pruning them under the
packed-refs lock, which would deadlock if repack_without_ref does
loose-then-packed.
But I guess we do not actually block on locks, but rather just die (and
release our locks), so deadlock is not actually possible for us.