On 12-10-15 10:09 AM, Ævar Arnfjörð Bjarmason wrote:
> On Mon, Oct 15, 2012 at 11:14 AM, Angelo Borsotti
> <angelo.borso...@gmail.com> wrote:
> FWIW we have a lot of lemmings pushing to the same ref all the time at
> $work, and while I've seen cases where:
> 1. Two clients try to push
> 2. They both get the initial lock
> 3. One of them fails to get the secondary lock (I think updating the ref)
> I've never seen cases where they clobber each other in #3 (and I would
> have known from "dude, where's my commit that I just pushed" reports).
> So while we could fix git to make sure there's no race condition such
> that two clients never get the #2 lock I haven't seen it cause actual
> data issues because of two clients getting the #3 lock.
> It might still happen in some cases, I recommend testing it with e.g.
> lots of pushes in parallel with GNU Parallel.
There was a previous discussion on this list of a race in concurrent updates
to the same ref, even when the updates are all identical.
In that thread, Peff outlined the lock procedure for refs:
1. get the lock
2. check and remember the sha1
3. release the lock
4. do some long-running work (like the actual push)
5. get the lock
6. check that the sha1 is the same as the remembered one
7. update the sha1
8. release the lock
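The eight steps above amount to an optimistic compare-and-swap: hold the lock
only briefly at each end, and let the long-running work happen unlocked. A
minimal sketch in Python (hypothetical names; git's actual implementation is
in C and uses lock files, not an in-process mutex):

```python
import threading

class Ref:
    """Toy ref store illustrating the lock/check/update pattern above."""

    def __init__(self, sha1):
        self._lock = threading.Lock()
        self._sha1 = sha1

    def read(self):
        # Steps 1-3: take the lock just long enough to remember the sha1.
        with self._lock:
            return self._sha1

    def compare_and_update(self, remembered, new_sha1):
        # Steps 5-8: retake the lock, verify the ref hasn't moved, update.
        with self._lock:
            if self._sha1 != remembered:
                return False  # step 6 fails: someone else won the race
            self._sha1 = new_sha1
            return True

ref = Ref("aaa111")
old = ref.read()                 # steps 1-3
# ... step 4: long-running work (the actual push) happens unlocked ...
assert ref.compare_and_update(old, "bbb222")      # step 6 passes, ref updated
assert not ref.compare_and_update(old, "ccc333")  # stale sha1: update rejected
```

The losing client is rejected cleanly rather than clobbering the winner's
update, which is the behaviour described below.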
Angelo, in your case I think one of your concurrent updates would fail in
step 6. As you say, this is after the changes have been uploaded. However,
there's none of the file-overwriting that you fear, because the changes are
stored in git's object database keyed by their SHA-1 hashes. Two pushes can
only write to the same object if they upload exactly the same content, in
which case it doesn't matter which write wins.
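That safety comes from content addressing: an object's name is the hash of
its content, so identical content maps to the same name and a "collision" is
a harmless no-op. A toy illustration (a plain dict standing in for git's
on-disk object store; git hashes a "blob <size>\0" header plus the contents):

```python
import hashlib

def store_object(db, data):
    """Store data keyed by its SHA-1, as in a content-addressed store.

    Identical content always hashes to the same key, so storing the
    same object twice is a no-op, never a destructive overwrite.
    """
    key = hashlib.sha1(data).hexdigest()
    db[key] = data
    return key

db = {}
payload = b"blob 5\x00hello"      # git-style header + content
k1 = store_object(db, payload)
k2 = store_object(db, payload)    # second "upload" of the same object
assert k1 == k2 and len(db) == 1  # same content, one object, nothing lost
```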