Re: [PATCH/RFC 0/7] Multiple simultaneously locked ref updates

2013-08-29 Thread Martin Fick
On Thursday, August 29, 2013 08:11:48 am Brad King wrote:
 
 fatal: Unable to create 'lock': File exists.
 If no other git process is currently running, this
 probably means a git process crashed in this repository
 earlier. Make sure no other git process is running and
 remove the file manually to continue.

I don't believe git currently tries to do any form of stale
lock recovery, since it is racy and unreliable (both on a single
server and on a multi-server shared repo).


-Martin
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH/RFC 0/7] Multiple simultaneously locked ref updates

2013-08-29 Thread Brad King
On 08/29/2013 11:32 AM, Martin Fick wrote:
 On Thursday, August 29, 2013 08:11:48 am Brad King wrote:

 fatal: Unable to create 'lock': File exists.
 If no other git process is currently running, this
 probably means a git process crashed in this repository
 earlier. Make sure no other git process is running and
 remove the file manually to continue.
 
 I don't believe git currently tries to do any form of stale
 lock recovery, since it is racy and unreliable (both on a single
 server and on a multi-server shared repo).

Nor should it in this case.  I was saying that the front-end
needs to reject duplicate ref names from the stdin lines before
trying to lock the ref twice to avoid this message.  I'm asking
for a suggestion for existing data structure capabilities in
Git's source to efficiently detect the duplicate name.
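For illustration, one efficient way to detect such a duplicate is to
sort the ref names and compare adjacent entries, which is exactly what
a sorted string list buys you.  The sketch below is standalone C using
plain arrays, not git's actual data structures:

```c
#include <stdlib.h>
#include <string.h>

/* Comparator for qsort over an array of C strings. */
static int cmp_str(const void *a, const void *b)
{
	return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/*
 * Return a pointer to the first duplicated name in refs[0..n-1],
 * or NULL if all names are distinct.  Sorting makes duplicates
 * adjacent, so a single linear scan finds them: O(n log n) overall
 * instead of O(n^2) pairwise comparison.  Sorts the array in place.
 */
const char *find_duplicate_ref(const char **refs, size_t n)
{
	size_t i;

	if (n < 2)
		return NULL;
	qsort(refs, n, sizeof(*refs), cmp_str);
	for (i = 1; i < n; i++)
		if (!strcmp(refs[i - 1], refs[i]))
			return refs[i];
	return NULL;
}
```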

-Brad


Re: [PATCH/RFC 0/7] Multiple simultaneously locked ref updates

2013-08-29 Thread Junio C Hamano
Brad King brad.k...@kitware.com writes:

 Nor should it in this case.  I was saying that the front-end
 needs to reject duplicate ref names from the stdin lines before
 trying to lock the ref twice to avoid this message.

How about trying not to feed duplicates?


Re: [PATCH/RFC 0/7] Multiple simultaneously locked ref updates

2013-08-29 Thread Brad King
On 08/29/2013 12:21 PM, Junio C Hamano wrote:
 Brad King brad.k...@kitware.com writes:
 needs to reject duplicate ref names from the stdin lines before
 trying to lock the ref twice to avoid this message.
 
 How about trying not to feed duplicates?

Sure, perhaps it is simplest to push the responsibility onto the user
to avoid duplicates.  However, the error message will need to be
re-worded to distinguish this case from a stale lock or a competing
process, since both locks may come from the same update-ref process.

Without checking the input for duplicates ourselves we cannot
distinguish these cases to provide a more informative error message.
However, such a check would add runtime overhead even for valid input.
If we prefer to avoid input validation then here is proposed new
wording for the lock failure message:


fatal: Unable to create 'lock': File exists.

The lock file may exist because:
- another running git process already has the lock, or
- this process already has the lock because it was asked to
  update the same file multiple times simultaneously, or
- a stale lock is left from a git process that crashed earlier.
In the last case, make sure no other git process is running and
remove the file manually to continue.


IIUC the message cannot say anything about a 'ref' because it is
used for lock failures on other file types too.

Comments?
-Brad


Re: [PATCH/RFC 0/7] Multiple simultaneously locked ref updates

2013-08-29 Thread Brad King
On 08/29/2013 02:07 PM, Junio C Hamano wrote:
 I didn't mean to force this on the caller of the new update-ref
 --stdin; the new code you wrote for it is what feeds the input to
 the update_refs() function, and that is one place you can make sure
 you do not lock yourself out.
 
 Besides, if you get two updates to the same ref from --stdin, you
 would need to verify these are identical (i.e. updating to the same
 new object name from the same old object name; otherwise the requests
 are conflicting and do not make sense), so the code to collect the
 requests from stdin needs to match potential duplicates anyway.
 
 One clean way to do so may be to put an update request (old and new
 sha1) in a structure, and use a string_list to hold list of refs,
 with the util field pointing at the update request data.
 
 - this process already has the lock because it was asked to
   update the same file multiple times simultaneously, or
 
 The second case is like the left hand not knowing what the right
 hand is doing, no?  It should not happen if we code it right.

Yes.  All of the above is what I originally intended when asking
the question in the cover letter.  Using string_list and its util
field may be useful.  However, see also my response at
$gmane/233260 about how it may fold into sorting.
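The approach Junio outlines can be sketched standalone roughly as
follows (plain C with a simple array instead of git's actual
string_list; the struct plays the role of the data that would hang off
the util pointer):

```c
#include <stdlib.h>
#include <string.h>

/*
 * One parsed "update-ref --stdin" request.  In git itself each ref
 * name would live in a string_list entry, with a struct like this
 * attached via the entry's util pointer; a plain array stands in
 * for that structure here.
 */
struct ref_update {
	char ref[256];
	char old_sha1[41];
	char new_sha1[41];
};

static int cmp_update(const void *a, const void *b)
{
	return strcmp(((const struct ref_update *)a)->ref,
		      ((const struct ref_update *)b)->ref);
}

/*
 * Sort the requests by ref name and examine adjacent pairs.
 * Identical duplicates (same ref, same old and new sha1) are
 * harmless repeats the caller may collapse before locking;
 * duplicates that disagree are conflicting requests and do not
 * make sense.  Returns 0 if the set is consistent, or -1 with
 * *conflict set to the ref name of the first conflicting pair.
 * Sorts the array in place.
 */
int check_updates(struct ref_update *u, size_t n, const char **conflict)
{
	size_t i;

	qsort(u, n, sizeof(*u), cmp_update);
	for (i = 1; i < n; i++) {
		if (strcmp(u[i - 1].ref, u[i].ref))
			continue;
		if (strcmp(u[i - 1].old_sha1, u[i].old_sha1) ||
		    strcmp(u[i - 1].new_sha1, u[i].new_sha1)) {
			*conflict = u[i].ref;
			return -1;
		}
	}
	return 0;
}
```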

Thanks for the reviews!
-Brad