Ronnie Sahlberg <> writes:

> Currently any locking of refs in a transaction only happens during the commit
> phase. I think it would be useful to have a mechanism where you could
> optionally take out locks for the involved refs early during the transaction.
> That way, simple callers could continue using
> ref_transaction_begin()
> ref_transaction_create|update|delete()*
> ref_transaction_commit()
> but a caller such as walker_fetch() could opt to do
> ref_transaction_begin()
> ref_transaction_lock_ref()*
> stuff...
> ref_transaction_create|update|delete()*
> ref_transaction_commit()
> In this second case ref_transaction_commit() would only take out any
> locks that are missing during the "lock the refs" loop.
> Suggestion 1: Add a ref_transaction_lock_ref() to allow locking a ref
> early during a transaction.


I am not sure if that is the right way to go, or if we should instead
change all of create/update/delete to take locks themselves without
adding a new primitive.
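
Something along these lines is what I have in mind (the prototypes
below are made up purely to illustrate the idea, not copied from
refs.c; have_old is assumed to be the usual "old value must match"
flag):

struct ref_transaction;

struct ref_transaction *ref_transaction_begin(void);

/*
 * Lock the ref and record the requested update immediately; a
 * non-zero return tells the caller right away that the transaction
 * cannot possibly succeed.
 */
int ref_transaction_update(struct ref_transaction *transaction,
                           const char *refname,
                           const unsigned char *new_sha1,
                           const unsigned char *old_sha1,
                           int flags, int have_old);

int ref_transaction_commit(struct ref_transaction *transaction,
                           const char *msg);

A caller like walker_fetch() would then get its early failure from the
update call itself, without having to learn a separate
ref_transaction_lock_ref() primitive.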

> A second idea is to change the signatures for
> ref_transaction_create|update|delete() slightly and allow them to
> return errors early.  We can check for things like add_update()
> failing, check that the ref name looks sane, check some of the flags
> (e.g. if has_old==true then the old sha1 should not be NULL or 0{40}),
> etc.
> Additionally, for robustness, if any of these functions detects an
> error we can flag this in the transaction structure and take action
> during ref_transaction_commit().  I.e. if a ref_transaction_update()
> had a hard failure, do not allow ref_transaction_commit() to succeed.
> Suggestion 2: Change ref_transaction_create|delete|update() to return
> an int.  All callers of these functions should check the return value
> for errors.

I think that is a very sensible thing to do.

The details of determining "this cannot possibly succeed" may change
(for example, if we have them take locks at the point of
create/delete/update, a failure to lock may count as an early
failure as well).

Is there any reason why this should be conditional (i.e. you said
"allow them to", implying that the early failure is optional)?

> Suggestion 3: remove the qsort and check for duplicates in
> ref_transaction_commit()
> Since we are already taking out a lock for each ref we are updating
> during the transaction, any duplicate refs will fail the second
> attempt to lock the same ref, which will implicitly make sure that a
> transaction will not change the same ref twice.

I do not know if I care about the implementation detail of "do we
have a unique set of update requests?".  While I do not see a strong
need for one transaction to touch the same ref twice (e.g. create to
point at commit A and update it to point at commit B), I do not see
why we should forbid such a use in the future.
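
For reference, the check being discussed is essentially this kind of
thing (a sketch of the idea, not the actual refs.c code): sort the
queued refnames and reject adjacent duplicates before taking any
locks.

#include <stdlib.h>
#include <string.h>

static int refname_cmp(const void *a, const void *b)
{
        return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* sorts the array in place and reports whether any refname repeats */
static int has_duplicate_refname(const char **refnames, size_t nr)
{
        size_t i;

        qsort(refnames, nr, sizeof(*refnames), refname_cmp);
        for (i = 1; i < nr; i++)
                if (!strcmp(refnames[i - 1], refnames[i]))
                        return 1;
        return 0;
}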

Just some of my knee-jerk reactions.