Jonathan Nieder <jrnie...@gmail.com> writes:

> Jeff King wrote:
>
>> When we look up a sha1 object for reading, we first check
>> packfiles, and then loose objects. If we still haven't found
>> it, we re-scan the list of packfiles in `objects/pack`. This
>> final step ensures that we can co-exist with a simultaneous
>> repack process which creates a new pack and then prunes the
>> old object.
>
> I like the context above and what follows it, but I think you forgot
> to mention what the patch actually does. :)
>
> I guess it is:
>
>       However, in the first scan over refs in fetch-pack.c::everything_local,
>       this double-check of packfiles is not necessary since we are only
>       trying to get a rough estimate of the last time we fetched from this
>       remote repository in order to find good candidate common commits ---
>       a missed object would only result in a slightly slower fetch.

It is not about a rough estimate or common commits, though.  The
"everything local" check in question is interested in only one
thing: are we _clearly_ up to date without fetching anything from
them?

Loosening the check may miss the rare case where we race against a
simultaneous repack, causing us to go to the network when we do not
have to.  So it becomes a trade-off: the common, unracy case gets
faster because the "are we clearly up to date?" check is allowed to
cheat, at the expense of the rare racy case paying for an
unnecessary object transfer.

>       Avoid that slow second scan in the common case by guarding the object
>       lookup with has_sha1_file().

This conclusion is correct.
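
To make it concrete, the guarded lookup in fetch-pack.c's
everything_local() would be shaped roughly like this (only a sketch
of the idea from memory, not the patch text itself):

	for (ref = *refs; ref; ref = ref->next) {
		struct object *o;

		/*
		 * has_sha1_file() checks the packs and then loose
		 * objects, but does not force a re-scan of objects/pack
		 * on a miss, so a racing repack can give us a false
		 * "no".  That only costs an unnecessary transfer, never
		 * correctness.
		 */
		if (!has_sha1_file(ref->old_sha1))
			continue;

		/*
		 * parse_object() goes through read_sha1_file(), which
		 * does re-scan the pack directory before giving up, so
		 * actually reading the object stays race-safe.
		 */
		o = parse_object(ref->old_sha1);
		if (!o)
			continue;
		/* ... use the commit date for the cutoff, etc. ... */
	}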

> I had not read this codepath before.  I'm left with a few questions:
>
>  * What is 49bb805e ("Do not ask for objects known to be complete",
>    2005-10-19) trying to do?  Are we hurting that in any way?

An earlier fetch may have acquired all the necessary objects but may
not have updated our refs for some reason (e.g. the fast-forward
check may have fired).  In such a case, our repository may already
contain a history that is good (i.e. not missing any objects down to
the common history) but is not yet connected to any of our refs, and
we can update our refs (or write to FETCH_HEAD) without asking the
remote end to do any common-ancestor computation or object transfer.

That was the primary thing the patch wanted to do.

As a side-effect, we know that more objects than just those at the
tips of our refs are complete, and that may help the later common
history discovery step, but obviously we do not want to dig the
history all the way down to the roots.  The cutoff value is merely a
heuristic chosen without any deep thought.
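
Roughly, that part of fetch-pack.c::everything_local() is shaped
like this (paraphrased from memory; exact helper names and types may
differ):

	unsigned long cutoff = 0;

	for (ref = *refs; ref; ref = ref->next) {
		struct object *o = parse_object(ref->old_sha1);
		if (!o)
			continue;
		/*
		 * We already have the object at the tip of this ref, so
		 * we must have been in sync with the other side at some
		 * point no earlier than this commit's timestamp.
		 */
		if (o->type == OBJ_COMMIT) {
			struct commit *commit = (struct commit *)o;
			if (!cutoff || cutoff < commit->date)
				cutoff = commit->date;
		}
	}

	/*
	 * Mark what we have as complete, and walk back from our refs
	 * only down to the cutoff instead of digging to the roots.
	 */
	if (cutoff)
		mark_recent_complete_commits(cutoff);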

>  * Is has_sha1_file() generally susceptible to the race against repack
>    you mentioned?  How is that normally dealt with?

By failing to find the object, so that the user will restart.  When
the caller really wants to use the object, the parse_object() =>
read_sha1_file() => read_object() path is used, and we will see the
retry there.
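
The retry lives at the bottom of that chain; read_object() in
sha1_file.c looks roughly like this (simplified from memory, with
the cached-object handling omitted):

	static void *read_object(const unsigned char *sha1,
				 enum object_type *type, unsigned long *size)
	{
		unsigned long mapsize;
		void *map, *buf;

		/* Packs first, then loose objects. */
		buf = read_packed_sha1(sha1, type, size);
		if (buf)
			return buf;
		map = map_sha1_file(sha1, &mapsize);
		if (map) {
			buf = unpack_sha1_file(map, mapsize, type, size, sha1);
			munmap(map, mapsize);
			return buf;
		}

		/*
		 * Not found?  A simultaneous repack may have migrated the
		 * object into a new pack we have not seen yet, so re-scan
		 * objects/pack and try the packs once more.
		 */
		reprepare_packed_git();
		return read_packed_sha1(sha1, type, size);
	}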

>  * Can a slow operation get confused if an object is incorporated into
>    a pack and then expelled again by two repacks in sequence?

If it checks "the object should be there" first, waits for a long
time, and then tries to find that object's data, the later access
will go through the parse_object() callpath and I think it should do
the right thing.  If that slow operation stops inside read_object(),
it could find itself unable to map the loose object file and then
unable to find the object in any pack, either.  Is that what you are
worried about?