Jeff King wrote:
> When we look up a sha1 object for reading, we first check
> packfiles, and then loose objects. If we still haven't found
> it, we re-scan the list of packfiles in `objects/pack`. This
> final step ensures that we can co-exist with a simultaneous
> repack process which creates a new pack and then prunes the
> old object.
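(Spelling out that lookup order for my own benefit, here is a rough
sketch. The helper names are invented for illustration and their
definitions are omitted; the real logic lives in sha1_file.c.)

    #include <stddef.h>

    struct object_data { void *buf; size_t size; };

    int find_in_packs(const unsigned char *sha1, struct object_data *out);
    int find_loose(const unsigned char *sha1, struct object_data *out);
    void rescan_pack_dir(void);     /* re-read objects/pack from disk */

    static int read_object_sketch(const unsigned char *sha1,
                                  struct object_data *out)
    {
            if (find_in_packs(sha1, out))
                    return 0;
            if (find_loose(sha1, out))
                    return 0;
            /*
             * Still not found: a simultaneous repack may have put the
             * object into a new pack and pruned the old copy, so
             * re-scan objects/pack and give the packs one more try.
             */
            rescan_pack_dir();
            return find_in_packs(sha1, out) ? 0 : -1;
    }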
I like the context above and what follows it, but I think you forgot
to mention what the patch actually does. :)
I guess it is:
However, in the first scan over refs in fetch-pack.c::everything_local,
this double-check of packfiles is not necessary since we are only
trying to get a rough estimate of the last time we fetched from this
remote repository in order to find good candidate common commits ---
a missed object would only result in a slightly slower fetch.
Avoid that slow second scan in the common case by guarding the object
lookup with has_sha1_file().
Sounds like it would not affect most fetches except by making them
a lot faster in the many-refs case, so for what it's worth I like it.
I had not read this codepath before. I'm left with a few questions:
* What is 49bb805e ("Do not ask for objects known to be complete",
  2005-10-19) trying to do? Are we hurting that in any way?

  For the sake of an example, suppose in my stalled project I
  maintain 20 topic branches against an unmoving mainline that I do
  not advertise, and you regularly fetch from me. The cutoff is the
  *newest* commit date of any of my topic branches you already have.
  By declaring that you have that topic branch, you avoid a
  complicated negotiation to discover that we both have the mainline.
  Is that right? (My reading of the cutoff computation is sketched
  after these questions.)
* Is has_sha1_file() generally susceptible to the race against repack
  you mentioned? How is that normally dealt with?
* Can a slow operation get confused if an object is incorporated into
  a pack and then expelled again by two repacks in sequence?
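To make the first question concrete, here is my reading of the cutoff
computation, again with invented names and fields; the real loop is in
fetch-pack.c::everything_local().

    /* Invented representation of one ref the remote advertises. */
    struct adv_tip {
            unsigned long commit_date;   /* committer date of the advertised tip */
            int we_have_it;              /* do we already have that object? */
            struct adv_tip *next;
    };

    static unsigned long compute_cutoff(struct adv_tip *advertised)
    {
            unsigned long cutoff = 0;
            struct adv_tip *t;

            for (t = advertised; t; t = t->next) {
                    if (!t->we_have_it)
                            continue;
                    /*
                     * The cutoff ends up as the newest commit date among
                     * the advertised tips we already have, e.g. the newest
                     * of the 20 topic branches in the scenario above.
                     */
                    if (t->commit_date > cutoff)
                            cutoff = t->commit_date;
            }
            return cutoff;
    }

If that matches the intent of 49bb805e, then a miss from
has_sha1_file() would at worst clear we_have_it for one ref and make
the cutoff somewhat older, which fits the "slightly slower fetch"
claim in the proposed message above.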