On Thu, Oct 13, 2016 at 12:53:44PM -0400, Jeff King wrote:
> -- >8 --
> Subject: [PATCH] fetch: use "quick" has_sha1_file for tag following
A few comments on my own patch...
> This patch teaches fetch to use HAS_SHA1_QUICK to sacrifice
> accuracy for speed, in cases where we might be racy with a
> simultaneous repack. This is similar to the fix in 0eeb077
> (index-pack: avoid excessive re-reading of pack directory,
> 2015-06-09). As with that case, it's OK for has_sha1_file()
> to occasionally say "no I don't have it" when we do, because
> the worst case is not a corruption, but simply that we may
> fail to auto-follow a tag that points to it.
Failing in this direction doesn't make me feel great. I had hoped it
would fail the _other_ way, and request an object that we might already
have.
There are two alternatives that might be worth pursuing:
1. The thing that makes this really painful is the quadratic
duplicate-search in prepare_packed_git_one(). We could speed that
up pretty easily by keeping a hash of the packs indexed by their
names, and looking up in that.
That drops the quadratic factor, but it's still going to be
O(nr_tags * nr_packs), as we have to at the very least readdir()
all of objects/pack to see that nothing new showed up.
I wonder if we could trust the timestamp on the objects/pack
directory to avoid re-reading it at all. That turns it into a
single stat() call.
2. We could stop bothering with reprepare_packed_git() after the
nth time it is called. This basically turns on the "sacrifice
accuracy for speed" behavior automatically, but it means that most
cases would never do so, because it wouldn't be creating a
performance issue in the first place.
It feels weird and flaky that git might behave differently based on
the number of unfetched tags the remote happens to have, though.
> Here are results from the included perf script, which sets
> up a situation similar to the one described above:
> Test HEAD^ HEAD
> 5550.4: fetch 11.21(10.42+0.78) 0.08(0.04+0.02) -99.3%
The situation in this perf script is obviously designed to show off this
one specific optimization. It feels a bit overly specific, and I doubt
anybody will be all that interested in running it again once the fix is
in. OTOH, I did want to document my reproduction steps, and this seemed
like the only reasonable place to do so. And as the perf suite is
already pretty expensive, perhaps nobody minds adding to it too much.