Hi all,

I've bisected a performance regression (noticed by Quentin and me)
which caused a 'git fetch' to go from ~1m30s to ~2m40s:

commit 47bf4b0fc52f3ad5823185a85f5f82325787c84b
Author: Jeff King <p...@peff.net>
Date:   Mon Jun 30 13:04:03 2014 -0400

    prepare_packed_git_one: refactor duplicate-pack check

Reverting this commit from a recent mainline master brings the time back
down from ~2m24s to ~1m19s.

The bisect log:

v2.8.1 -- 2m41s, 2m50s (bad)
v1.9.0 -- 1m39s, 1m46s (good) -- 2m40s -- 2m42s -- 1m27s -- 1m34s -- 2m39s -- 1m30s -- 2m29s
2.0.0.rc1.32.g5165dd5 -- 1m30s -- 1m32s -- 1m28s -- 2m25s -- 2m18s -- 1m36.542s

However, the commit found by 'git bisect' above appears just fine to
me; I haven't been able to spot a bug in it.

A closer inspection reveals that the real problem is an extremely hot
path: more than -- holy cow -- 4,106,756,451 iterations over the
'packed_git' list for a single 'git fetch' on my repository. I'm
guessing the patch above just made the inner loop ever so slightly
slower.

My .git/objects/pack/ has ~2088 files (1042 idx files, 1042 pack files,
and 4 tmp_pack_* files).

I am convinced that it is not necessary to rescan the entire pack
directory 11,348 times or do all 4 _BILLION_ memcmp() calls for a single
'git fetch', even for a large repository like mine.

I could try to write a patch to reduce the number of times we rescan
the pack directory. However, I've never even looked at this file
before today, so any hints regarding what would need to be done would
be appreciated.


(CCed some people with changes in the area.)

