David Kastrup <d...@gnu.org> writes:
> Duy Nguyen <pclo...@gmail.com> writes:
>> I can think of two improvements we could make: either increase the cache
>> size dynamically (within limits) or make it configurable. If we have N
>> entries in the worktree (both trees and blobs) and depth M, then we might
>> need to cache N*M objects for it to be effective. Christian, if you
>> want to experiment with this, update MAX_DELTA_CACHE in sha1_file.c and
> Well, my optimized "git-blame" code takes a considerable hit on an
> aggressively packed Emacs repository, so I took a look at it with the
> MAX_DELTA_CACHE value set to the default 256, and then 512, 1024, 2048.
> Trying with 16384:
> dak@lola:/usr/local/tmp/emacs$ time ../git/git blame src/xdisp.c >/dev/null
> real 2m8.000s
> user 0m54.968s
> sys 1m12.624s
> And memory consumption did not exceed about 200m all the while, far
> lower than what would have been available.
Of course, this has to do with delta_base_cache_limit defaulting to 16m.
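For reference, the same experiment can be run without recompiling: core.deltaBaseCacheLimit is the configuration knob behind that in-code default (the 512m value below is only an illustrative choice; the throwaway repo exists just so the snippet runs standalone):

```shell
# Raise the delta base cache limit for a repository under test.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config core.deltaBaseCacheLimit 512m
git config core.deltaBaseCacheLimit   # prints the configured value
```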
> Something's _really_ fishy about that cache behavior. Note that the
> _system_ time goes up considerably, not just user time. Since the
> packs are zlib-packed, it's reasonable that more I/O time is also
> associated with more user time and it is well possible that the user
> time increase is entirely explainable by the larger amount of
> compressed data to access.
> But this stinks.
And an obvious contender for the stink is that the "LRU" scheme used
here frees memory strictly based on which cache entry was _created_
the longest time ago, not which entry was _accessed_ the longest time
ago. That amounts to a pure round-robin (FIFO) strategy for freeing
memory rather than LRU.
Let's see what happens when changing this.