On Wed, Feb 19, 2014 at 3:38 PM, Philippe Vaucher
<philippe.vauc...@gmail.com> wrote:
>> fwiw this is the thread that added --depth=250
>>
>> http://thread.gmane.org/gmane.comp.gcc.devel/94565/focus=94626
>
> This post is quite interesting:
> http://article.gmane.org/gmane.comp.gcc.devel/94637

Especially this part:

-- 8< --
And quite frankly, a delta depth
of 250 is likely going to cause overflows in the delta cache (which is
only 256 entries in size *and* it's a hash, so it's going to start having
hash conflicts long before hitting the 250 depth limit).
-- 8< --

So in order to get file A's content, we go through its 250-level delta
chain (and fill the cache). Then we get to file B and do the same,
which evicts nearly everything from A. By the time we move on to the
next commit, we have to walk all 250 levels for A again because the
cache is pretty much useless.
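To make the thrashing concrete, here is a toy model (not git's actual
sha1_file.c code): a 256-slot LRU cache shared by two 250-deep delta
chains, walked alternately as we move from commit to commit. All sizes
and names here are illustrative.

```python
# Toy model of delta-cache thrashing: two files (A and B), each with a
# 250-level delta chain, sharing a 256-entry LRU cache. This is an
# illustration of the eviction pattern, not git's implementation.
from collections import OrderedDict

CACHE_SIZE = 256   # stands in for MAX_DELTA_CACHE
DEPTH = 250        # delta chain depth (--depth=250)

class LRUCache:
    def __init__(self, size):
        self.size = size
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)   # mark as most recently used
            return True
        return False

    def put(self, key):
        self.data[key] = True
        if len(self.data) > self.size:
            self.data.popitem(last=False)  # evict least recently used

def resolve(cache, file_id):
    """Walk one file's delta chain, counting cache misses."""
    misses = 0
    for level in range(DEPTH):
        key = (file_id, level)
        if not cache.get(key):
            misses += 1
            cache.put(key)
    return misses

cache = LRUCache(CACHE_SIZE)
total_misses = 0
for commit in range(4):            # walk a few commits
    for file_id in ("A", "B"):
        total_misses += resolve(cache, file_id)

# Resolving B evicts nearly all of A's entries, and the walk's own
# insertions evict the few survivors before we reach them again, so
# every single lookup misses.
print(total_misses)  # prints 2000 -- all 4 * 2 * 250 lookups missed
```

With 500 live entries and only 256 slots, the cache never helps across
commits, which is exactly the behavior described above.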

I can think of two improvements we could make. One is to increase the
cache size, either dynamically (within limits) or via a configuration
knob. If we have N entries in the worktree (both trees and blobs) and
depth M, then we might need to cache N*M objects for the cache to be
effective. Christian, if you want to experiment with this, update
MAX_DELTA_CACHE in sha1_file.c and rebuild.
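As a back-of-the-envelope illustration of that N*M estimate (the
numbers below are hypothetical, not measured from any real repository):

```python
# Hypothetical sizing per the N*M estimate above.
n_entries = 5000   # assumed trees + blobs in the worktree
depth = 250        # delta chain depth used when repacking
slots_needed = n_entries * depth
print(slots_needed)  # 1250000 slots, vs. the hard-coded 256
```

Even a modest worktree blows far past 256 entries, which is why simply
bumping MAX_DELTA_CACHE is worth an experiment.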

The other is smarter eviction: instead of throwing all of A's cached
items out (based purely on recency), keep the last few items of A and
evict B's oldest cached items first. Hopefully by the next commit we
can still reuse some cached entries for A and other files/trees. The
delta cache would need to learn about grouping to achieve this.
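A sketch of that grouped-eviction idea (again an illustration of the
policy, not a patch to git): track cached entries per file ("group")
and, when the cache is full, evict the oldest entry of some *other*
group, but always leave each victim group its few most recent items.

```python
# Grouped eviction sketch: per-group LRU with a per-group floor, so
# filling B's chain cannot wipe out all of A's entries. Names and the
# keep_per_group value are illustrative assumptions.
from collections import OrderedDict, defaultdict

class GroupedCache:
    def __init__(self, size, keep_per_group=8):
        self.size = size
        self.keep = keep_per_group
        self.groups = defaultdict(OrderedDict)  # group -> ordered keys
        self.count = 0

    def get(self, group, key):
        entries = self.groups[group]
        if key in entries:
            entries.move_to_end(key)
            return True
        return False

    def put(self, group, key):
        self.groups[group][key] = True
        self.count += 1
        if self.count > self.size:
            self._evict(current=group)

    def _evict(self, current):
        # Prefer evicting the oldest entry of another group, but leave
        # each victim group its `keep` most recent entries.
        for g, entries in self.groups.items():
            if g != current and len(entries) > self.keep:
                entries.popitem(last=False)
                self.count -= 1
                return
        # All other groups are at their floor: evict from the current one.
        self.groups[current].popitem(last=False)
        self.count -= 1

cache = GroupedCache(size=256)
for level in range(250):
    cache.put("A", level)   # fill A's 250-level chain
for level in range(250):
    cache.put("B", level)   # filling B no longer wipes A out entirely

print(len(cache.groups["A"]))  # prints 8 -- A's most recent entries survive
```

With plain LRU, A would be left with nothing after B's walk; here the
next commit can still reuse the tail of A's chain.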
-- 
Duy