Junio C Hamano <gits...@pobox.com> writes:
> I know that the 512MiB default for the bigFileThreshold (aka
> "forget about delta compression") came out of thin air. It was just
> "1GB is always too huge for anybody, so let's cut it in half and
> declare that value the initial version of a sane threshold",
> nothing more.
> So it might be that the problem is 512MiB is still too big, relative
> to the 16MiB of delta base cache, and the former may be what needs
> to be tweaked.
Well, the point of the 512MiB limit is basically that you can always
argue "if you are working with 500MiB files, you will not be getting by
with just 128MiB of main memory anyway". The 16MiB limit, in contrast,
gets exercised by basically every history, even one of small files; that
is what makes it the more sensitive limit to change.
> If a blob close to but below 512MiB is a problem for a 16MiB delta base
> cache, it would still be big enough to cause the same problem for a
> 128MiB delta base cache---it would evict all the other objects and then
> end up not fitting within the limit itself, busting the limit
> immediately, no?
I think, but may be mistaken, that the delta base cache limit decides
when cached whole blobs get flushed. So a 500MiB file would still be
encoded relative to a single newer version anyway, and materializing it
would take memory accordingly.
But I am not at all sure about that.
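The eviction scenario Junio describes above can be sketched with a toy
size-bounded LRU cache. This is an illustrative model with made-up unit
sizes (think "MiB"), not Git's actual delta base cache implementation:

```python
from collections import OrderedDict

class ToyDeltaBaseCache:
    """Toy size-bounded LRU cache, loosely modeled on the idea of
    Git's delta base cache.  Purely illustrative."""

    def __init__(self, limit):
        self.limit = limit            # total size budget, e.g. 16 for "16 MiB"
        self.used = 0
        self.entries = OrderedDict()  # name -> size, oldest first

    def add(self, name, size):
        # Evict least-recently-added entries until the new object fits.
        while self.entries and self.used + size > self.limit:
            _, evicted_size = self.entries.popitem(last=False)
            self.used -= evicted_size
        if self.used + size > self.limit:
            # Everything was evicted and the object alone still busts
            # the limit -- the scenario described in the quoted text.
            return False
        self.entries[name] = size
        self.used += size
        return True

cache = ToyDeltaBaseCache(limit=16)   # "16 MiB" delta base cache
for i in range(8):
    cache.add(f"small-{i}", 2)        # small delta bases fill the cache
print(len(cache.entries))             # -> 8 (cache full at 16 units)

ok = cache.add("huge-blob", 500)      # blob just under the 512 threshold
print(ok, len(cache.entries))         # -> False 0: evicted everything,
                                      #    and still did not fit
```

The same arithmetic holds whether the limit is 16 or 128: a near-500MiB
base exceeds either budget, so raising the cache limit alone does not
help for such blobs.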