Linus Torvalds wrote:

> On Thu, 8 Sep 2005, Junio C Hamano wrote:
>
> > Yes, the reading of three trees upfront is probably the culprit
> > in your case.
>
> However, note that _most_ tree reading just reads one.
>
> Merges may take half a second, and yes, when I did it, the fact that we move things around in the array is by far the highest cost. But the thing is, if merges take half a second, that's still not only damn good, it's not even the most common operation.

In my case the merges were taking significantly longer than half a second. Making this change is certainly not worth it if merges are running fast...

> Yes, the active_cache[] layout as one big array is inconvenient for read_tree(), which tends to want to interleave different trees in the array and thus forces things to be moved around.
>
> But remember what the most common use for the index is - it's the "single tree" case from read_cache(). That's _so_ much more common than read_tree() that it's not even funny.

read_cache() is fast as implemented. The issue is that the next thing that is often done is a cache insert, which requires a memmove(), and that is slow.
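
To make that cost concrete, here is a minimal sketch of what inserting into one big sorted array looks like. This is not git's actual code; the struct and function names are made up for illustration, but the memmove() over the tail of the array is the same cost I am describing:

	/*
	 * Sketch only: a sorted array of pointers to read-only entries.
	 * Finding the slot is a cheap binary search, but making room
	 * means shifting every later pointer with memmove().
	 */
	#include <string.h>
	#include <stdlib.h>

	struct cache_entry;                    /* opaque, read-only data */

	struct index {
		struct cache_entry **entries;  /* sorted array of pointers */
		unsigned int nr, alloc;
	};

	/* pos is the slot found by binary search on the entry name */
	static void insert_entry(struct index *idx, unsigned int pos,
				 struct cache_entry *ce)
	{
		if (idx->nr == idx->alloc) {
			idx->alloc = idx->alloc ? idx->alloc * 2 : 64;
			idx->entries = realloc(idx->entries,
					       idx->alloc * sizeof(*idx->entries));
			/* error handling omitted for brevity */
		}
		/* O(n) shift: this is the memmove that dominates a big merge */
		memmove(idx->entries + pos + 1, idx->entries + pos,
			(idx->nr - pos) * sizeof(*idx->entries));
		idx->entries[pos] = ce;
		idx->nr++;
	}

With n entries already in the index, every insert shifts O(n) pointers, so populating the index entry by entry during a merge goes quadratic.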

> So the data structure is optimized for a different case than reading in trees. Big deal. That optimization is definitely worth it: it allows us to do the read_cache() with the actual index entries being totally read-only (a linked list would have to add a "next" pointer to the cache entries and not allow the in-place thing that read_cache() does).

They are still read-only with my linked-list implementation.
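
As an illustration of that point (a sketch of one way to do it, not necessarily my actual patch), the "next" pointer can live in a small separate node that only points at the mmap'ed entry, so the entry itself is never written to:

	/*
	 * Sketch only: list nodes point at entries in the mmap'ed index
	 * file instead of embedding a next pointer in the entry itself,
	 * so the entries stay read-only.
	 */
	#include <stdlib.h>

	struct cache_entry;                     /* lives in the mmap'ed file */

	struct cache_node {
		struct cache_node *next;
		const struct cache_entry *ce;   /* const: never written to */
	};

	/* insert after 'prev' (or at the head if prev is NULL) in O(1) */
	static struct cache_node *insert_after(struct cache_node **head,
					       struct cache_node *prev,
					       const struct cache_entry *ce)
	{
		struct cache_node *node = malloc(sizeof(*node));
		if (!node)
			return NULL;
		node->ce = ce;
		if (prev) {
			node->next = prev->next;
			prev->next = node;
		} else {
			node->next = *head;
			*head = node;
		}
		return node;
	}

Insertion then becomes O(1) once the position is known, at the price of one extra allocation per entry and giving up the cheap binary search over a contiguous array.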
--
Chuck Lever <[EMAIL PROTECTED]>
Member of Technical Staff, Linux NFS Client Development
Network Appliance, Inc., Ann Arbor, MI
