On Mon, Apr 18, 2005 at 01:34:41AM +0200, Petr Baudis wrote:
> No. The collision check is done in the opposite cache - when you want to
> write a blob and there is already a file of the same hash in the tree.
> So either the blob is already in the database, or you have a collision.
> Therefore, the cost is proportional to the size of what stays unchanged.

This is only true if we're calling update-cache on all unchanged files.
If that's what git is doing then we're in trouble anyway.

Remember that prior to the collision check we've already spent the
effort in

1) Compressing the file.
2) Computing a SHA1 hash on the result.

These two steps together (especially the first one) are much more
expensive than a file-content comparison of the blob against what's
already in the tree.
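To make the cost ordering concrete, here is a rough sketch of the blob
write path as described above (compress, then SHA1 the result, then the
collision check). This is illustrative only: the function name, the
in-memory object store, and the collision handling are hypothetical
stand-ins, not git's actual code.

```python
import hashlib
import zlib

# Stand-in for the object database (.git/objects); hypothetical.
object_store = {}  # sha1 hex -> compressed blob


def write_blob(content: bytes) -> str:
    # Step 1: compress the file -- the dominant cost in this discussion.
    compressed = zlib.compress(content)
    # Step 2: compute SHA1 on the compressed result.
    sha1 = hashlib.sha1(compressed).hexdigest()
    if sha1 in object_store:
        # Collision check: a blob with this hash already exists.
        # Either it is the same content (the common case, so nothing
        # new is written) or a genuine SHA1 collision.
        if object_store[sha1] != compressed:
            raise RuntimeError("SHA1 collision detected")
        return sha1
    object_store[sha1] = compressed
    return sha1
```

Note that both the compression and the hash happen before the existence
check, which is why doing this for every checked-out file, changed or
not, would be far more expensive than a plain content comparison.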

Somehow I have a hard time seeing how this can be at all efficient if
we're compressing all checked-out files, including those which are
unchanged.
Therefore the only conclusion I can draw is that we're only calling
update-cache on the set of changed files, or at most a small superset
of them.  In that case, the cost of the collision check *is* proportional
to the size of the change.

Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <[EMAIL PROTECTED]>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
