On 4/20/05, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> It really _shouldn't_ be faster. It still does the compression, and throws
> the end result away.

Am I misunderstanding, or is the problem that doing:
<file with unknown status> -> compress -> sha1 -> compare with existing hash

is expensive?

What about doing:
<file it's supposed to be equal to> -> uncompress -> compare with
unknown status file

It's more file I/O, but decompression is much cheaper than compression.
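
Something like this, as a rough sketch -- assuming the object is stored
as a plain zlib-deflated blob, and glossing over the header git actually
prepends before deflating. The function names here are made up:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

/* Slurp a whole file into a malloc'd buffer; length returned via *len. */
static void *read_whole_file(const char *path, unsigned long *len)
{
	FILE *f = fopen(path, "rb");
	long sz;
	void *buf = NULL;

	if (!f)
		return NULL;
	fseek(f, 0, SEEK_END);
	sz = ftell(f);
	rewind(f);
	buf = malloc(sz ? sz : 1);
	if (buf && fread(buf, 1, sz, f) != (size_t)sz) {
		free(buf);
		buf = NULL;
	}
	fclose(f);
	*len = sz;
	return buf;
}

/*
 * Compare a working-tree file against a stored, deflated object by
 * inflating the object -- never deflating or hashing the file.
 * Returns 1 if identical, 0 if not, -1 on error.
 */
int matches_stored_object(const char *work_path, const char *obj_path)
{
	unsigned long work_len, obj_len;
	uLongf out_len;
	void *work = read_whole_file(work_path, &work_len);
	void *obj = read_whole_file(obj_path, &obj_len);
	unsigned char *out = NULL;
	int ret = -1;

	if (!work || !obj)
		goto done;

	/* Size the output buffer by the file we expect the object to
	 * equal: if the object inflates to more than that, uncompress()
	 * errors out, and we already know the two differ. */
	out_len = work_len;
	out = malloc(work_len ? work_len : 1);
	if (!out)
		goto done;

	if (uncompress(out, &out_len, obj, obj_len) != Z_OK)
		ret = 0;	/* too big or corrupt: not a match */
	else
		ret = (out_len == work_len && !memcmp(out, work, work_len));
done:
	free(out);
	free(work);
	free(obj);
	return ret;
}

(Build with -lz. The point is just that inflate is the cheap direction:
if the inflated bytes don't fit or don't match, you know the file
changed without ever running deflate or SHA1 on it.)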

On a second issue, what's the format of the main 'index' file?  Is it:
<pathspec> <sha1hash>
<pathspec> <sha1hash> 
?
If so, that's not going to compress well.  A file like:
<pathspec1>
<pathspec2>

<sha1hash1>
<sha1hash2>

will compress better, since the compressor can then exploit the
similarity between neighbouring pathnames instead of having random
hash bytes break up every match.
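
A quick way to check that claim, for what it's worth -- the two entries
below are invented, and this just deflates the same records laid out
both ways and prints the compressed sizes:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	/* Two made-up index entries, interleaved vs. grouped. */
	const char *interleaved =
		"fs/inode.c 557db03de997c86a4a028e1ebd3a1ceb225be238\n"
		"fs/namei.c 2bdf67abb163a4ffb2d7f3769b8da09d7abae741\n";
	const char *grouped =
		"fs/inode.c\n"
		"fs/namei.c\n"
		"557db03de997c86a4a028e1ebd3a1ceb225be238\n"
		"2bdf67abb163a4ffb2d7f3769b8da09d7abae741\n";
	unsigned char buf[1024];
	uLongf n;

	n = sizeof(buf);
	compress(buf, &n, (const unsigned char *)interleaved,
		 strlen(interleaved));
	printf("interleaved: %lu bytes\n", (unsigned long)n);

	n = sizeof(buf);
	compress(buf, &n, (const unsigned char *)grouped,
		 strlen(grouped));
	printf("grouped:     %lu bytes\n", (unsigned long)n);
	return 0;
}

With only two entries it won't show much of a gap; feed it a real
index's worth of paths and the grouped layout should pull ahead.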

Stop me if I'm way off base -- I'm just following the mailing list;
I haven't tried out the code.

Cheers,
David