On Fri, Nov 22, 2013 at 01:28:59PM -0400, Joey Hess wrote:
> > Hrm. For --batch, I'd think we would open the whole object and notice
> > the corruption, even with the current code. But for --batch-check, we
> > use sha1_object_info, and for an "experimental" object, we do not need
> > to de-zlib the object at all. So we end up reporting whatever crap we
> > decipher from the garbage bytes. My patch would fix that, as we would
> > not incorrectly guess an object is experimental anymore.
> >
> > If you have specific cases that trigger even after my patch, I'd be
> > interested to see them.
> I was seeing it with --batch, not --batch-check. Probably only with the
> old experimental loose object format. In one case, --batch reported a
> size of 20k, and only output 1k of data. With the object file I sent
> earlier, --batch reports a huge size, and fails trying to allocate the
> memory for it before it can output anything.

Ah, yeah, that makes sense. We report the size via sha1_object_info,
whether we are going to output the object itself or not. So we might
report the bogus size, not noticing the corruption, and then hit an
error and bail when sending the object itself.
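To illustrate (a minimal Python sketch, not git's actual C code): a
loose object is a zlib-deflated "<type> <size>\0<payload>", and a size
query like sha1_object_info only needs to inflate far enough to read
that header, so a corrupted body goes completely unnoticed at this
stage:

```python
import zlib

# A standard loose object: zlib-deflated "<type> <size>\0<payload>".
payload = b"hello\n"
loose = zlib.compress(b"blob %d\x00" % len(payload) + payload)

# A size query only inflates enough output to reach the NUL that ends
# the header; the payload itself is never examined or verified here.
d = zlib.decompressobj()
head = d.decompress(loose, 32)          # inflate at most 32 output bytes
objtype, _, rest = head.partition(b" ")
size = int(rest.split(b"\x00", 1)[0])
print(objtype.decode(), size)           # blob 6
```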
My patch makes that better in some cases, because we'll notice more
corruption when looking at the header of the object for
sha1_object_info. But fundamentally, we may still hit an error while
outputting the bytes. Reading the cat-file code, it looks like we should
always die if we hit an error, so at least a reader will get premature
EOF (and not the beginning of another object).
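For a consumer of the --batch stream, that die() at least shows up as a
detectable short read. A toy Python reader (the framing is "<oid> SP
<type> SP <size> LF", then <size> bytes, then LF) can treat a short
read as corruption rather than misparsing the next record:

```python
import io

def read_batch_record(stream):
    # --batch framing: "<oid> <type> <size>\n", then <size> bytes, "\n".
    oid, objtype, size = stream.readline().split()
    body = stream.read(int(size))
    if len(body) != int(size):
        # cat-file died mid-object: report premature EOF instead of
        # treating the next record's header as object bytes.
        raise EOFError("short read: %d of %s bytes"
                       % (len(body), size.decode()))
    stream.read(1)  # consume the record-terminating newline
    return oid.decode(), objtype.decode(), body

good = io.BytesIO(b"ce013625 blob 6\nhello\n\n")
print(read_batch_record(good))          # ('ce013625', 'blob', b'hello\n')
```

A truncated stream (e.g. ending after b"hel") raises EOFError instead
of silently returning garbage.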
I can believe there is some specific corruption that yields a valid zlib
stream that is a different size than the object advertises. Since
v1.8.4, we double-check that the size we advertised matches what we are
about to write. But the streaming-blob code path does not include that
check, so it might still be affected. It would be pretty easy and cheap
to detect that case.
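A Python sketch of that cheap check: inflate fully, then compare the
header's advertised size against the byte count actually produced. The
zlib checksum passes because the stream itself is intact; only the
embedded size field is wrong:

```python
import zlib

# Corrupt variant: the header claims 20000 bytes but the stream only
# carries 6.  zlib is perfectly happy with this.
bogus = zlib.compress(b"blob %d\x00" % 20000 + b"hello\n")

raw = zlib.decompress(bogus)            # no zlib error here
header, _, body = raw.partition(b"\x00")
advertised = int(header.split(b" ")[1])
if advertised != len(body):
    print("corrupt: advertised %d, got %d" % (advertised, len(body)))
```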
In any code path where we call parse_object, we double-check that the
result matches the sha1 we asked for. But low-level commands like
cat-file just call read_sha1_file directly, and do not have such a
check. We could add it, but I suspect the processing cost would be
noticeable for a plumbing command meant to run cheaply in bulk.
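The check itself is simple to state (Python sketch; git names an object
by hashing "<type> <size>\0<payload>"): inflate the whole object,
re-hash it, and compare against the id we looked up.

```python
import zlib, hashlib

payload = b"hello\n"
raw = b"blob %d\x00" % len(payload) + payload
oid = hashlib.sha1(raw).hexdigest()     # the name the object is stored under

# parse_object-style verification: inflate and re-hash everything,
# then compare against the id we asked for.
inflated = zlib.decompress(zlib.compress(raw))
assert hashlib.sha1(inflated).hexdigest() == oid
print(oid)  # ce013625030ba8dba906f756967f9e9ca394464a
```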
> I also have seen at least once a corrupt pack file that caused git to
> try to allocate an absurd quantity of memory.

I'm not surprised by that. The packfiles contain size information
outside of the checksummed zlib data, and we pre-allocate the buffer
before reading the zlib data. We could try to detect it, but then we are
hard-coding the definition of "absurd". The current definition is "we
asked the OS for memory, and it did not give it to us". :)
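Concretely, each pack entry's type and size live in a little-endian
base-128 header in front of the zlib stream, so a flipped bit there
changes the buffer we pre-allocate without tripping any checksum at
read time. A Python sketch of the decoding:

```python
def parse_pack_entry_header(buf):
    # First byte: MSB = continuation, bits 6-4 = object type,
    # bits 3-0 = low bits of the size.  Each further byte contributes
    # 7 more bits of size, least-significant first.
    byte = buf[0]
    obj_type = (byte >> 4) & 0x7
    size = byte & 0x0F
    shift, i = 4, 1
    while byte & 0x80:
        byte = buf[i]
        size |= (byte & 0x7F) << shift
        shift += 7
        i += 1
    return obj_type, size, i

print(parse_pack_entry_header(b"\x36"))          # (3, 6, 1): blob, 6 bytes
print(parse_pack_entry_header(b"\xb0\xe2\x09"))  # (3, 20000, 3)
```

One corrupted continuation byte in that header is all it takes to turn
a 6-byte blob into a multi-gigabyte allocation request.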