Jeff King <p...@peff.net> writes:

> When we read a sha1 file, we first look for a packed
> version, then a loose version, and then re-check the pack
> directory again before concluding that we cannot find it.
> This lets us handle a process that is writing to the
> repository simultaneously (e.g., receive-pack writing a new
> pack followed by a ref update, or git-repack packing
> existing loose objects into a new pack).
>
> However, we do not do the same trick with has_sha1_file; we
> only check the packed objects once, followed by loose
> objects. This means that we might incorrectly report that we
> do not have an object, even though we could find it if we
> simply re-checked the pack directory.
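
Just to restate the pattern being described in a rough sketch (the
helper names below are placeholders for illustration, not the actual
functions in sha1_file.c):

	/*
	 * Illustrative sketch only: find_in_packs() and find_loose()
	 * stand in for the real pack/loose lookups, and
	 * rescan_pack_directory() for whatever re-reads the pack
	 * directory.
	 */
	static int lookup_object(const unsigned char *sha1)
	{
		if (find_in_packs(sha1))
			return 1;
		if (find_loose(sha1))
			return 1;
		/*
		 * A concurrent writer (e.g. receive-pack or repack)
		 * may have just migrated the object into a new pack;
		 * re-scan the pack directory before giving up.
		 */
		rescan_pack_directory();
		return find_in_packs(sha1);
	}

has_sha1_file(), as described above, stops after the first two checks
and never takes that final re-scan step.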

Hmm, would the same reasoning apply to sha1_object_info(), or does
the existing critical code happen not to have a problematic calling
sequence like the one you noticed for repack?
