On Mon, Mar 25, 2013 at 01:01:59PM -0700, Junio C Hamano wrote:
> Jeff King <p...@peff.net> writes:
> > We _do_ see a problem during the checkout phase, but we don't propagate
> > a checkout failure to the exit code from clone. That is bad in general,
> > and should probably be fixed. Though it would never find corruption of
> > older objects in the history, anyway, so checkout should not be relied
> > on for robustness.
> It is obvious that we should exit with non-zero status when we see a
> failure from the checkout, but do we want to nuke the resulting
> repository as in the case of normal transport failure? A checkout
> failure might be due to being under quota for the object store
> but then running out of quota while populating the working tree,
> in which case we probably do not want to.
I'm just running through my final tests on a large-ish patch series
which deals with this (among other issues). I had the same thought,
though we do already die on a variety of checkout errors. I left it as a
die() for now, but I think we should potentially address it with a
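For context, here is a minimal sketch (the `src`/`dst` paths are
illustrative, not from the patch series) of the behavior callers want to
be able to rely on: a non-zero exit from clone whenever anything,
including checkout, fails.

```shell
# Illustrative only: make a throwaway source repo, clone it, and
# branch on clone's exit status -- the thing the series is meant to
# make trustworthy even when the failure happens during checkout.
rm -rf src dst
git init -q src
git -C src -c user.name=a -c user.email=a@example.com \
    commit -q --allow-empty -m init
if git clone -q src dst; then
    echo "clone succeeded"
else
    echo "clone failed with status $?" >&2
fi
```

Today a checkout failure can leave `dst` in place with a zero exit
status, which is exactly why scripts cannot use this pattern safely.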
> > $ git init non-local && cd non-local && git fetch ..
> > remote: Counting objects: 3, done.
> > remote: Total 3 (delta 0), reused 3 (delta 0)
> > Unpacking objects: 100% (3/3), done.
> > fatal: missing blob object 'd95f3ad14dee633a758d2e331151e950dd13e4ed'
> > error: .. did not send all necessary objects
> > we do notice.
> Yes, it is OK to add connectedness check to "git clone".
That's in my series, too. Unfortunately, in the local clone case, it
slows down the clone considerably (since we otherwise would not have to
traverse the objects at all).
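For readers following along, the connectedness check under discussion
amounts to walking every object reachable from the refs and erroring
out if any is missing. A rough stand-alone approximation (the `repo`
name is made up) using rev-list:

```shell
# Rough approximation of a connectivity check: rev-list walks every
# object reachable from all refs and exits non-zero if a reachable
# object is missing from the object store.
rm -rf repo
git init -q repo
git -C repo -c user.name=a -c user.email=a@example.com \
    commit -q --allow-empty -m init
if git -C repo rev-list --objects --all > /dev/null; then
    echo "all reachable objects present"
fi
```

That full traversal is precisely the cost mentioned above: a local
clone can otherwise just hardlink or copy the object store without ever
walking it.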