+ Duy, who tried resumable clone a few days/weeks ago

On Tue, Mar 1, 2016 at 5:30 PM, Josh Triplett <j...@joshtriplett.org> wrote:
> If you clone a repository, and the connection drops, the next attempt
> will have to start from scratch.  This can add significant time and
> expense if you're on a low-bandwidth or metered connection trying to
> clone something like Linux.
>
> Would it be possible to make git clone resumable after a partial clone?
> (And, ideally, to make that the default?)
>
> In a discussion elsewhere, Al Viro suggested taking the partial pack
> received so far,

ok,

> repairing any truncation,

So throwing away any half-finished object at the end, while keeping the
complete front part of the pack?
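(For illustration only: a minimal sketch of what keeping the complete front
part could look like, assuming we only deal with non-delta objects. The
format details, the 12-byte "PACK" header and one zlib stream per object,
are from the packfile documentation; the function names
`keep_complete_objects` and `entry` are made up, and a real pack would also
carry a 20-byte trailing checksum that this toy omits.)

```python
import struct
import zlib

def keep_complete_objects(pack: bytes) -> bytes:
    """Return the prefix of a (possibly truncated) pack stream that
    covers only the objects which arrived completely."""
    # A pack starts with a 12-byte header: "PACK", version, object count.
    if len(pack) < 12 or pack[:4] != b"PACK":
        return b""
    _version, count = struct.unpack(">II", pack[4:12])
    pos = 12   # current scan position
    good = 12  # end of the last fully received object
    for _ in range(count):
        p = pos
        if p >= len(pack):
            break
        # Type/size header: a varint whose first byte also carries the
        # 3-bit object type. (Delta objects would need their base info
        # parsed here as well; this sketch handles only non-delta types.)
        c = pack[p]; p += 1
        while c & 0x80:
            if p >= len(pack):
                return pack[:good]
            c = pack[p]; p += 1
        # The object payload is a single zlib stream; if it stops before
        # the deflate end-of-stream marker, the object is incomplete.
        d = zlib.decompressobj()
        try:
            d.decompress(pack[p:])
        except zlib.error:
            break
        if not d.eof:
            break
        pos = p + (len(pack) - p - len(d.unused_data))
        good = pos
    return pack[:good]

def entry(type_id: int, payload: bytes) -> bytes:
    """Encode one non-delta pack entry (varint type/size + zlib data)."""
    size = len(payload)
    head = bytearray([(type_id << 4) | (size & 0x0F)])
    size >>= 4
    while size:
        head[-1] |= 0x80
        head.append(size & 0x7F)
        size >>= 7
    return bytes(head) + zlib.compress(payload)

# Two blobs (type 3), then simulate the connection dropping mid-object.
blob1 = entry(3, b"hello")
blob2 = entry(3, b"resumable clone")
full = struct.pack(">4sII", b"PACK", 2, 2) + blob1 + blob2
truncated = full[: 12 + len(blob1) + len(blob2) // 2]
salvaged = keep_complete_objects(truncated)
print(len(salvaged) == 12 + len(blob1))  # True: the first object survives
```

In a real implementation you would then index the salvaged objects and have
the next fetch negotiate around them, which is where the protocol work
would come in.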

> indexing the objects it
> contains, and then re-running clone and not having to fetch those
> objects.

The pack is not deterministic for a given repository. When creating
the pack with multiple threads, races can change the order in which
objects are emitted, so two packs of the same repository need not be
byte-identical.
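One concrete consequence: the pack stream ends with a SHA-1 checksum over
everything before it, so a resumed transfer only yields a valid pack if the
server regenerates byte-identical output. A minimal sketch of that trailer
check (the helper name `trailer_ok` is made up; the header/trailer layout
is from the packfile format documentation):

```python
import hashlib
import struct

def trailer_ok(pack: bytes) -> bool:
    """A pack's last 20 bytes are the SHA-1 of everything before them."""
    return len(pack) > 20 and hashlib.sha1(pack[:-20]).digest() == pack[-20:]

# Build the smallest possible pack: version 2, zero objects, plus trailer.
body = struct.pack(">4sII", b"PACK", 2, 0)
pack = body + hashlib.sha1(body).digest()
print(trailer_ok(pack))       # True
print(trailer_ok(pack[:-5]))  # False: a truncated pack fails the check
```

If even one object comes out in a different order on the second run, every
subsequent byte, and the trailer, changes, so a naive byte-range resume
cannot work.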

> This may also require extending receive-pack's protocol for
> determining objects the recipient already has, as the partial pack may
> not have a consistent set of reachable objects.
>
> Before starting down the path of developing patches for this, does the
> approach seem potentially reasonable?

I think that sounds reasonable at a high level, but I'd expect it to blow
up in complexity, either in the receive-pack protocol or in the code that
has to handle partial packs.

Thanks,
Stefan

>
> - Josh Triplett
> --
> To unsubscribe from this list: send the line "unsubscribe git" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
