"V.Krishn" <vkris...@gmail.com> writes:
> On Friday, August 30, 2013 02:40:34 AM you wrote:
>> V.Krishn wrote:
>> > Quite often, when cloning a large repo stalls, hitting Ctrl+C removes
>> > what has been downloaded, and the process needs to be restarted.
>> > Is there a way to recover or continue from the already downloaded
>> > files during cloning?
>> No, sadly. The pack sent for a clone is generated dynamically, so
>> there's no easy way to support the equivalent of an HTTP Range request
>> to resume. Someone might implement an appropriate protocol extension
>> to tackle this (e.g., peff's seed-with-clone.bundle hack) some day,
>> but for now it doesn't exist.
> This is what I tried, but then realized something more is needed:
> During a stalled clone, avoid Ctrl+C.
> 1. Copy the contents, i.e. the .git folder, to some other place.
> 2. cd <new dir>
> 3. git config fetch.unpackLimit 999999
> 4. git config transfer.unpackLimit 999999
These two steps will not help, as negotiation between the sender and
the receiver is based on the commits that are known to be complete,
and an earlier failed "fetch" will not (and should not) update refs
on the receiver's side.
>> What you *can* do today is create a bundle from the large repo
>> somewhere with a reliable connection and then grab that using a
>> resumable transport such as HTTP.
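A sketch of that workflow, assuming you have shell access to a machine with a reliable connection to the server (the URL and file names here are placeholders):

```shell
# On the machine with the reliable connection: mirror the repo and
# pack every ref into a single bundle file.
git clone --mirror https://example.com/big.git big.git
git -C big.git bundle create big.bundle --all

# Transfer big.bundle over any resumable channel; e.g. with HTTP,
# "wget -c" continues a partially downloaded file after interruptions.
wget -c https://example.com/big.bundle

# A bundle is a valid clone source, so finish locally:
git clone big.bundle big
git -C big remote set-url origin https://example.com/big.git
git -C big fetch origin   # catch up with anything newer than the bundle
```

Since the bundle is a static file, the interrupted download can be retried as many times as needed without the server regenerating a pack each time.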
Another possibility: if the project being cloned has a tag (or a
branch) that points at a commit from back when it was smaller, do this
git init x &&
cd x &&
git fetch $that_repository $that_tag:refs/tags/back_then_i_was_small
to prime the object store of a temporary repository 'x' with a
hopefully smaller transfer, and then use it as a "--reference"
repository to the real clone.
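Put together, the whole sequence might look like this (the repository URL and tag name are placeholders):

```shell
# Prime a temporary repository 'x' with a smaller, older slice of history.
git init x &&
cd x &&
git fetch https://example.com/big.git v1.0:refs/tags/back_then_i_was_small
cd ..

# The real clone borrows objects already present in x via alternates,
# so only objects missing from x are transferred over the network.
git clone --reference x https://example.com/big.git big
```

Note that 'big' then depends on 'x' staying in place; to sever that link afterwards, repack everything into 'big' itself with "git -C big repack -a -d" and remove big/.git/objects/info/alternates.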