Sometimes I have to clone a repository over a slow / rather unreliable network and wonder whether there are any tools or tricks to reduce the pain
in such cases.

I'm having two problems:

Problem 1: Resuming a clone:

When the connection gets cut during the clone, I don't know how to continue / resume it. This is very annoying: after 20 minutes of cloning my connection is lost; I re-clone, and after 15 minutes the connection is lost again; I re-clone, and after 25 minutes I finally succeed.

The only workaround I know of would require full ssh access to the server: use rsync (which can be resumed) to copy the repository,
clone from the rsynced copy, and then manually change the URL of origin.

Problem 2: Full history but only required data
Very often I would only need a few versions (some branch heads / last week's commits). However, git clone --depth seems to have far too many restrictions (e.g. you can't push from such a clone).
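To illustrate: a shallow clone fetches only the last N commits of each branch (the URL is a placeholder), and it's the resulting repository that older git versions refuse to push from or fetch into:

```shell
# Fetch only the most recent commit of every branch; old history and
# old blobs are skipped, but the clone has the restrictions noted above.
git clone --depth 1 git://example.com/repo.git
```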
So I wondered whether it would be possible to clone the entire history
of a repository, but fetch only the data of a few specific commits.

This would reduce the transfer time massively, as the repository may contain some huge binary files, which may change every few versions.

What would be great would be:
- download only the history (of course with a resume option if the connection gets interrupted)
- then fetch the 'contents' for the versions I want to check out
(ideally by specifying some commit ranges, but specifying some specific commit ids would be good enough for me)

Are there any helper tools / plans which would allow such a workflow?

To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
