On Thu, Nov 28, 2013 at 3:55 PM, zhifeng hu <z...@ancientrocklab.com> wrote:
> The repository is growing fast, and things are getting harder. The size has
> already reached several GB; it may well reach TB or more.
> How do we handle that?
> If a transfer breaks, it cannot be resumed, which wastes both time and
> bandwidth.
>
> Git should better support resumable transfers.
> Right now it does not do this job well.
> Sharing code, managing code, transferring code: isn't that what we imagine
> a VCS to be?

You're welcome to step up and do it. Off the top of my head, there are a few options:

 - better integration with git bundles: provide a way to seamlessly
create/fetch/resume bundles with "git clone" and "git fetch" (a rough
sketch of today's manual version follows below)
 - shallow/narrow clone: get a small part of the repo first (one depth
level, a few paths), then get more and more over many iterations, so
that if one iteration fails we don't lose everything (second sketch
below)
 - stabilize pack order so we can resume downloading a pack
 - remote alternates: the repo asks for more and more objects as you
need them (so goodbye to the distributed model)
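
To make the first option concrete, here is roughly what the manual
bundle workaround looks like today; the URL, file names, and the
"master" branch are just placeholders for illustration. The point of
the proposal would be to fold this into clone/fetch itself:

    # On the server side, pack the whole repository into one file:
    git bundle create repo.bundle --all

    # Transfer it over any protocol that can resume, e.g. HTTP ranges:
    wget -c https://example.com/repo.bundle

    # Clone from the local bundle ("master" assumed as the branch),
    # then repoint origin at the live repository for future fetches:
    git clone -b master repo.bundle repo
    cd repo
    git remote set-url origin https://example.com/repo.git
    git fetch origin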
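
And the second option is already half possible in its crude form with
shallow clones, deepening in steps so that a broken fetch only costs
you one step; the narrow (path-limited) part is what does not exist
yet. Again, the URL is a placeholder:

    # Get only the most recent commit first:
    git clone --depth 1 https://example.com/repo.git
    cd repo

    # Deepen the history in increments; each step is a separate
    # transfer, so a failure only loses that step:
    git fetch --depth 100
    git fetch --depth 1000

    # Finally fetch the rest of the history:
    git fetch --unshallow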
-- 
Duy