On 05/15/2014 12:06 PM, Philip Oakley wrote:
> From: "John Fisher" <fishook2...@gmail.com>
>> I assert, based on one piece of evidence (a post from a Facebook dev),
>> that I now have the world's biggest and slowest git
>> repository, and I am not a happy guy. I used to have the world's
>> biggest CVS repository, but CVS can't handle multi-GB
>> files. So I moved the repo to git, because we are using that
>> for our new projects.
>> I need to keep 150 GB of files (mostly binary), from tiny to over 8 GB
>> each, in a version-control system.
>> git is absurdly slow -- think hours -- on fast hardware.
>> Any suggestions beyond these-
You could shard: break the problem up into smaller repositories, e.g. via
submodules. With ~128 shards, I'd expect 129 small clones (the superproject
plus the shards) to complete faster than a single 150G clone, as well as
being resumable etc.
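Sketched in shell (the shard naming, URL, and count here are all
assumptions -- adjust for the real layout):

```shell
# clone_shards BASE N -- fetch N shard repos from BASE one at a time.
# Shards that are already present are skipped, so an interrupted run
# can simply be restarted; a monolithic 150G clone can't do that.
clone_shards() {
    base=$1; n=$2; i=0
    while [ "$i" -lt "$n" ]; do
        shard=$(printf 'shard-%03d' "$i")
        [ -d "$shard/.git" ] || git clone -q "$base/$shard.git" "$shard"
        i=$((i + 1))
    done
}
```

e.g. `clone_shards git://example.com/bigrepo 128`.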
The first challenge will be figuring out what to shard on, and how to
lay out the repository. You could keep all of the large files in their
own sharded area, with the main repository holding just symlinks into
it. In that case, I would recommend sharding by the date each blob was
introduced, so that there's a good chance you won't need to clone
everything forever: shards contributing few files to the current
version could in theory be retired. Or, if the directory structure
already suits it, you could "directly" use submodules.
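A self-contained demo of that symlink layout, using throwaway local
repos (in reality the date-named shard would be a remote repository,
and all paths here are made up):

```shell
set -e
# Throwaway identity for the demo commits only.
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d) && cd "$tmp"

# A date-named shard repo holding one big blob.
git init -q blobs-2014-05
( cd blobs-2014-05 &&
  printf 'stand-in for a multi-GB file\n' > huge.bin &&
  git add huge.bin && git commit -qm 'add huge.bin' )

# The main repo references the shard as a submodule and carries only
# a symlink, so its own object store stays small.
git init -q main && cd main
git -c protocol.file.allow=always submodule --quiet add \
    "$tmp/blobs-2014-05" blobs/2014-05
ln -s blobs/2014-05/huge.bin huge.bin
git add .gitmodules blobs/2014-05 huge.bin
git commit -qm 'huge.bin lives in the 2014-05 blob shard'
```

(The `protocol.file.allow` override is only needed for the local-path
demo; a real remote URL wouldn't need it.)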
The second challenge will be writing the filter-branch script for this :-)
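As a hedged sketch of that second step, run on a throwaway repo (the
real run would operate on a clone of the 150G repository, and the
directory names here are invented): `--subdirectory-filter` rewrites
history so that only the big-file directory remains, promoted to the
project root, ready to become a shard.

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1  # newer gits pause with a warning otherwise
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d) && cd "$tmp"

git init -q repo && cd repo
mkdir bigfiles
printf 'stand-in for an 8G blob\n' > bigfiles/huge.bin
printf 'small source file\n' > main.c
git add . && git commit -qm 'add big and small files'

# Keep only bigfiles/ in every commit; commits that touched nothing
# under it are dropped by --prune-empty.
git filter-branch --prune-empty --subdirectory-filter bigfiles -- --all
```

The complementary rewrite -- deleting the directory from the main
repo's history -- would be something like
`git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bigfiles' -- --all`.
Expect filter-branch to be slow on a repo this size; run it once,
offline, per shard.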