On 05/16/2014 03:13 AM, Duy Nguyen wrote:
> On Fri, May 16, 2014 at 2:06 AM, Philip Oakley <philipoak...@iee.org> wrote:
>> From: "John Fisher" <fishook2...@gmail.com>
>>> I assert, based on one piece of evidence (a post from a Facebook dev), that
>>> I now have the world's biggest and slowest git
>>> repository, and I am not a happy guy. I used to have the world's biggest
>>> CVS repository, but CVS can't handle multi-GB
>>> sized files. So I moved the repo to git, because we are using that for our
>>> new projects.
>>> I need to keep 150 GB of files (mostly binary), from tiny to over 8 GB each,
>>> in a version-control system.
> I think your best bet so far is git-annex
good, I am looking at that
> (or maybe bup) for dealing
> with huge files. I plan on resurrecting Junio's split-blob series to
> make core git handle huge files better, but there's no eta on that.
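The split-blob idea mentioned above — storing one huge file as many
hash-addressed chunks so no single object has to fit in memory — can be
sketched with plain coreutils. This is only an illustration of the concept,
not Junio's actual series; the file, directory names, and 1 MB chunk size
are all made up for the example:

```shell
# Stand-in for a multi-GB file (4 MB here so the sketch runs quickly).
dd if=/dev/zero of=bigfile bs=1M count=4 2>/dev/null

# Split into fixed-size chunks, then address each chunk by its hash,
# the way a chunk-based object store would.
mkdir -p chunks
split -b 1M bigfile chunks/part_
sha1sum chunks/part_*
```

With content-defined (rather than fixed-size) chunk boundaries, tools like
bup can additionally deduplicate chunks across versions of the file.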
> The problem here is about file size, not the number of files, or
> history depth, right?
When things here calm down, I could easily test the repo without the giant
files, leaving 99% of the files in place.
There is hardly any history depth, because these are releases,
version-controlled by directory name. As has been suggested, I could be
forced to abandon version control altogether, even to the point of just
using rsync. But I've been doing this with CVS for 10 years now, and I hate
to change or in any way move away from KISS. Moving it to Git may not have
been one of my better ideas...
> Probably known issues. But some elaboration would be nice (e.g. what
> operation is slow, how slow, some more detail
> characteristics of the repo..) in case new problems pop up.
So far I have done add, commit, status, and clone: commit and status are
slow; add seems to depend on the files involved; clone seems to run at
network speed.
I can provide metrics later (see above) - email me offline with what you want.
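One cheap way to gather such metrics is GIT_TRACE=1, which makes git print
each step it runs (with timestamps) to stderr, so the slow phase of a
command stands out. A hedged sketch on a throwaway repo (the repo and file
names are made up):

```shell
# Create a tiny test repo, commit one file, then run a traced status.
git init -q perf-test
( cd perf-test &&
  echo demo > file.bin &&
  git add file.bin &&
  git -c user.name=test -c user.email=test@example.com commit -qm init &&
  GIT_TRACE=1 git status >/dev/null )
```

Wrapping the same commands in `time` on the real 150 GB repo would give the
concrete numbers asked for above.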
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html