No files that big (I have 8GB of RAM, about 5GB available at the time), nor
anywhere close. To be somewhat precise, the biggest file I found is below
100MB, and that is in the current (latest) revision. I will check whether the
commit that triggered the error contains such a file, and come back with the
result in the evening (CEST).
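For what it's worth, this is roughly how I looked for the biggest files in the checkout (a quick sketch assuming GNU find and sort are available; paths and limits are just examples):

```shell
# List the 10 largest files in the current working tree,
# one "size-in-bytes  path" pair per line, largest first.
find . -type f -printf '%s %p\n' | sort -rn | head -10
```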
On 29 Apr 2014 12:32, "Konstantin Khomoutov" <flatw...@users.sourceforge.net> wrote:
> On Mon, 28 Apr 2014 21:16:40 +0200
> Gergely Polonkai <gerg...@polonkai.eu> wrote:
> > I’m trying to clone an SVN repository with around 48000 revisions,
> > several branches and tags (svn://svn.zabbix.com). After a few
> > thousand commits, Git failed (complaining about something in sed; I
> > hadn't written it down), so I svnrdumped the whole repository onto my
> > filesystem.
> > After that, I used this command:
> > git svn clone -s --prefix=origin/ file://`pwd`/zabbix-svn zabbix-git/
> > to import the whole repo into a new Git repository. It ran for around
> > 24 hours, only to report an Out of memory error at the end. Although I
> > have since realised that this was a somewhat bad idea, I still don't
> > think it should fail this way. Or should it?
> Do you have insanely huge files somewhere in those commits?
> I'm speculating that the sheer number of commits should not affect Git
> that much, but huge files *could* -- due to its use of xdelta
> compression when packing files (which, in turn, in typical builds uses
> memory mapping).
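To check for huge files anywhere in the history (not just the latest revision), one could list every blob Git knows about, sorted by size; and if repacking memory is the problem, the pack window memory can be capped. A sketch using standard Git plumbing (run inside the repository; the config values are illustrative, not recommendations):

```shell
# List all blobs in the repository's history, largest first.
# rev-list emits "<sha> <path>"; cat-file's %(rest) echoes the path back.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectsize) %(objectname) %(rest)' \
  | grep '^blob' \
  | sort -k2 -rn \
  | head -10

# Example tuning to limit memory used by delta compression during
# repacking on a low-RAM machine (values chosen arbitrarily here).
git config pack.windowMemory 256m
git config pack.packSizeLimit 1g
```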
You received this message because you are subscribed to the Google Groups "Git
for human beings" group.