On Mon, 28 Apr 2014 21:16:40 +0200
Gergely Polonkai <gerg...@polonkai.eu> wrote:

> I’m trying to clone an SVN repository with around 48000 revisions and
> several branches and tags (svn://svn.zabbix.com). After a few
> thousand commits, Git failed (complaining about something involving
> sed; I hadn’t written it down), so I svnrdumped the whole repository
> onto my filesystem.
> 
> After that, I used this command:
> 
> git svn clone -s --prefix=origin/ file://`pwd`/zabbix-svn zabbix-git/
> 
> to import the whole dump into a new Git repository. It ran for around
> 24 hours and then reported an out-of-memory error at the end. I’ve
> since realised this is a somewhat bad idea, but I still don’t think it
> should fail this way. Or should it?

Do you have insanely huge files somewhere in those commits?
I'm speculating that the sheer number of commits should not affect Git
that much, but huge files *could*, because of the xdelta delta
compression Git applies when packing objects (which, in typical
builds, uses memory mapping).
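
One way to check that guess is to list the biggest blobs in whatever
part of the history was imported before the failure; this pipeline is
a generic sketch, not something specific to the Zabbix repo:

  git rev-list --objects --all \
    | git cat-file --batch-check='%(objectsize) %(objectname) %(rest)' \
    | sort -n -r | head -20

If very large files do show up, you could try capping the memory Git
uses for delta compression before retrying the clone. The values
below are only guesses and will likely need tuning for your machine:

  # limit the delta window and delta cache used while packing
  git config pack.windowMemory 256m
  git config pack.deltaCacheSize 256m
  # store blobs above this size without delta compression at all
  git config core.bigFileThreshold 64m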
