Re: Long clone time after "done."

2012-11-07 Thread Uri Moszkowicz
for each call, but when you have thousands of them (one for each ref) it adds up. Adding --single-branch --branch doesn't appear to help, as it is applied afterwards. I would like to debug this problem further but am not familiar enough with the implementation to know what the next step
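
For reference, a quick way to see how many refs the clone has to walk (a minimal sketch, run inside the repository; each ref triggers one of those calls):

    # count the refs a clone has to process
    git for-each-ref | wc -l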

Re: Long clone time after "done."

2012-11-08 Thread Uri Moszkowicz
do to help debug this problem? On Thu, Nov 8, 2012 at 9:56 AM, Jeff King wrote: > On Wed, Nov 07, 2012 at 11:32:37AM -0600, Uri Moszkowicz wrote: > >> #4 parse_object (sha1=0xb0ee98 >> "\017C\205Wj\001`\254\356\307Z\332\367\353\233.\375P}D") at >> object.c:212
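
A backtrace like the one quoted can be captured from a running clone with a poor man's profiler along these lines (a sketch; assumes gdb and pgrep are available and the clone is the only matching process):

    # attach to the running git process and grab one backtrace
    gdb --batch -ex bt -p "$(pgrep -f 'git clone' | head -n 1)"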

Re: Long clone time after "done."

2012-11-08 Thread Uri Moszkowicz
~37k tags since we used to tag every commit with CVS. All my tags are packed so cat-file doesn't work: fatal: git cat-file refs/tags/some-tag: bad file On Thu, Nov 8, 2012 at 2:33 PM, Jeff King wrote: > On Thu, Nov 08, 2012 at 11:20:29AM -0600, Uri Moszkowicz wrote: > >> I tried
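
With packed refs, the per-tag check can be scripted through for-each-ref instead of touching loose ref files (a sketch, assuming tags that resolve to commits):

    # verify that every tag peels to a readable commit
    for t in $(git for-each-ref --format='%(refname)' refs/tags); do
        git cat-file -e "$t^{commit}" || echo "bad: $t"
    done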

Re: Long clone time after "done."

2012-11-08 Thread Uri Moszkowicz
I ran "git gc --aggressive" before. On Thu, Nov 8, 2012 at 4:11 PM, Jeff King wrote: > On Thu, Nov 08, 2012 at 03:49:32PM -0600, Uri Moszkowicz wrote: > >> I'm using RHEL4. Looks like perf is only available with RHEL6. > > Yeah, RHEL4 is pretty ancient; I think i
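
For what it's worth, on a machine with a kernel new enough for perf, profiling the slow clone might look roughly like this (a sketch; paths are placeholders):

    # record a profile of the local no-checkout clone, then inspect it
    perf record -g -- git clone -n /path/to/repo clone-test
    perf report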

Re: Long clone time after "done."

2012-11-08 Thread Uri Moszkowicz
[k] clear_page_c Does this help? Machine has 396GB of RAM if it matters. On Thu, Nov 8, 2012 at 4:33 PM, Jeff King wrote: > On Thu, Nov 08, 2012 at 04:16:59PM -0600, Uri Moszkowicz wrote: > >> I ran "git cat-file commit some-tag" for every tag. They seem to be >> ro

Re: Long clone time after "done."

2012-11-26 Thread Uri Moszkowicz
Hi guys, Any further interest on this scalability problem or should I move on? Thanks, Uri On Thu, Nov 8, 2012 at 5:35 PM, Uri Moszkowicz wrote: > I tried on the local disk as well and it didn't help. I managed to > find a SUSE11 machine and tried it there but no luck so I t

error: git-fast-import died of signal 11

2012-10-15 Thread Uri Moszkowicz
Hi, I'm trying to convert a CVS repository to Git using cvs2git. I was able to generate the dump file without problem but am unable to get Git to fast-import it. The dump file is 328GB and I ran git fast-import on a machine with 512GB of RAM. fatal: Out of memory? mmap failed: Cannot allocate memory
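
For context, the standard cvs2git flow feeds the generated dump into fast-import roughly like this (a sketch; the file names follow the cvs2git examples and the CVS path is a placeholder):

    # generate the dump files, then stream them into a fresh bare repo
    cvs2git --blobfile=git-blob.dat --dumpfile=git-dump.dat /path/to/cvsrepo
    git init --bare repo.git
    cd repo.git
    cat ../git-blob.dat ../git-dump.dat | git fast-import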

Re: error: git-fast-import died of signal 11

2012-10-15 Thread Uri Moszkowicz
On 10/15/2012 11:53 AM, Uri Moszkowicz wrote: >> >> I'm trying to convert a CVS repository to Git using cvs2git. I was able to >> generate the dump file without problem but am unable to get Git to >> fast-import it. The dump file is 328GB and I ran git fast-import on a >> machine

Re: error: git-fast-import died of signal 11

2012-10-16 Thread Uri Moszkowicz
I don't think modifying the original repository or a clone of it is possible at this point, but breaking up the import into a few steps may be - will try that next if this fails. On Tue, Oct 16, 2012 at 2:18 AM, Michael Haggerty wrote: > On 10/15/2012 05:53 PM, Uri Moszkowicz wrote: >> I'm

Re: error: git-fast-import died of signal 11

2012-10-17 Thread Uri Moszkowicz
Hi Michael, Looks like the changes to the limits solved the problem. I didn't verify whether it was the stack size or the descriptors, but it was one of those. Final repository size was 14GB from a 328GB dump file. Thanks, Uri On Tue, Oct 16, 2012 at 2:18 AM, Michael Haggerty wrote: > On 10/15/2012 05:53
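
The resource-limit workaround presumably amounted to something like the following before rerunning the import (a sketch; the exact values are guesses, not the ones from the thread):

    # raise the limits, then retry the import in the same shell
    ulimit -s unlimited      # stack size
    ulimit -n 8192           # open file descriptors
    cat git-blob.dat git-dump.dat | git fast-import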

Unexpected directories from read-tree

2012-10-18 Thread Uri Moszkowicz
I'm testing out the sparse checkout feature of Git on my large (14GB) repository and am running into a problem. When I add "dir1/" to sparse-checkout and then run "git read-tree -mu HEAD" I see dir1 as expected. But when I add "dir2/" to sparse-checkout and read-tree again I see dir2 and dir3 appear
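
For reference, the sparse-checkout procedure being tested looks roughly like this (a sketch of the stock recipe from the era; dir1/dir2 are the directories from the report):

    # enable sparse checkout, then add directories one at a time
    git config core.sparseCheckout true
    echo "dir1/" >> .git/info/sparse-checkout
    git read-tree -mu HEAD
    echo "dir2/" >> .git/info/sparse-checkout
    git read-tree -mu HEAD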

Re: Unexpected directories from read-tree

2012-10-19 Thread Uri Moszkowicz
Nguyen Thai Ngoc Duy wrote: > On Fri, Oct 19, 2012 at 6:10 AM, Uri Moszkowicz wrote: >> I'm testing out the sparse checkout feature of Git on my large (14GB) >> repository and am running into a problem. When I add "dir1/" to >> sparse-checkout and then run "

tag storage format

2012-10-22 Thread Uri Moszkowicz
I'm doing some testing on a large Git repository and am finding local clones to take a very long time. After some investigation I've determined that the problem is due to a very large number of tags (~38k). Even with hard links, it just takes a really long time to visit that many inodes. As it happens
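
Judging from the follow-up in this thread ("That did the trick"), the fix was most likely packing the refs; a minimal sketch:

    # collapse ~38k loose ref files into a single .git/packed-refs file
    git pack-refs --all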

Re: tag storage format

2012-10-23 Thread Uri Moszkowicz
That did the trick - thanks! On Mon, Oct 22, 2012 at 5:46 PM, Andreas Schwab wrote: > > Uri Moszkowicz writes: > > > Perhaps Git should switch to a single-file block text or binary format > > once a large number of tags becomes present in a repository. > > This is wh

Long clone time after "done."

2012-10-23 Thread Uri Moszkowicz
I have a large repository which I ran "git gc --aggressive" on that I'm trying to clone on a local file system. I would expect it to complete very quickly with hard links but it's taking about 6min to complete with no checkout (git clone -n). I see the message "Cloning into 'repos'... done." appear
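
The timing test presumably looks something like this (a sketch; the source path is a placeholder):

    # local clone, no checkout; hard links should make the object copy cheap
    time git clone -n /path/to/repo repos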

Large number of object files

2012-10-23 Thread Uri Moszkowicz
Continuing to work on improving clone times: "git gc --aggressive" consolidated the large number of tags into a single file, but now I have a large number of files in the objects directory - 131k for a ~2.7GB repository. Any way to reduce the number of these files to speed up clones
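
Consolidating loose objects into a single pack is normally a job for repack (a sketch; the window/depth values are illustrative, not from the thread):

    # pack everything into one pack file and drop redundant loose objects
    git repack -a -d -f --window=250 --depth=250
    git count-objects -v    # check the loose-object count afterwards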

Re: Long clone time after "done."

2012-10-24 Thread Uri Moszkowicz
It all goes to pack_refs() in write_remote_refs called from update_remote_refs(). On Tue, Oct 23, 2012 at 11:29 PM, Nguyen Thai Ngoc Duy wrote: > On Wed, Oct 24, 2012 at 1:30 AM, Uri Moszkowicz wrote: >> I have a large repository which I ran "git gc --aggressive" on that
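
One way to confirm at the syscall level that the clone's time goes into writing refs would be something like this (a sketch; assumes strace is available and the paths are placeholders):

    # summarize where the clone spends its syscall time
    strace -c -f git clone -n /path/to/repo clone-test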