On Mon, Jan 21, 2013 at 11:42 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
> On Mon, Jan 21, 2013 at 11:18 AM, Sean Farley
> <sean.michael.farley at gmail.com> wrote:
>>
>> On Mon, Jan 21, 2013 at 11:03 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> >
>> > On Mon, Jan 21, 2013 at 10:53 AM, Sean Farley
>> > <sean.michael.farley at gmail.com> wrote:
>> >>
>> >> Well, did you try this with the equivalent mercurial feature:
>> >> largefiles?
>> >
>> >
>> > Nope, feel free. Most of the speedup is independent of the large files
>> > (which only change the git repo size from 78MB to 50MB).
>>
>> Righto.
>
>
> Here's the clone without any "git-fat" business (18 seconds, versus 12
> seconds with git-fat):
>
> $ time git clone git at bitbucket.org:jedbrown/petsc-git
> Cloning into 'petsc-git'...
> remote: Counting objects: 300368, done.
> remote: Compressing objects: 100% (66014/66014), done.
> remote: Total 300368 (delta 233578), reused 300368 (delta 233578)
> Receiving objects: 100% (300368/300368), 68.18 MiB | 10.22 MiB/s, done.
> Resolving deltas: 100% (233578/233578), done.
> 18.067 real 16.042 user 2.080 sys 100.30 cpu
> $ du -hs petsc-git/.git
> 77M petsc-git/.git
(Finally sending this out after sitting in my draft folder for far too
long. Not that it matters anymore.)

I think you're conflating the timings here. Let's first isolate network
speed:

$ time git clone -n https://bitbucket.org/jedbrown/petsc-git-lean petsc-git
Cloning into 'petsc-git'...
remote: Counting objects: 297100, done.
remote: Compressing objects: 100% (67974/67974), done.
remote: Total 297100 (delta 228357), reused 297100 (delta 228357)
Receiving objects: 100% (297100/297100), 41.22 MiB | 873 KiB/s, done.
Resolving deltas: 100% (228357/228357), done.

real    1m6.176s
user    0m7.732s
sys     0m2.283s

$ time hg clone --uncompressed -U https://bitbucket.org/petsc/petsc-dev petsc-dev
streaming all changes
10150 files to transfer, 150 MB of data
transferred 150 MB in 181.4 seconds (847 KB/sec)

real    3m11.839s
user    0m5.111s
sys     0m5.308s

$ du -sh petsc-git

So, 1m6s for 50MB of data vs 3m11s for 150MB of data. Seems about spot
on, there.

$ time git co master
Already on 'master'

real    0m1.031s
user    0m0.455s
sys     0m0.462s

$ time hg up
4209 files updated, 0 files merged, 0 files removed, 0 files unresolved

real    0m0.987s
user    0m0.001s
sys     0m0.002s

Unsurprisingly, they are both about the same. The difference in actual
repo size (not to be confused with the already-committed binary files)
should come down once the new bundle2 format is implemented.
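For anyone who wants to reproduce the largefiles side of this
comparison, the extension ships with Mercurial and just needs to be
enabled. A minimal sketch (the file names and the 10MB cutoff below are
only illustrative, not what petsc-dev actually uses):

# enable the bundled largefiles extension in ~/.hgrc (or .hg/hgrc)
[extensions]
largefiles =

# track a binary as a largefile; it is kept out of normal history
$ hg add --large externalpackages/bigtarball.tar.gz
$ hg commit -m "add tarball as a largefile"

# or convert an existing repo, moving files over 10MB out of history
$ hg lfconvert --size 10 petsc-dev petsc-dev-lf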

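And for completeness, the git-fat setup is similarly small. A rough
sketch based on its README (the rsync remote below is a placeholder,
not the one used for the petsc-git numbers above):

# .gitfat at the repo root says where the fat objects live
[rsync]
remote = your.remote-host.org:/share/fat-store

# mark which patterns get replaced by placeholders in history
$ echo '*.tar.gz filter=fat -crlf' >> .gitattributes

# install the clean/smudge filters, then fetch the real contents
$ git fat init
$ git fat pull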