https://bugzilla.redhat.com/show_bug.cgi?id=1623508
--- Comment #4 from Yaniv Kaul <[email protected]> ---

(In reply to Nigel Babu from comment #2)
> Root cause of this is that we've run more jobs in the last 2 weeks than we
> normally do. This has blown away the space estimates that we had. For now
> I've deleted archives older than 2 weeks.
>
> We need to do some clean up on that server so we have more free space. Once
> that's done we'll attach more space. Leaving the bug open to track the
> increased space.

I wonder if VDO can help...

I also think we need to look at shallow cloning:

[ykaul@ykaul tmp]$ time git clone ssh://[email protected]/glusterfs
Cloning into 'glusterfs'...
remote: Counting objects: 2933, done
remote: Finding sources: 100% (71/71)
remote: Total 165164 (delta 0), reused 165119 (delta 0)
Receiving objects: 100% (165164/165164), 89.17 MiB | 2.43 MiB/s, done.
Resolving deltas: 100% (102537/102537), done.

real    0m52.042s
user    0m25.482s
sys     0m1.876s

[ykaul@ykaul tmp]$ du -ch glusterfs |grep total
124M    total

[ykaul@ykaul tmp]$ ls -lR glusterfs | wc -l
3764

[ykaul@ykaul tmp]$ time git clone --depth 1 ssh://[email protected]/glusterfs
Cloning into 'glusterfs'...
remote: Counting objects: 2486, done
remote: Finding sources: 100% (2486/2486)
remote: Total 2486 (delta 86), reused 1325 (delta 86)
Receiving objects: 100% (2486/2486), 4.50 MiB | 1.56 MiB/s, done.
Resolving deltas: 100% (86/86), done.

real    0m10.380s
user    0m0.603s
sys     0m0.352s

[ykaul@ykaul tmp]$ du -ch glusterfs |grep total
35M     total

[ykaul@ykaul tmp]$ ls -lR glusterfs | wc -l
3764

--
You are receiving this mail because:
You are on the CC list for the bug.
Unsubscribe from this bug https://bugzilla.redhat.com/token.cgi?t=8I7o15BjKr&a=cc_unsubscribe
_______________________________________________
Gluster-infra mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-infra
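For anyone who wants to reproduce the effect locally without access to the Gerrit server above, here is a minimal sketch: it builds a throwaway repository with two commits and shallow-clones it with `--depth 1`, showing that the clone carries only the tip commit. The repo path, identity settings, and file names are invented for the demo; note that `--depth` needs a `file://` URL (or a real transport) because plain local-path clones ignore it.

```shell
set -e

# Throwaway repo with two commits (all names/paths here are hypothetical).
tmp=$(mktemp -d)
git init -q "$tmp/src"
cd "$tmp/src"
git config user.email "[email protected]"
git config user.name "demo"
echo one > f; git add f; git commit -qm "first"
echo two > f; git commit -qam "second"

# --depth is honored over file:// (a plain local path would ignore it).
cd "$tmp"
git clone -q --depth 1 "file://$tmp/src" shallow
cd shallow

# A depth-1 clone has exactly one reachable commit.
count=$(git rev-list --count HEAD)
echo "commits in shallow clone: $count"
```

If a CI job later turns out to need full history (e.g. for `git describe` or bisecting), the shallow clone can be deepened in place with `git fetch --unshallow` instead of re-cloning.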
