There has been a longer reply to a similar query on the git list 
(archive). It covers the various 
config variables you may need.
- text pasted at bottom 
  ----- Original Message ----- 
  From: Philip Oakley 
  To: ; 
  Sent: Thursday, July 18, 2013 9:52 PM
  Subject: Re: [git-users] Git is consuming very much RAM


  Have you tried the core.bigFileThreshold config value? It affects how the 
pack / delta compression works (files above the threshold are not 
delta-compressed, which relieves memory pressure). A bit of googling may 
also help. 
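
A minimal sketch of that setting (the 50m threshold is an illustrative 
value, not one from the thread):

    # Store files larger than 50 MB whole, skipping delta compression,
    # which reduces memory pressure during repacking.
    git config core.bigFileThreshold 50m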

  I think there are other similar config values, related to the rest of the 
packing process, that will also affect memory pressure.
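
For instance, these documented pack-related settings all bound memory use 
(the values shown are illustrative, not recommendations):

    # Cap the memory consumed by each delta-search window.
    git config pack.windowMemory 100m
    # A single packing thread keeps only one window's worth of objects in RAM.
    git config pack.threads 1
    # Cap the cache of deltas kept in memory while packing.
    git config pack.deltaCacheSize 50m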

    ----- Original Message ----- 
    Sent: Tuesday, July 16, 2013 9:50 AM
    Subject: Re: [git-users] Git is consuming very much RAM

    As an addition to my answer: the repository is, to 99.99%, just source 
code files.

From: "Jeff King" <>
To: "Matt Schoen" <>
Cc: <>
Sent: Saturday, July 27, 2013 4:48 AM
Subject: Re: limit memory usage on large repositories

> On Wed, Jul 10, 2013 at 05:27:57PM -0500, Matt Schoen wrote:
>> I've been using git for some time now, and host my remote bare
>> repositories on my shared hosting account at  As a
>> protective feature on their shared host setup, they enact a policy
>> that kills processes that consume too much memory.  This happens to
>> git sometimes.
>> By "sometimes" I mean on large repos (>~500MB), when performing
>> operations like git gc and git fsck and, most annoyingly, when doing a
>> clone.  It seems to happen in the pack phase, but I can't be sure
>> exactly.
> Do you know how they measure the memory? One of the problems we've had
> at GitHub in measuring git's memory usage is that git will mmap the
> fairly large packfiles. This can bloat the RSS of the git process. At
> the same time, not counting the map is not quite right, either; it is
> memory the process is using, but it could stand to give up some of it if
> other processes needed it (and that giving up is managed by the kernel,
> not by git). So you end up in a situation where you may have a large RSS
> precisely _because_ there is no memory pressure on the system, which
> leaves the kernel free to leave the mmap'd pages in RAM.
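
On Linux you can observe this distinction yourself; these are standard 
procfs fields, nothing git-specific, and <pid> is whichever git process 
you are inspecting:

    # Overall resident set size, which counts mmap'd pack pages:
    grep VmRSS /proc/<pid>/status
    # Per-mapping breakdown; packfile mappings appear with their own Rss:
    grep -A 3 'pack-' /proc/<pid>/smaps
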
> You can reduce the amount of memory you map at once with
> core.packedGitWindowSize.
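
A sketch of that knob (the 32m value is an arbitrary example; the 64-bit 
default is much larger):

    # Map packfiles in smaller windows so less of the pack is
    # resident in memory at any one time.
    git config core.packedGitWindowSize 32m
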
>> I've messed around with the config options like pack.threads and
>> pack.sizeLimit, and basically anything on the git config manpage that
>> mentions memory.  I limit all of these things to 1 or 0 or 1m when
>> applicable, just to be sure. To be honest, I really don't know what
>> I'm doing ;)
> I assume you did pack.deltaCacheSize, which can take a fair bit of
> memory during packing (or cloning).
> Packing itself takes up a lot, as I think we keep the whole window's
> worth of objects in memory at one time (so 10 by default). If you have
> large objects, that can spike your memory usage for a moment as we keep
> several versions of the large object in memory at once.
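
The window size itself is configurable, which caps how many versions are 
held at once (a sketch; 5 is an arbitrary value below the default of 10):

    # Consider fewer objects per delta-search window (default is 10).
    git config pack.window 5
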
> If you have such large objects that don't delta well, you can unset the
> "delta" gitattribute for them so that git doesn't even try.
>> Oddly enough, I'm having trouble reproducing my issue with anything
>> but git fsck.  Clones were failing in the past, but after a successful
>> git gc, everything seems to be ok(?)
> Memory usage for clone should improve after a gc, as we will mostly be
> reusing deltas from disk instead of trying to find new ones between
> packs.
> -Peff
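
A one-shot repack on the server can combine the bounded-memory settings 
above (these git-repack flags exist; the values are illustrative):

    # Repack everything with a capped delta window and a single thread.
    git repack -a -d --window-memory=100m --threads=1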