> From: Dominik Rauch <dominik.rauch....@gmail.com>

> (c) the estimated upper limits to work with a repository in reasonable time 
> on a "normal" machine (i.e. if my repository reaches 20GB and all of a 
> sudden it takes five minutes per commit, etc.)

Most cases of this sort of behavior come from "thrashing": the
program's memory demand exceeds the available physical memory, so the
system spends its time swapping pages instead of doing work.  You can
therefore look for these problems in advance by asking "What
situations cause Git to consume a great deal of memory?"
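The usual answer is repacking, where Git searches for deltas across
many objects at once.  Git has knobs to cap that; these are real pack
options, but the particular values below are only illustrative and
should be tuned to the machine:

[pack]
        # With one thread, the window memory budget below is not
        # multiplied per CPU core.
        threads = 1
        # Upper bound on the delta-search window memory per thread.
        windowMemory = 256m
        # Upper bound on the in-memory delta cache.
        deltaCacheSize = 128m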

> From: Konstantin Khomoutov <flatw...@users.sourceforge.net>

> 1) Git always compresses objects it writes; and after a certain
>    threshold it compacts "loose" object files into the so-called
>    "packfiles" which are big indexed archives.
> 
>    What matters is that all these [de]compression operations are
>    performed "in core" -- that is, a file is slurped in, operated upon
>    then written out.  So you ought to have enough free physical memory
>    to do all of that.

However, you can limit the size of the pack files Git generates via
configuration:

[pack]
        packSizeLimit = 99m
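
The same setting can be applied from the command line; with the limit
in place, a repack simply splits the output into multiple pack files,
each no larger than the limit:

        git config pack.packSizeLimit 99m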

There is also a way to keep Git from attempting delta compression on
files above a certain size: the core.bigFileThreshold setting.  Files
larger than the threshold are still stored deflated, but Git will not
try to delta-compress or diff them.
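A minimal sketch; Git's documented default is 512m, and the lower
value here is only an example:

[core]
        # Files larger than this are stored zlib-deflated but are
        # never delta-compressed (example value; the default is 512m).
        bigFileThreshold = 50m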

Together, those settings should eliminate most of the situations where
Git attempts to consume excessive memory.

Dale
