Hi,

[Thorsten Glaser <t...@debian.org>, 2012-11-25 17:27]:
> On a multi-core machine, git's garbage collection, as well as the
> pack compression done on the server side when someone clones a
> repository remotely, normally runs using multiple threads of
> execution.
> 
> That may be fine for your typical setups, but in my cases, I have
> two scenarios where it isn’t:
> 
> ⓐ A machine where I want git to use only, say, 2 of my 4 or 8 cores,
>   as I'm also running some VMs on the box which eat up a lot of CPU
>   and which I don't want to slow down.
>
> ⓑ The server VM which has been given 2 or 3 VCPUs to cope with all
>   the load done by clients, but which is RAM-constrained to only
>   512 or, when lucky, 768 MiB. It previously served only http/https
> and *yuk* Subversion, but now git comes into play, and I've
>   seen the one server box I think about go down *HARD* because git
>   ate up all RAM *and* swap when someone wanted to update their clone
> of a repository after someone else committed… well, a ~100 MiB
> binary file they shouldn't have.

Unfortunately I can't really speak to the git side of things, but both
of these cases sound like standard resource starvation. So why not
address them using the usual OS mechanisms?

If you want to prevent git from sucking up CPU, nice(1) it, and if it
eats too much RAM, use the parent shell's ulimit mechanism.
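For instance, a minimal sketch of that idea, assuming a POSIX shell on
Linux (the 512 MiB cap and niceness of 10 are purely illustrative
values, not recommendations):

```shell
# Run a command at reduced CPU priority and with a capped address space.
# A subshell is used so the ulimit doesn't leak into the calling shell.
run_constrained() {
  (
    ulimit -v 524288    # cap virtual memory at 512 MiB (ulimit -v is in KiB)
    nice -n 10 "$@"     # lower the scheduler priority, then run the command
  )
}

# Stand-in invocation; in practice it would be e.g.: run_constrained git gc
run_constrained uname
```

If git exceeds the memory cap, its allocations fail and it aborts
instead of dragging the whole box into swap; the niceness keeps it from
starving the VMs of CPU.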

Granted, this might also require some changes to git, but wouldn't that
be a simpler and more general approach to solving starvation problems?

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
