Kerin Millar <kerframil <at> fastmail.co.uk> writes:

> The need for the OOM killer stems from the fact that memory can be 
> overcommitted. These articles may prove informative:

> http://lwn.net/Articles/317814/

Yeah, I saw this article. It's dated February 4, 2009. How much has
changed since then in the kernel, its configuration and the userspace
mechanisms? Nothing, everything?


> http://www.oracle.com/technetwork/articles/servers-storage-dev/oom-killer-1911807.html

Nice to know.

> In my case, the most likely trigger - as rare as it is - would be a 
> runaway process that consumes more than its fair share of RAM. 
> Therefore, I make a point of adjusting the score of production-critical 
> applications to ensure that they are less likely to be culled.

OK, I see the manual tools for the OOM killer. Are there any graphical
tools for monitoring, configuring, and controlling the OOM-related
files and target processes? Is all of this performed by hand?
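
For what it's worth, this is the kind of by-hand adjustment I take you
to mean, as a minimal Python sketch (untested; assumes root and a
kernel new enough to expose oom_score_adj, and the process names and
scores are only examples):

#!/usr/bin/env python3
# Minimal sketch: make selected processes less likely to be chosen by
# the OOM killer by writing to /proc/<pid>/oom_score_adj, which accepts
# values from -1000 (effectively never kill) to 1000 (kill first).
import os

PROTECT = {"mysqld": -800, "sshd": -500}   # hypothetical targets/scores

def comm(pid):
    try:
        with open(f"/proc/{pid}/comm") as f:
            return f.read().strip()
    except OSError:
        return None

for pid in filter(str.isdigit, os.listdir("/proc")):
    name = comm(pid)
    if name in PROTECT:
        try:
            with open(f"/proc/{pid}/oom_score_adj", "w") as f:
                f.write(str(PROTECT[name]))
            print(f"{name} ({pid}) -> oom_score_adj {PROTECT[name]}")
        except OSError as e:
            print(f"could not adjust {name} ({pid}): {e}")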


> If your cases are not pathological, you could increase the amount of 
> memory, be it by additional RAM or additional swap [1]. Alternatively, 
> if you are able to precisely control the way in which memory is 
> allocated and can guarantee that it will not be exhausted, you may elect 
> to disable overcommit, though I would not recommend it.

I do not have a problem as such; the topic just keeps popping up in my
clustering research. Many clustering environments have heavy memory
requirements, so memory will eventually have to be monitored, diagnosed
and managed in real time by the cluster software itself, for example as
part of load balancing. These are very new technologies, hence my need
to understand both the legacy and the current issues and solutions. You
cannot always just add resources; once a cluster is set up, you have to
manage resource consumption dynamically, or at least that is what my
current reading suggests.
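
For the record, the overcommit behaviour you describe is exposed
through sysctl knobs under /proc/sys/vm/. A rough, read-only Python
sketch of inspecting the current policy (changing the values would
need root):

#!/usr/bin/env python3
# Rough sketch: show the kernel's overcommit policy and how much memory
# is currently committed. vm.overcommit_memory: 0 = heuristic overcommit
# (the default), 1 = always overcommit, 2 = strict accounting against
# CommitLimit (no overcommit).

def read(path):
    with open(path) as f:
        return f.read().strip()

print("vm.overcommit_memory =", read("/proc/sys/vm/overcommit_memory"))
print("vm.overcommit_ratio  =", read("/proc/sys/vm/overcommit_ratio"))

meminfo = dict(line.split(":", 1)
               for line in read("/proc/meminfo").splitlines())
print("CommitLimit  =", meminfo["CommitLimit"].strip())
print("Committed_AS =", meminfo["Committed_AS"].strip())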


> With NUMA, things may be more complicated because there is the potential 
> for a particular memory node to be exhausted, unless memory interleaving 
> is employed. Indeed, I make a point of using interleaving for MySQL, 
> having gotten the idea from the Twitter fork.

Well, my first cluster is just three AMD FX-8350 machines with 32 GB of
RAM each. Once that is working reasonably well, I'm sure I'll be adding
different (multi)processors to the mix, with different RAM
characteristics. There is *huge interest* in heterogeneous clusters,
including but not limited to GPU/APU hardware. So dynamic, real-time
memory management is critically important for successful clustering.
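
On the NUMA point, this is the sort of per-node monitoring I have in
mind, as a quick Python sketch (assumes a NUMA kernel exposing
/sys/devices/system/node/; on a single-socket FX-8350 box there will
only be a node0):

#!/usr/bin/env python3
# Quick sketch: report free vs. total memory per NUMA node, to spot one
# node being exhausted while the others still have room. Interleaving
# (e.g. starting a service under `numactl --interleave=all`) is one way
# to spread allocations across nodes, as mentioned for MySQL above.
import glob, re

for path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
    node = re.search(r"node(\d+)", path).group(1)
    fields = {}
    with open(path) as f:
        for line in f:
            # lines look like: "Node 0 MemFree:  12345678 kB"
            m = re.match(r"Node \d+ (\w+):\s+(\d+) kB", line)
            if m:
                fields[m.group(1)] = int(m.group(2))
    if "MemTotal" in fields and "MemFree" in fields:
        print(f"node{node}: {fields['MemFree']} kB free"
              f" of {fields['MemTotal']} kB")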
  

> Finally, make sure you are using at least Linux 3.12, because some 
> improvements have been made there [2].

Yep. As for [1], I always set up a few gigabytes of swap and rarely use
it, for critical computations that must be fast. Many cluster folks are
building systems with both SSD and traditional (RAID) HD setups; the
SSD could be partitioned for the cluster and for swap. Lots of
experimentation is ongoing on how best to deploy SSDs alongside maximum
RAM in cluster systems.
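
Along the lines of your footnote [1], adding swap as a file "at a
pinch" is straightforward. An untested sketch (run as root; the path
and size are placeholders, and dd is used because a swap file must not
be sparse):

#!/usr/bin/env python3
# Untested sketch: allocate an extra swap file on the fly. Add an
# /etc/fstab entry separately if it should persist across reboots.
import subprocess

SWAPFILE = "/swapfile-extra"   # placeholder path
SIZE_MB = 4096                 # placeholder size: 4 GiB

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("dd", "if=/dev/zero", f"of={SWAPFILE}", "bs=1M", f"count={SIZE_MB}")
run("chmod", "600", SWAPFILE)
run("mkswap", SWAPFILE)
run("swapon", SWAPFILE)
run("swapon", "-s")            # summary of active swap areas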


Memory management is a primary focus of Apache Spark's (in-memory)
computations. Spark can be used with Python, Java and Scala, so it is
very cool.
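
As an aside, a minimal PySpark sketch of the kind of memory tuning
Spark exposes (assumes pyspark is installed; the memory figure is an
arbitrary example):

#!/usr/bin/env python
# Minimal sketch: Spark's in-memory computation is bounded by
# per-executor memory, so cluster-level memory management shows up
# directly in the job configuration.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("memory-tuning-sketch")
        .set("spark.executor.memory", "4g"))   # heap per executor

sc = SparkContext(conf=conf)

# cache a small RDD in memory and force materialisation
rdd = sc.parallelize(range(1000000)).cache()
print("count:", rdd.count())

sc.stop()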


> --Kerin
> [1] At a pinch, additional swap may be allocated as a file
> [2] https://lwn.net/Articles/562211/#oom

[2] is also good to know.

thx,
James






