[ https://issues.apache.org/jira/browse/YETUS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16217502#comment-16217502 ]

Allen Wittenauer commented on YETUS-561:
----------------------------------------

Agreed, because it's worse than just knocking over neighbors... I just had 
INFRA reboot about 5 nodes because they were hung.  (The Linux OOM killer is 
effectively modern-day Russian roulette.)  Everything that normally runs on 
'Hadoop'-labeled nodes is severely backed up.  Sorry, world!  

I think the tricky part is going to be figuring out what to set the limit to. I 
was thinking about setting a default, but there are just too many variables to 
make that sane until we know more.  For example, it's tempting to say 1/2 of 
RAM, but if the limit is actually applied per process, that's way, way too 
much.  As the queue empties, I'll have more capacity to experiment under 
real-world conditions and give some recommendations.  For now, just exposing it 
on the command line gives us the basic support we need.
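For anyone who wants to poke at this locally in the meantime, the underlying
mechanism is just docker run's --memory flag; roughly, the plumbing looks
something like this (the variable names and the trailing image/command are
purely illustrative placeholders, not necessarily what the patch does):

    # sketch only: memlimit would come from a new command-line option
    memlimit="4g"
    run_args=()
    if [[ -n "${memlimit}" ]]; then
      # docker enforces the cap via cgroups, so a runaway unit test gets
      # OOM-killed inside the container instead of hanging the whole node
      run_args+=("--memory=${memlimit}")
    fi
    # DOCKER_IMAGE / DOCKER_CMD stand in for whatever test-patch already runs
    docker run --rm "${run_args[@]}" "${DOCKER_IMAGE}" "${DOCKER_CMD[@]}"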

Note to myself and future readers:
http://matthewkwilliams.com/index.php/2016/03/17/docker-cgroups-memory-constraints-and-java-cautionary-tale/
is particularly interesting.
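The gist, for anyone not clicking through: the JVM sizes its default heap from
the host's physical RAM and (at least on older JVMs) ignores the cgroup limit,
so a memory-capped container can still have its Java processes OOM-killed
unless the heap is capped explicitly as well.  Something along these lines,
with values and the image/command purely illustrative:

    # cap the container *and* tell the JVM about it, since the JVM's
    # default heap sizing looks at host RAM, not the cgroup limit
    docker run --rm --memory=4g \
      -e MAVEN_OPTS="-Xmx3g" \
      my-build-image mvn test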

> Ability to limit Docker container's RAM usage
> ---------------------------------------------
>
>                 Key: YETUS-561
>                 URL: https://issues.apache.org/jira/browse/YETUS-561
>             Project: Yetus
>          Issue Type: New Feature
>          Components: Test Patch
>    Affects Versions: 0.6.0
>            Reporter: Allen Wittenauer
>            Assignee: Allen Wittenauer
>            Priority: Critical
>         Attachments: YETUS-561.00.patch
>
>
> Hadoop is blowing up nodes due to unit tests that consume all available RAM.  In an 
> attempt to keep nodes alive, Yetus needs the ability to put an upper limit on 
> the amount of memory a Docker container can use.


