[ https://issues.apache.org/jira/browse/YETUS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16218095#comment-16218095 ]

Allen Wittenauer commented on YETUS-561:
----------------------------------------

After even more playing around, the limit really should be set to the total
container size.  I now have a better understanding of what is happening: it
really is a cumulative memory count across every process in the container.
The kernel OOM killer is still Russian roulette within the container, but in
the case of something like a build, there aren't that many processes to
actually pick from to kill off.
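
As a minimal sketch of what that cumulative accounting looks like (the image
name and values here are just for illustration): docker's --memory flag sets
one cgroup limit for the whole container, and the kernel OOM killer picks a
victim inside it when the total is exceeded.

    # cap the entire container at 4GB
    docker run --memory=4g --rm -it ubuntu:16.04 bash
    # inside the container, the cgroup v1 limit is visible:
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # prints 4294967296
    # every process in the container counts against that one number;
    # exceed it and the OOM killer shoots one of them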

So 4GB might be too small a limit.  I'm currently doing Hadoop builds at 5GB,
and it's hilariously slow as things get launched and then killed.  But they
_are_ working.
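
For a usage sketch, and assuming the attached patch adds an option named
--dockermemlimit that maps to `docker run --memory` (that name is an
assumption here, not confirmed), a 5GB cap would look something like:

    # hedged sketch; --dockermemlimit is assumed from the attached patch
    test-patch --docker --dockermemlimit=5g <patch-or-issue>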

> Ability to limit Docker container's RAM usage
> ---------------------------------------------
>
>                 Key: YETUS-561
>                 URL: https://issues.apache.org/jira/browse/YETUS-561
>             Project: Yetus
>          Issue Type: New Feature
>          Components: Test Patch
>    Affects Versions: 0.6.0
>            Reporter: Allen Wittenauer
>            Assignee: Allen Wittenauer
>            Priority: Critical
>         Attachments: YETUS-561.00.patch
>
>
> Hadoop is blowing up nodes due to unit tests that consume all available RAM.
> In an attempt to keep nodes alive, Yetus needs the ability to put an upper
> limit on the amount of RAM that a Docker container can use.



