[
https://issues.apache.org/jira/browse/YETUS-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16217523#comment-16217523
]
Allen Wittenauer commented on YETUS-561:
----------------------------------------
bq. Does it make sense to provide some kind of bounded default (4G?) rather
than unlimited?
I sort of mentioned this in my response to Sean, but just to clarify a bit:
* One of my basic goals with Yetus while we're still sub-1.0 is to provide
reasonable defaults that work for the vast majority of people. I don't want it
to be Hadoop, where the out-of-the-box experience is mostly ridiculously bad.
* If RAM limits (per process vs. entire container) are kernel-configuration
based, then setting a default limit means not only weighing one configuration
against another, but also guessing what that limit should actually be. It gets
bad quickly ("let's see, 2 executors, assume 4 unit tests in parallel per
executor, then we need some wiggle room, plus ....")... and everyone is
setting their own param anyway.
So I'm currently opting to punt on the question and let the current behavior
take precedence. :)
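For reference, the kind of kernel-enforced (cgroup-based) cap being discussed is what Docker already exposes via its run flags. A minimal sketch, with illustrative values and a hypothetical image/command (4g is an example, not a proposed Yetus default):

```shell
# Cap the container at 4 GiB of RAM; setting --memory-swap to the same
# value also prevents the container from falling back to swap.
# "my-build-image" and the mvn invocation are placeholders.
docker run --memory=4g --memory-swap=4g my-build-image mvn test
```

The cap applies to the container as a whole, so the per-executor / per-parallel-test arithmetic above is exactly what a user would have to do to pick a sane value for their own setup.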
> Ability to limit Docker container's RAM usage
> ---------------------------------------------
>
> Key: YETUS-561
> URL: https://issues.apache.org/jira/browse/YETUS-561
> Project: Yetus
> Issue Type: New Feature
> Components: Test Patch
> Affects Versions: 0.6.0
> Reporter: Allen Wittenauer
> Assignee: Allen Wittenauer
> Priority: Critical
> Attachments: YETUS-561.00.patch
>
>
> Hadoop is blowing up nodes due to unit tests that consume all of RAM. In an
> attempt to keep nodes alive, Yetus needs the ability to put an upper limit on
> the amount that a Docker container can use.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)