Hi Jan!

Thanks for your contribution. In your approach, what happens when a few
containers on a node are using "excessive" memory at the same time (so that the
total memory used exceeds the RAM available on the machine)? Do you have
overcommit enabled?

Thanks
Ravi

On Tue, Aug 9, 2016 at 1:31 AM, Jan Lukavský <jan.lukav...@firma.seznam.cz>
wrote:

> Hello community,
>
> I have a question about container resource calculation in nodemanager.
> Some time ago I filed JIRA https://issues.apache.org/jira/browse/YARN-4681,
> which I thought might address our problem of containers being killed
> because of read-only mmapped memory blocks. The JIRA has not been resolved
> yet, but it turned out for us that the patch doesn't solve the problem.
> Some applications (namely Apache Spark) tend to allocate really large
> memory blocks outside the JVM heap (using mmap, but with MAP_PRIVATE), but
> only for short time periods. We solved this by creating a smoothing resource
> calculator, which averages the memory usage of a container over some time
> period (say 5 minutes). This eliminates the problem of a container being
> killed for a short memory consumption peak, but at the same time preserves
> the ability to kill containers that *really* consume an excessive amount of
> memory.
>
> My question is: does this seem like a sound, systematic approach to you, and
> should I post our patch to the community, or am I thinking in the wrong
> direction from the beginning? :)
>
>
> Thanks for reactions,
>
>  Jan
>
>
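For illustration only, here is a minimal sketch of the kind of smoothing
calculator described above, assuming a sliding window of (timestamp, usage)
samples and a fixed per-container limit; the class and parameter names are
hypothetical stand-ins, not the API from YARN-4681 or the actual patch:

import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Illustrative sketch of a "smoothing" memory check: a container is only
 * flagged for killing when its memory usage, averaged over a sliding time
 * window, exceeds its limit, so short peaks inside the window are tolerated.
 * All names here are hypothetical, not the actual YARN or patch API.
 */
public class SmoothedMemoryMonitor {

  /** One (timestamp, bytes used) sample. */
  private static final class Sample {
    final long timeMillis;
    final long bytesUsed;
    Sample(long timeMillis, long bytesUsed) {
      this.timeMillis = timeMillis;
      this.bytesUsed = bytesUsed;
    }
  }

  private final Deque<Sample> window = new ArrayDeque<>();
  private final long windowMillis;   // smoothing window, e.g. 5 minutes
  private final long limitBytes;     // configured container memory limit

  public SmoothedMemoryMonitor(long windowMillis, long limitBytes) {
    this.windowMillis = windowMillis;
    this.limitBytes = limitBytes;
  }

  /**
   * Record the latest usage sample and return true only if the average
   * usage over the window exceeds the limit, i.e. the container should be
   * treated as really over its allocation.
   */
  public boolean recordAndCheck(long nowMillis, long bytesUsed) {
    window.addLast(new Sample(nowMillis, bytesUsed));
    // Drop samples that have fallen out of the smoothing window.
    while (!window.isEmpty()
        && nowMillis - window.peekFirst().timeMillis > windowMillis) {
      window.removeFirst();
    }
    long sum = 0;
    for (Sample s : window) {
      sum += s.bytesUsed;
    }
    long average = sum / window.size();  // window is never empty here
    return average > limitBytes;
  }
}

The monitoring thread would call recordAndCheck on every poll of the
container's process tree, so a short mmap spike raises the windowed average
only in proportion to how long it lasts, while sustained excessive usage still
pushes the average over the limit and gets the container killed.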
