[
https://issues.apache.org/jira/browse/IGNITE-2419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15110539#comment-15110539
]
ASF GitHub Bot commented on IGNITE-2419:
----------------------------------------
GitHub user DoudTechData opened a pull request:
https://github.com/apache/ignite/pull/414
IGNITE-2419 manage memory overhead in resource requests to YARN
Implements a memory overhead property in ClusterProperties.
Uses the property when requesting memory in ApplicationMaster.
Simplifies/factorizes properties loading in ClusterProperties.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/DoudTechData/ignite jira-2419
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/ignite/pull/414.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #414
----
commit 38c17b70a9d7b419e34544d9771ebfc12e534791
Author: Edouard Chevalier <[email protected]>
Date: 2016-01-21T12:33:31Z
implements memory overhead and factorize properties loading.
----
> Ignite on YARN does not handle memory overhead
> ----------------------------------------------
>
> Key: IGNITE-2419
> URL: https://issues.apache.org/jira/browse/IGNITE-2419
> Project: Ignite
> Issue Type: Bug
> Components: hadoop
> Environment: hadoop cluster with YARN
> Reporter: Edouard Chevalier
> Assignee: Edouard Chevalier
> Priority: Critical
> Fix For: 1.6
>
>
> When deploying Ignite nodes with YARN, JVMs are launched with a defined
> amount of memory (the IGNITE_MEMORY_PER_NODE property, transposed to the
> JVM "-Xmx" option), and YARN is told to provide containers of exactly
> that size. But YARN monitors the memory of the overall process, not just
> the heap: a JVM can easily require more memory than the heap (VM and/or
> native overheads, thread overhead, and, in the case of Ignite, possibly
> off-heap data structures). If tasks use all of the heap, the process
> memory will be far more than the heap memory. YARN then decides that the
> node should be killed (and kills it!) and creates another one.
> I have a scenario where tasks require all of the JVM memory, so YARN
> continuously allocates/deallocates containers and the global task never
> finishes.
> My proposal is to implement a property IGNITE_OVERHEADMEMORY_PER_NODE,
> similar to the spark.yarn.executor.memoryOverhead property in Spark (see:
> https://spark.apache.org/docs/latest/running-on-yarn.html#configuration ). I
> can implement it and create a pull request on GitHub.
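For illustration, the container sizing described above could look like the sketch below. This is not the code from pull request #414; the class and method names are hypothetical, and the fallback formula (the larger of 384 MB or 10% of the heap) is an assumption borrowed from Spark's spark.yarn.executor.memoryOverhead default, not from Ignite.

```java
// Hedged sketch: sizing a YARN container request so that it covers
// JVM overhead beyond -Xmx, instead of requesting exactly the heap size.
// MIN_OVERHEAD_MB and the 10% fallback mirror Spark's convention and are
// assumptions here, not Ignite's actual defaults.
public class ContainerMemorySketch {
    static final int MIN_OVERHEAD_MB = 384; // assumed floor, as in Spark

    /**
     * Total container memory (MB) to request from YARN.
     *
     * @param heapMb             heap size, i.e. IGNITE_MEMORY_PER_NODE (-Xmx)
     * @param explicitOverheadMb user-set overhead (e.g. the proposed
     *                           IGNITE_OVERHEADMEMORY_PER_NODE), or 0 to
     *                           fall back to the default formula
     */
    static int containerMemoryMb(int heapMb, int explicitOverheadMb) {
        int overhead = explicitOverheadMb > 0
            ? explicitOverheadMb
            : Math.max(MIN_OVERHEAD_MB, (int) (heapMb * 0.10));
        return heapMb + overhead;
    }

    public static void main(String[] args) {
        // 2048 MB heap, no explicit overhead: 2048 + max(384, 204) = 2432
        System.out.println(containerMemoryMb(2048, 0));
        // An explicit overhead takes precedence: 2048 + 512 = 2560
        System.out.println(containerMemoryMb(2048, 512));
    }
}
```

The key point is that the value passed to YARN's resource request is heap plus overhead, while only the heap portion goes into "-Xmx", so the process has headroom before hitting the container limit.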
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)