[
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15765130#comment-15765130
]
Hudson commented on YARN-4844:
------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11018 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/11018/])
YARN-4844 (Addendum). Change JobStatus(usedMem,reservedMem,neededMem) (jianhe:
rev 523411d69b37d85046bd8b23001c267daac7a108)
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobStatus.java
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/JobClientUnitTest.java
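For context, a minimal sketch of the kind of change the addendum's commit message describes: widening memory-accounting fields from int to long so aggregated values no longer overflow. The class and accessor names below are illustrative placeholders, not the actual JobStatus API.
{code:java}
// Illustrative only: a status-like holder whose memory counters are widened
// from int to long, mirroring the usedMem/reservedMem/neededMem change
// referenced in the commit message above. Not the real JobStatus class.
public class MemoryStatus {
  private final long usedMem;      // previously int; in MB
  private final long reservedMem;  // previously int; in MB
  private final long neededMem;    // previously int; in MB

  public MemoryStatus(long usedMem, long reservedMem, long neededMem) {
    this.usedMem = usedMem;
    this.reservedMem = reservedMem;
    this.neededMem = neededMem;
  }

  public long getUsedMem()     { return usedMem; }
  public long getReservedMem() { return reservedMem; }
  public long getNeededMem()   { return neededMem; }
}
{code}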
> Add getMemorySize/getVirtualCoresSize to o.a.h.y.api.records.Resource
> ---------------------------------------------------------------------
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: api
> Reporter: Wangda Tan
> Assignee: Wangda Tan
> Priority: Blocker
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-4844-branch-2.8.0016_.patch,
> YARN-4844-branch-2.8.addendum.2.patch, YARN-4844-branch-2.addendum.1_.patch,
> YARN-4844-branch-2.addendum.2.patch, YARN-4844.1.patch, YARN-4844.10.patch,
> YARN-4844.11.patch, YARN-4844.12.patch, YARN-4844.13.patch,
> YARN-4844.14.patch, YARN-4844.15.patch, YARN-4844.16.branch-2.patch,
> YARN-4844.16.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch,
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch,
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch,
> YARN-4844.9.branch-2.patch, YARN-4844.addendum.3.patch,
> YARN-4844.addendum.4.patch
>
>
> We currently use int32 for memory. If a cluster has 10k nodes with 210 GB of
> memory each, the total cluster memory becomes negative (see the overflow
> sketch below).
> Another case that overflows int32 even more easily: we add the pending
> resources of all running apps to the cluster's total pending resources. If a
> problematic app requests too many resources (say 1M+ containers of 3 GB
> each), int32 is not enough.
> Even if we cap each app's pending request, we cannot handle the case where
> many running apps each have a capped but still significant amount of pending
> resources.
> So we may need to add getMemoryLong/getVirtualCoreLong to
> o.a.h.y.api.records.Resource.
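The overflow sketch below is an editor's illustration, not code from any of the attached patches: it reproduces the two scenarios in the description with Java int arithmetic and shows how widening to long keeps the totals correct. The node count, per-node memory, and container sizes come from the description; everything else is assumed.
{code:java}
public class ResourceOverflowDemo {
  public static void main(String[] args) {
    // Case 1: 10,000 nodes, 210 GB (215,040 MB) each.
    int nodeMemMb = 210 * 1024;        // 215,040 MB per node
    int nodes = 10_000;
    int totalInt = nodeMemMb * nodes;  // 2,150,400,000 > Integer.MAX_VALUE, wraps negative
    long totalLong = (long) nodeMemMb * nodes;
    System.out.println("int cluster memory (MB):  " + totalInt);   // -2144567296
    System.out.println("long cluster memory (MB): " + totalLong);  // 2150400000

    // Case 2: one app asking for 1M+ containers of 3 GB (3,072 MB) each.
    int containerMemMb = 3 * 1024;
    int containers = 1_000_000;
    int pendingInt = containerMemMb * containers;  // 3,072,000,000, wraps negative
    long pendingLong = (long) containerMemMb * containers;
    System.out.println("int pending memory (MB):  " + pendingInt);   // -1222967296
    System.out.println("long pending memory (MB): " + pendingLong);  // 3072000000
  }
}
{code}
Compiling and running this prints negative int totals for both cases and the correct long totals, which is the motivation for the long-typed getters added by this issue.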
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]