[jira] [Commented] (YARN-6622) Document Docker work as experimental

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160753#comment-16160753
 ] 

Hadoop QA commented on YARN-6622:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6622 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868711/YARN-6622.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 110a5a9d4f4a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 722ee84 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17397/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document Docker work as experimental
> 
>
> Key: YARN-6622
> URL: https://issues.apache.org/jira/browse/YARN-6622
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: documentation
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6622.001.patch
>
>
> We should update the Docker support documentation calling out the Docker work 
> as experimental.






[jira] [Commented] (YARN-7172) ResourceCalculator.fitsIn() should not take a cluster resource parameter

2017-09-10 Thread Sen Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160741#comment-16160741
 ] 

Sen Zhao commented on YARN-7172:


Attached a patch to fix the checkstyle issue.

> ResourceCalculator.fitsIn() should not take a cluster resource parameter
> 
>
> Key: YARN-7172
> URL: https://issues.apache.org/jira/browse/YARN-7172
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sen Zhao
>  Labels: newbie
> Attachments: YARN-7172.001.patch, YARN-7172.002.patch, 
> YARN-7172.003.patch, YARN-7172.004.patch
>
>
> There are numerous calls to {{ClusterNodeTracker.getClusterResource()}} 
> (which involves a lock) to get a value to pass as the cluster resource 
> parameter to {{Resources.fitsIn()}}, but the parameter is (quite reasonably) 
> ignored.  We should remove the parameter.
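
To illustrate the cleanup, a rough sketch of a call site before and after (signatures and variable names are illustrative, not the actual patch):

{code}
// Before (current): the cluster resource is fetched, which involves a lock,
// only for the value to be ignored inside fitsIn().
//   boolean fits = Resources.fitsIn(
//       clusterNodeTracker.getClusterResource(), required, available);

// After (proposed): the ignored parameter is gone, so the call site no
// longer needs getClusterResource() or its lock.
boolean fits = Resources.fitsIn(required, available);
{code}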






[jira] [Commented] (YARN-6426) Compress ZK YARN keys to scale up (especially AppStateData

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160731#comment-16160731
 ] 

Hadoop QA commented on YARN-6426:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6426 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6426 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861550/zkcompression.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17398/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Compress ZK YARN keys to scale up (especially AppStateData
> --
>
> Key: YARN-6426
> URL: https://issues.apache.org/jira/browse/YARN-6426
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Roni Burd
>Assignee: Roni Burd
>  Labels: patch
> Attachments: zkcompression.patch
>
>
> ZK today stores the protobuf files uncompressed. This is not an issue, except 
> that if a customer job has thousands of files, AppStateData will store the 
> user context as a string with multiple URLs, and it is easy to get to 1MB or 
> more. 
> This can put unnecessary strain on ZK and make the process slow. 
> The proposal is to simply compress the protobufs before sending them to ZK.
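
A minimal sketch of the proposed write-path change, assuming plain {{java.util.zip}} (an illustration, not the attached patch):

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public final class ZkStateCompression {
  // Compress the serialized AppStateData protobuf bytes before writing them
  // to the znode; the read path would decompress symmetrically with
  // GZIPInputStream before parsing the protobuf.
  public static byte[] compress(byte[] protoBytes) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
      gzip.write(protoBytes);
    }
    return bos.toByteArray();
  }
}
{code}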






[jira] [Updated] (YARN-6771) Use classloader inside configuration class to make new classes

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6771:
-
Target Version/s: 3.0.0-beta1, 2.8.3  (was: 3.0.0-beta1, 2.8.2)

> Use classloader inside configuration class to make new classes 
> ---
>
> Key: YARN-6771
> URL: https://issues.apache.org/jira/browse/YARN-6771
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Jongyoul Lee
> Attachments: YARN-6771-1.patch, YARN-6771-2.patch, YARN-6771-3.patch, 
> YARN-6771.patch
>
>
> While running {{RpcClientFactoryPBImpl.getClient}}, 
> {{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But when a 
> custom classloader is in use, we have to use {{conf.getClassByName}}, because 
> the custom classloader is already stored in the {{Configuration}} class.
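
In code form, the fix is essentially a one-line change (sketch; {{pbClazzName}} stands in for the class name being resolved):

{code}
// Current (breaks with a custom classloader): localConf is a freshly
// constructed Configuration and knows nothing about the caller's loader.
//   Class<?> clazz = localConf.getClassByName(pbClazzName);

// Proposed: conf is the Configuration passed in by the caller, which already
// carries the custom classloader, so the lookup resolves correctly.
Class<?> clazz = conf.getClassByName(pbClazzName);
{code}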






[jira] [Updated] (YARN-6378) Negative usedResources memory in CapacityScheduler

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6378:
-
Target Version/s: 2.8.3  (was: 2.8.2)

> Negative usedResources memory in CapacityScheduler
> --
>
> Key: YARN-6378
> URL: https://issues.apache.org/jira/browse/YARN-6378
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.6.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
>
> Courtesy of Thomas Nystrand, we found that on two of our clusters configured 
> with the CapacityScheduler, usedResources occasionally becomes negative, 
> e.g.:
> {code}
> 2017-03-15 11:10:09,449 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1487222361993_17177_01 
> container=Container: [ContainerId: container_1487222361993_17177_01_14, 
> NodeId: :27249, NodeHttpAddress: :8042, Resource: 
> , Priority: 2, Token: null, ] queue=: 
> capacity=0.2, absoluteCapacity=0.2, usedResources=, 
> usedCapacity=0.03409091, absoluteUsedCapacity=0.006818182, numApps=1, 
> numContainers=3 clusterResource= type=RACK_LOCAL
> {code}






[jira] [Updated] (YARN-7172) ResourceCalculator.fitsIn() should not take a cluster resource parameter

2017-09-10 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao updated YARN-7172:
---
Attachment: YARN-7172.004.patch

> ResourceCalculator.fitsIn() should not take a cluster resource parameter
> 
>
> Key: YARN-7172
> URL: https://issues.apache.org/jira/browse/YARN-7172
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sen Zhao
>  Labels: newbie
> Attachments: YARN-7172.001.patch, YARN-7172.002.patch, 
> YARN-7172.003.patch, YARN-7172.004.patch
>
>
> There are numerous calls to {{ClusterNodeTracker.getClusterResource()}} 
> (which involves a lock) to get a value to pass as the cluster resource 
> parameter to {{Resources.fitsIn()}}, but the parameter is (quite reasonably) 
> ignored.  We should remove the parameter.






[jira] [Updated] (YARN-6842) Implement a new access type for queue

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6842:
-
Target Version/s: 2.8.3  (was: 2.8.2)

> Implement a new access type for queue
> -
>
> Key: YARN-6842
> URL: https://issues.apache.org/jira/browse/YARN-6842
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
> Attachments: YARN-6842.001.patch, YARN-6842.002.patch, 
> YARN-6842.003.patch
>
>
> At present, when we want to access the applications of a queue, the only 
> option is to become an administrator of the queue.
> But sometimes we only want to authorize someone to view the applications of a 
> queue, without any modify operations.
> The current mechanism has no way to express this, so I will implement a new 
> access type for queues to solve this problem.
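
For illustration only, a hedged sketch of what such an access type could look like ({{VIEW_APPLICATIONS}} is a hypothetical name, not taken from the attached patches):

{code}
public enum QueueACL {
  SUBMIT_APPLICATIONS,  // existing: submit applications to the queue
  ADMINISTER_QUEUE,     // existing: full admin rights, including viewing apps
  VIEW_APPLICATIONS     // proposed: view applications without modify rights
}
{code}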






[jira] [Updated] (YARN-7027) Log aggregation finish time should get logged for trouble shooting.

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-7027:
-
Target Version/s: 2.8.3  (was: 2.8.2)

> Log aggregation finish time should get logged for trouble shooting.
> ---
>
> Key: YARN-7027
> URL: https://issues.apache.org/jira/browse/YARN-7027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation
>Reporter: Junping Du
>Assignee: Junping Du
>
> Now, the RM tracks application log aggregation status in RMApp, and status 
> changes are triggered by NM heartbeats carrying log aggregation reports. Each 
> time a node's log aggregation status changes from an in-progress state 
> (NOT_START, RUNNING, RUNNING_WITH_FAILURE) to a final state (SUCCEEDED, FAILED, 
> TIMEOUT), it triggers an aggregation of the log aggregation status: 
> updateLogAggregationStatus(). The whole process logs very little, so we cannot 
> trace log aggregation problems (delays in log aggregation, etc.) from the RM 
> (or NM) logs. We should add more logging here.
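
As an illustration of the kind of logging proposed (a sketch with hypothetical variable names, not the eventual patch):

{code}
// In updateLogAggregationStatus(): record when a node's report reaches a
// final state, so that delays become traceable from the RM log afterwards.
LOG.info("Log aggregation for " + appId + " on node " + nodeId
    + " moved to final state " + reportState
    + ", finish time=" + System.currentTimeMillis());
{code}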






[jira] [Commented] (YARN-7163) RMContext need not to be injected to webapp and other Always Running services.

2017-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160721#comment-16160721
 ] 

Hudson commented on YARN-7163:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12831 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12831/])
YARN-7163. RMContext need not to be injected to webapp and other Always 
(sunilg: rev 722ee841948db1f246f0056acec9a1ac464fe1f9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestAppPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/ContainerPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMTokens.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebAppFairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRedirectionErrorPage.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMContainerBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestTokenClientRMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/ContainerBlock.java


> RMContext need not to be injected to webapp and other Always Running services.
> --
>
> Key: YARN-7163
> URL: https://issues.apache.org/jira/browse/YARN-7163
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: suspect-1.png, suspect-2.png, YARN-7163.01.patch, 
> YARN-7163.02.patch, YARN-7163.03.patch, YARN-7163.03.patch, 
> YARN-7163-branch-2.01.patch
>
>
> It is observed that the RM crashes with a heap space OOM in a secure cluster 
> (HTTP authentication is Kerberos) when RM HA is enabled. 
> The scenario is: 
> 1. Start the RM in HA secure mode. Let's say RM1 is in active mode.
> 2. Run many applications so that more than 50% of the configured heap space 
> is used. Let's say, if the heap space is 2GB, run applications that occupy 
> 1.5GB of heap space. 
> 3. Switch the RM to standby and bring it back to active! While recovering 
> applications from the state store, the RM crashes with OOM.

[jira] [Updated] (YARN-6862) Nodemanager resource usage metrics sometimes are negative

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6862:
-
Target Version/s: 2.8.3  (was: 2.8.2)

> Nodemanager resource usage metrics sometimes are negative
> -
>
> Key: YARN-6862
> URL: https://issues.apache.org/jira/browse/YARN-6862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.2
>Reporter: YunFan Zhou
>Assignee: YunFan Zhou
>
> When we collect real-time metrics of resource usage in the NM, we found that 
> those values are sometimes invalid.
> For example, the following values were collected at one point:
> "milliVcoresUsed":-5808,
> "currentPmemUsage":-1,
> "currentVmemUsage":-1,
> "cpuUsagePercentPerCore":-968.1026,
> "cpuUsageTotalCoresPercentage":-24.202564,
> "pmemLimit":2147483648,
> "vmemLimit":4509715456
> There are many negative values, so there may be a bug in the NM. 
> We should fix it, because the real-time metrics of the NM are quite important 
> to us.






[jira] [Updated] (YARN-6622) Document Docker work as experimental

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6622:
-
Target Version/s: 3.0.0-beta1, 2.8.3  (was: 2.8.1, 3.0.0-beta1)

> Document Docker work as experimental
> 
>
> Key: YARN-6622
> URL: https://issues.apache.org/jira/browse/YARN-6622
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: documentation
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6622.001.patch
>
>
> We should update the Docker support documentation calling out the Docker work 
> as experimental.






[jira] [Closed] (YARN-6692) Delay pause when container is localizing

2017-09-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh closed YARN-6692.
-

> Delay pause when container is localizing
> 
>
> Key: YARN-6692
> URL: https://issues.apache.org/jira/browse/YARN-6692
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jose Miguel Arreola
>Assignee: Jose Miguel Arreola
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> If a container receives a Pause event while localizing, allow the container 
> to finish localizing and then pause it.






[jira] [Resolved] (YARN-6692) Delay pause when container is localizing

2017-09-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh resolved YARN-6692.
---
Resolution: Invalid

Closing this, since it is not a valid scenario currently.

> Delay pause when container is localizing
> 
>
> Key: YARN-6692
> URL: https://issues.apache.org/jira/browse/YARN-6692
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Jose Miguel Arreola
>Assignee: Jose Miguel Arreola
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> If a container receives a Pause event while localizing, allow the container 
> to finish localizing and then pause it.






[jira] [Updated] (YARN-6426) Compress ZK YARN keys to scale up (especially AppStateData

2017-09-10 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6426:
-
Target Version/s: 2.9.0, 3.0.0-beta1, 2.8.3  (was: 2.9.0, 2.8.1, 
3.0.0-beta1)

> Compress ZK YARN keys to scale up (especially AppStateData
> --
>
> Key: YARN-6426
> URL: https://issues.apache.org/jira/browse/YARN-6426
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Roni Burd
>Assignee: Roni Burd
>  Labels: patch
> Attachments: zkcompression.patch
>
>
> ZK today stores the protobuf files uncompressed. This is not an issue, except 
> that if a customer job has thousands of files, AppStateData will store the 
> user context as a string with multiple URLs, and it is easy to get to 1MB or 
> more. 
> This can put unnecessary strain on ZK and make the process slow. 
> The proposal is to simply compress the protobufs before sending them to ZK.






[jira] [Commented] (YARN-7072) Add a new log aggregation file format controller

2017-09-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160695#comment-16160695
 ] 

Junping Du commented on YARN-7072:
--

I also verified locally. Most tests pass locally as well, and the only 
failure is TestNodeStatusUpdater. It also failed without the patch, so I think 
it is unrelated. We should file a separate JIRA to track the test 
failure on branch-2.
+1. Committing the branch-2 patch.

> Add a new log aggregation file format controller
> 
>
> Key: YARN-7072
> URL: https://issues.apache.org/jira/browse/YARN-7072
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7072-branch-2.001.patch, YARN-7072-trunk.001.patch, 
> YARN-7072.trunk.002.patch, YARN-7072-trunk.003.patch, 
> YARN-7072-trunk.004.patch, YARN-7072-trunk.005.patch, 
> YARN-7072-trunk.006.patch, YARN-7072-trunk.007.patch, 
> YARN-7072-trunk.008.patch
>
>







[jira] [Commented] (YARN-7136) Additional Performance Improvement for Resource Profile Feature

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160696#comment-16160696
 ] 

Hadoop QA commented on YARN-7136:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
11s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
16s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
25s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
19s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 277 unchanged - 8 fixed = 293 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 36s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 31s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m  8s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (YARN-7163) RMContext need not to be injected to webapp and other Always Running services.

2017-09-10 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160688#comment-16160688
 ] 

Sunil G commented on YARN-7163:
---

branch-2 test failures are unrelated to the patch. Committing to trunk/branch-2.

> RMContext need not to be injected to webapp and other Always Running services.
> --
>
> Key: YARN-7163
> URL: https://issues.apache.org/jira/browse/YARN-7163
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: suspect-1.png, suspect-2.png, YARN-7163.01.patch, 
> YARN-7163.02.patch, YARN-7163.03.patch, YARN-7163.03.patch, 
> YARN-7163-branch-2.01.patch
>
>
> It is observed that the RM crashes with a heap space OOM in a secure cluster 
> (HTTP authentication is Kerberos) when RM HA is enabled. 
> The scenario is: 
> 1. Start the RM in HA secure mode. Let's say RM1 is in active mode.
> 2. Run many applications so that more than 50% of the configured heap space 
> is used. Let's say, if the heap space is 2GB, run applications that occupy 
> 1.5GB of heap space. 
> 3. Switch the RM to standby and bring it back to active! While recovering 
> applications from the state store, the RM crashes with OOM. 
> *Note*: This issue happens only when the RM is started as ACTIVE directly 
> (not switched from standby to active during JVM startup).
> A heap dump shows that RMAuthenticationFilter holds 60% of the heap space, and 
> the other 40% is held by RMAppState during recovery from the state store. This 
> exceeds the heap space and crashes with OOM. 






[jira] [Updated] (YARN-7163) RMContext need not to be injected to webapp and other Always Running services.

2017-09-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7163:
--
Summary: RMContext need not to be injected to webapp and other Always 
Running services.  (was: RM crashes with OOM in secured cluster when HA is 
enabled)

> RMContext need not to be injected to webapp and other Always Running services.
> --
>
> Key: YARN-7163
> URL: https://issues.apache.org/jira/browse/YARN-7163
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
> Attachments: suspect-1.png, suspect-2.png, YARN-7163.01.patch, 
> YARN-7163.02.patch, YARN-7163.03.patch, YARN-7163.03.patch, 
> YARN-7163-branch-2.01.patch
>
>
> It is observed that the RM crashes with a heap space OOM in a secure cluster 
> (HTTP authentication is Kerberos) when RM HA is enabled. 
> The scenario is: 
> 1. Start the RM in HA secure mode. Let's say RM1 is in active mode.
> 2. Run many applications so that more than 50% of the configured heap space 
> is used. Let's say, if the heap space is 2GB, run applications that occupy 
> 1.5GB of heap space. 
> 3. Switch the RM to standby and bring it back to active! While recovering 
> applications from the state store, the RM crashes with OOM. 
> *Note*: This issue happens only when the RM is started as ACTIVE directly 
> (not switched from standby to active during JVM startup).
> A heap dump shows that RMAuthenticationFilter holds 60% of the heap space, and 
> the other 40% is held by RMAppState during recovery from the state store. This 
> exceeds the heap space and crashes with OOM. 






[jira] [Commented] (YARN-7172) ResourceCalculator.fitsIn() should not take a cluster resource parameter

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160668#comment-16160668
 ] 

Hadoop QA commented on YARN-7172:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 255 unchanged - 1 fixed = 257 total (was 256) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 0 new + 4462 unchanged - 1 fixed = 4462 total (was 4463) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886332/YARN-7172.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 631713a0e97e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / aa4b6fb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-5887) Policies for choosing which opportunistic containers to kill

2017-09-10 Thread Hugo Kiyodi Oshiro (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160621#comment-16160621
 ] 

Hugo Kiyodi Oshiro commented on YARN-5887:
--

Hi. Considering the approach of killing containers based on job completion:
1. Where would be a good place to pass this information to the NM? 
{{ContainerLaunchContext}}?
2. How should job-completion information be calculated? Is numCompletedContainers / 
numTotalContainers OK?


> Policies for choosing which opportunistic containers to kill
> 
>
> Key: YARN-5887
> URL: https://issues.apache.org/jira/browse/YARN-5887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>
> When a guaranteed container arrives at an NM but there are no resources to 
> start its execution, opportunistic containers will be killed to make space 
> for the guaranteed container.
> At the moment, we kill opportunistic containers in reverse order of arrival 
> (most recently started first). This is not always the right 
> decision. 
> For example, we might want to minimize the number of containers killed: to 
> start a 6GB container, we could kill one 6GB opportunistic container or three 
> 2GB ones. 
> Another example would be to refrain from killing containers of jobs that are 
> very close to completion (we have to pass job completion information to the 
> NM in that case).
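
For the first policy, a hedged sketch of a selector that minimizes the number of kills by taking the largest opportunistic containers first (names are illustrative, and only memory is considered for brevity):

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.apache.hadoop.yarn.api.records.Container;

public final class OpportunisticKillPolicy {
  // Pick the fewest opportunistic containers whose combined memory frees at
  // least neededMb for the incoming guaranteed container.
  static List<Container> selectVictims(List<Container> opportunistic, long neededMb) {
    List<Container> victims = new ArrayList<>();
    opportunistic.sort(Comparator.comparingLong(
        (Container c) -> c.getResource().getMemorySize()).reversed());
    long freedMb = 0;
    for (Container c : opportunistic) {
      if (freedMb >= neededMb) {
        break;
      }
      victims.add(c);
      freedMb += c.getResource().getMemorySize();
    }
    return victims;
  }
}
{code}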






[jira] [Updated] (YARN-7172) ResourceCalculator.fitsIn() should not take a cluster resource parameter

2017-09-10 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao updated YARN-7172:
---
Attachment: YARN-7172.003.patch

> ResourceCalculator.fitsIn() should not take a cluster resource parameter
> 
>
> Key: YARN-7172
> URL: https://issues.apache.org/jira/browse/YARN-7172
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sen Zhao
>  Labels: newbie
> Attachments: YARN-7172.001.patch, YARN-7172.002.patch, 
> YARN-7172.003.patch
>
>
> There are numerous calls to {{ClusterNodeTracker.getClusterResource()}} 
> (which involves a lock) to get a value to pass as the cluster resource 
> parameter to {{Resources.fitsIn()}}, but the parameter is (quite reasonably) 
> ignored.  We should remove the parameter.






[jira] [Commented] (YARN-7172) ResourceCalculator.fitsIn() should not take a cluster resource parameter

2017-09-10 Thread Sen Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160590#comment-16160590
 ] 

Sen Zhao commented on YARN-7172:


Thanks, [~templedf]. I will submit a patch to resolve this.

> ResourceCalculator.fitsIn() should not take a cluster resource parameter
> 
>
> Key: YARN-7172
> URL: https://issues.apache.org/jira/browse/YARN-7172
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Sen Zhao
>  Labels: newbie
> Attachments: YARN-7172.001.patch, YARN-7172.002.patch
>
>
> There are numerous calls to {{ClusterNodeTracker.getClusterResource()}} 
> (which involves a lock) to get a value to pass as the cluster resource 
> parameter to {{Resources.fitsIn()}}, but the parameter is (quite reasonably) 
> ignored.  We should remove the parameter.






[jira] [Updated] (YARN-7183) YARN - State vs Final Status - Discrepancy in 2.8.1

2017-09-10 Thread Anbu Cheeralan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anbu Cheeralan updated YARN-7183:
-
Description: 
The same Spark application results in different behavior between Hadoop 2.8.0 
and 2.8.1.
In the 2.8.0 UI, FinalStatus is "FAILED" and State is "FAILED". 
In the 2.8.1 UI, FinalStatus is "FAILED" and State is "FINISHED".



  was:
The same application results in different behavior between Hadoop 2.8.0 and 2.8.1.
In the 2.8.0 UI, FinalStatus is "FAILED" and State is "FAILED". 
In the 2.8.1 UI, FinalStatus is "FAILED" and State is "FINISHED".




> YARN - State vs Final Status - Discrepancy in 2.8.1
> ---
>
> Key: YARN-7183
> URL: https://issues.apache.org/jira/browse/YARN-7183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.8.1
> Environment: CENT OS
>Reporter: Anbu Cheeralan
>
> The same Spark application results in different behavior between Hadoop 2.8.0 
> and 2.8.1.
> In the 2.8.0 UI, FinalStatus is "FAILED" and State is "FAILED". 
> In the 2.8.1 UI, FinalStatus is "FAILED" and State is "FINISHED".






[jira] [Created] (YARN-7183) YARN - State vs Final Status - Discrepancy in 2.8.1

2017-09-10 Thread Anbu Cheeralan (JIRA)
Anbu Cheeralan created YARN-7183:


 Summary: YARN - State vs Final Status - Discrepancy in 2.8.1
 Key: YARN-7183
 URL: https://issues.apache.org/jira/browse/YARN-7183
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api
Affects Versions: 2.8.1
 Environment: CENT OS
Reporter: Anbu Cheeralan


The same application results in different behavior between Hadoop 2.8.0 and 2.8.1.
In the 2.8.0 UI, FinalStatus is "FAILED" and State is "FAILED". 
In the 2.8.1 UI, FinalStatus is "FAILED" and State is "FINISHED".








[jira] [Updated] (YARN-7013) merge related work for YARN-3926 branch

2017-09-10 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7013:
-
Attachment: YARN-7013.008.patch

Attached ver.008 patch, which includes the YARN-7136 (ver.015) patch.

> merge related work for YARN-3926 branch
> ---
>
> Key: YARN-7013
> URL: https://issues.apache.org/jira/browse/YARN-7013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7013.001.patch, YARN-7013.002.patch, 
> YARN-7013.003.patch, YARN-7013.004.patch, YARN-7013.005.patch, 
> YARN-7013.006.patch, YARN-7013.008.patch
>
>
> To run Jenkins for the whole branch.






[jira] [Updated] (YARN-7136) Additional Performance Improvement for Resource Profile Feature

2017-09-10 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7136:
-
Attachment: YARN-7136.YARN-3926.015.patch

Done, thanks [~templedf]. 

Attached ver.15 patch.

> Additional Performance Improvement for Resource Profile Feature
> ---
>
> Key: YARN-7136
> URL: https://issues.apache.org/jira/browse/YARN-7136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7136.001.patch, YARN-7136.YARN-3926.001.patch, 
> YARN-7136.YARN-3926.002.patch, YARN-7136.YARN-3926.003.patch, 
> YARN-7136.YARN-3926.004.patch, YARN-7136.YARN-3926.005.patch, 
> YARN-7136.YARN-3926.006.patch, YARN-7136.YARN-3926.007.patch, 
> YARN-7136.YARN-3926.008.patch, YARN-7136.YARN-3926.009.patch, 
> YARN-7136.YARN-3926.010.patch, YARN-7136.YARN-3926.011.patch, 
> YARN-7136.YARN-3926.012.patch, YARN-7136.YARN-3926.013.patch, 
> YARN-7136.YARN-3926.014.patch, YARN-7136.YARN-3926.015.patch
>
>
> This JIRA plans to add the following misc perf improvements:
> 1) Use a final int in Resources/ResourceCalculator to cache 
> #known-resource-types. (Significant improvement.)
> 2) Catch Java's ArrayIndexOutOfBoundsException instead of checking array.length 
> every time. (Significant improvement.)
> 3) Avoid setUnit validation (which is a HashSet lookup) when initializing the 
> default Memory/VCores ResourceInformation. (Significant improvement.)
> 4) Avoid unnecessarily looping over the array in Resource#toString/hashCode. 
> (Some improvement.)
> 5) Remove readOnlyResources in BaseResource. (Minor improvement.)
> 6) Remove the MandatoryResources enum; use a final integer instead. (Minor 
> improvement.)
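
As a hedged illustration of item 2) only (a sketch; the real patch and the exception type thrown may differ):

{code}
// Hot path: rely on the JVM's implicit bounds check instead of an explicit
// index check on every call; the exceptional path is rare by assumption.
ResourceInformation getResourceInformation(ResourceInformation[] resources, int index) {
  try {
    return resources[index];
  } catch (ArrayIndexOutOfBoundsException e) {
    throw new IllegalArgumentException("Unknown resource index: " + index, e);
  }
}
{code}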






[jira] [Commented] (YARN-7136) Additional Performance Improvement for Resource Profile Feature

2017-09-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160526#comment-16160526
 ] 

Daniel Templeton commented on YARN-7136:


Looking better.  Not loving the ternaries, though.  In the case of 
{{LightWeightResource}}, the ternary follows an _if_.  Seems awkward not to 
just continue with an _else-if_.  In the case of {{Resource}} the ternary 
actually adds an extra comparison over _if-greater-else-if-less_.  Can we just 
make them regular old _if_ and _else_ statements?
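
For reference, the two styles under discussion look roughly like this (a sketch, not the actual patch code):

{code}
// Ternary form: compact, but easy to misread after an if:
//   return (lhs > rhs) ? 1 : ((lhs < rhs) ? -1 : 0);

// Plain if / else-if form, as suggested:
if (lhs > rhs) {
  return 1;
} else if (lhs < rhs) {
  return -1;
}
return 0;
{code}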

> Additional Performance Improvement for Resource Profile Feature
> ---
>
> Key: YARN-7136
> URL: https://issues.apache.org/jira/browse/YARN-7136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7136.001.patch, YARN-7136.YARN-3926.001.patch, 
> YARN-7136.YARN-3926.002.patch, YARN-7136.YARN-3926.003.patch, 
> YARN-7136.YARN-3926.004.patch, YARN-7136.YARN-3926.005.patch, 
> YARN-7136.YARN-3926.006.patch, YARN-7136.YARN-3926.007.patch, 
> YARN-7136.YARN-3926.008.patch, YARN-7136.YARN-3926.009.patch, 
> YARN-7136.YARN-3926.010.patch, YARN-7136.YARN-3926.011.patch, 
> YARN-7136.YARN-3926.012.patch, YARN-7136.YARN-3926.013.patch, 
> YARN-7136.YARN-3926.014.patch
>
>
> This JIRA plans to add the following misc perf improvements:
> 1) Use a final int in Resources/ResourceCalculator to cache 
> #known-resource-types. (Significant improvement.)
> 2) Catch Java's ArrayIndexOutOfBoundsException instead of checking array.length 
> every time. (Significant improvement.)
> 3) Avoid setUnit validation (which is a HashSet lookup) when initializing the 
> default Memory/VCores ResourceInformation. (Significant improvement.)
> 4) Avoid unnecessarily looping over the array in Resource#toString/hashCode. 
> (Some improvement.)
> 5) Remove readOnlyResources in BaseResource. (Minor improvement.)
> 6) Remove the MandatoryResources enum; use a final integer instead. (Minor 
> improvement.)






[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-09-10 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160392#comment-16160392
 ] 

Manikandan R commented on YARN-65:
--

JUnit failures are not related to this patch.

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch, YARN-65.009.patch, YARN-65.010.patch, YARN-65.011.patch, 
> YARN-65.012.patch, YARN-65.013.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 
> 10000), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.
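
A hedged sketch of the idea (the method and the exact fields to clear are illustrative, not the attached patches):

{code}
// Once the application reaches a final state, drop heavyweight references
// from the stored submission context that are only needed while running.
private void clearUnneededStoredFields() {
  submissionContext.setAMContainerSpec(null);       // launch cmds, env, local resources
  submissionContext.setLogAggregationContext(null); // keep only summary-level fields
}
{code}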






[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160368#comment-16160368
 ] 

Hadoop QA commented on YARN-65:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 443 unchanged - 1 fixed = 443 total (was 444) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.TestAbstractYarnScheduler |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-65 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12886302/YARN-65.013.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e29617026d6c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / aa4b6fb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17393/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17393/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Updated] (YARN-65) Reduce RM app memory footprint once app has completed

2017-09-10 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-65:
-
Attachment: YARN-65.013.patch

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch, YARN-65.009.patch, YARN-65.010.patch, YARN-65.011.patch, 
> YARN-65.012.patch, YARN-65.013.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 10000), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.
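
Below is a minimal sketch of the idea, assuming a hypothetical hook on the 
terminal-state transition; the class and field names are invented for 
illustration and are not the actual YARN-65 patch:

{code:java}
// Illustrative only: the real RMAppImpl holds far more state, and the actual
// patch may release different fields. Assumes the app has just transitioned
// to a terminal state (FINISHED/FAILED/KILLED).
public class RMAppMemoryExample {

    /** Stand-in for the protobuf-backed ApplicationSubmissionContext. */
    static class SubmissionContext {
        byte[] amContainerSpec = new byte[64 * 1024]; // large launch context
    }

    private SubmissionContext submissionContext = new SubmissionContext();
    private String finalStatus;

    /** Keep the small summary fields; drop the heavyweight references. */
    void onCompleted(String status) {
        this.finalStatus = status;
        this.submissionContext = null; // now reclaimable by the GC
    }

    public static void main(String[] args) {
        RMAppMemoryExample app = new RMAppMemoryExample();
        app.onCompleted("SUCCEEDED");
        System.out.println("context released: "
            + (app.submissionContext == null)
            + ", final status: " + app.finalStatus);
    }
}
{code}

The point is simply that the summary fields a completed app still needs for 
the UI and REST responses stay available, while the protobuf-backed state 
becomes garbage-collectable.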



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2017-09-10 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160348#comment-16160348
 ] 

Manikandan R commented on YARN-65:
--

Fixed the {{TestApplicationLifetimeMonitor}} unit test failure and attached a 
patch with the fix.

> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>Assignee: Manikandan R
> Attachments: YARN-65.001.patch, YARN-65.002.patch, YARN-65.003.patch, 
> YARN-65.004.patch, YARN-65.005.patch, YARN-65.006.patch, YARN-65.007.patch, 
> YARN-65.008.patch, YARN-65.009.patch, YARN-65.010.patch, YARN-65.011.patch, 
> YARN-65.012.patch, YARN-65.013.patch
>
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resourcemanager.max-completed-applications, defaults to 10000), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7136) Additional Performance Improvement for Resource Profile Feature

2017-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16160278#comment-16160278
 ] 

Hadoop QA commented on YARN-7136:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
39s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
58s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
0s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 277 unchanged - 8 fixed = 293 total (was 285) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m  1s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 36s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}188m  4s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Updated] (YARN-7136) Additional Performance Improvement for Resource Profile Feature

2017-09-10 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7136:
--
Attachment: YARN-7136.YARN-3926.014.patch

Updating a new patch after fixing the compile issue.

> Additional Performance Improvement for Resource Profile Feature
> ---
>
> Key: YARN-7136
> URL: https://issues.apache.org/jira/browse/YARN-7136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7136.001.patch, YARN-7136.YARN-3926.001.patch, 
> YARN-7136.YARN-3926.002.patch, YARN-7136.YARN-3926.003.patch, 
> YARN-7136.YARN-3926.004.patch, YARN-7136.YARN-3926.005.patch, 
> YARN-7136.YARN-3926.006.patch, YARN-7136.YARN-3926.007.patch, 
> YARN-7136.YARN-3926.008.patch, YARN-7136.YARN-3926.009.patch, 
> YARN-7136.YARN-3926.010.patch, YARN-7136.YARN-3926.011.patch, 
> YARN-7136.YARN-3926.012.patch, YARN-7136.YARN-3926.013.patch, 
> YARN-7136.YARN-3926.014.patch
>
>
> This JIRA plans to add the following misc perf improvements (see the sketch 
> after this list for items 1 and 2):
> 1) Use a final int in Resources/ResourceCalculator to cache 
> #known-resource-types. (Significant improvement).
> 2) Catch Java's ArrayIndexOutOfBoundsException instead of checking 
> array.length every time. (Significant improvement).
> 3) Avoid setUnit validation (which is a HashSet lookup) when initializing the 
> default Memory/VCores ResourceInformation. (Significant improvement).
> 4) Avoid unnecessarily looping over the array in Resource#toString/hashCode. 
> (Some improvement).
> 5) Removed readOnlyResources in BaseResource. (Minor improvement).
> 6) Removed the MandatoryResources enum; use a final integer instead. (Minor 
> improvement).
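
As a rough illustration of items 1 and 2 above, here is a hedged sketch under 
invented names; it is not the actual YARN-3926 code:

{code:java}
// Sketch only: assumes resource types are fixed at startup, as on the branch.
public class ResourceLookupExample {

    // 1) Cache the known resource-type count in a final int so hot paths do
    //    not re-query a collection size on every call.
    private static final int NUM_KNOWN_RESOURCE_TYPES = 2; // memory, vcores

    private final long[] values = new long[NUM_KNOWN_RESOURCE_TYPES];

    // 2) When the index is valid on almost every call, catching the rare
    //    ArrayIndexOutOfBoundsException avoids an explicit bounds check on
    //    the hot path.
    long getValue(int index) {
        try {
            return values[index];
        } catch (ArrayIndexOutOfBoundsException e) {
            throw new IllegalArgumentException(
                "Unknown resource index: " + index, e);
        }
    }

    public static void main(String[] args) {
        ResourceLookupExample r = new ResourceLookupExample();
        System.out.println("vcores value: " + r.getValue(1));
    }
}
{code}

Note that the exception-catch pattern only pays off when the out-of-bounds 
case is genuinely rare, since constructing the exception and filling in its 
stack trace is far more expensive than the bounds check it replaces.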



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org