[jira] [Created] (YARN-4387) Fix FairScheduler log message

2015-11-24 Thread Xin Wang (JIRA)
Xin Wang created YARN-4387:
--

 Summary: Fix FairScheduler log message
 Key: YARN-4387
 URL: https://issues.apache.org/jira/browse/YARN-4387
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 2.7.1
Reporter: Xin Wang
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix FairScheduler log message

2015-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023962#comment-15023962
 ] 

ASF GitHub Bot commented on YARN-4387:
--

GitHub user vesense opened a pull request:

https://github.com/apache/hadoop/pull/57

[YARN-4387] Fix FairScheduler log message



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/hadoop patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/57.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #57


commit 26e1ab545ce0f16508e97237e5750ac9b4602069
Author: Xin Wang 
Date:   2015-11-24T08:09:46Z

Fix FairScheduler log message




> Fix FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix FairScheduler log message

2015-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023978#comment-15023978
 ] 

ASF GitHub Bot commented on YARN-4387:
--

Github user vesense closed the pull request at:

https://github.com/apache/hadoop/pull/57


> Fix FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix FairScheduler log message

2015-11-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023977#comment-15023977
 ] 

ASF GitHub Bot commented on YARN-4387:
--

Github user vesense commented on the pull request:

https://github.com/apache/hadoop/pull/57#issuecomment-159192789
  
Reported the issue to JIRA: https://issues.apache.org/jira/browse/YARN-4387
So, closing this PR.


> Fix FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix FairScheduler log message

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023984#comment-15023984
 ] 

Tsuyoshi Ozawa commented on YARN-4387:
--

+1, checking this in.

> Fix FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-4387:
-
Summary: Fix typo in FairScheduler log message  (was: Fix FairScheduler log 
message)

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-4387:
-
Attachment: YARN-4387.001.patch

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-4387:
-
Hadoop Flags: Reviewed

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15023999#comment-15023999
 ] 

Hudson commented on YARN-3980:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8871 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8871/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* hadoop-yarn-project/CHANGES.txt


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 
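
A minimal, hypothetical Java sketch of the plumbing idea described above. The class and method names are illustrative stand-ins, not the actual YARN-3980 classes (the real change touches RMNode, SchedulerNode, and ResourceUtilizationPBImpl, per the commit file list above); it only shows a per-node utilization value carried from a heartbeat into a scheduler-side view.
{code}
// Hypothetical stand-ins, not the actual YARN classes.
public class UtilizationPlumbingSketch {

  /** Simplified analogue of a per-node resource-utilization record. */
  static final class NodeUtilization {
    final int physicalMemoryMB;
    final float cpu; // fraction of vcores in use
    NodeUtilization(int physicalMemoryMB, float cpu) {
      this.physicalMemoryMB = physicalMemoryMB;
      this.cpu = cpu;
    }
  }

  /** Scheduler-side node view that caches the latest reported utilization. */
  static final class SchedulerNodeView {
    private volatile NodeUtilization lastReported;

    // Called when the RM forwards a node heartbeat (node update) to the scheduler.
    void updateUtilization(NodeUtilization u) {
      this.lastReported = u;
    }

    NodeUtilization getUtilization() {
      return lastReported;
    }
  }

  public static void main(String[] args) {
    SchedulerNodeView node = new SchedulerNodeView();
    node.updateUtilization(new NodeUtilization(6144, 0.42f));
    System.out.println("node reports " + node.getUtilization().physicalMemoryMB + " MB in use");
  }
}
{code}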



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024015#comment-15024015
 ] 

Hudson commented on YARN-4367:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8872 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8872/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}
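
For reference, a hypothetical, self-contained illustration of the kind of guard that turns a silent NullPointerException like the one above into an actionable error when a webapp resource lookup fails. It is not the actual YARN-4367 fix, and the "html" resource name is an assumption.
{code}
import java.net.URL;

public class SlsWebAppResourceCheck {
  public static void main(String[] args) {
    // If a classpath resource lookup returns null and is dereferenced later,
    // the failure surfaces as a bare NullPointerException, as in the report above.
    URL root = SlsWebAppResourceCheck.class.getClassLoader().getResource("html");
    if (root == null) {
      throw new IllegalStateException(
          "SLS webapp resources not found on the classpath (expected an 'html' directory)");
    }
    System.out.println("Serving SLS webapp from " + root);
  }
}
{code}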



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4349) Support CallerContext in YARN

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024024#comment-15024024
 ] 

Hudson commented on YARN-4349:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #635 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/635/])
YARN-4349. Support CallerContext in YARN. Contributed by Wangda Tan (jianhe: 
rev 8676a118a12165ae5a8b80a2a4596c133471ebc1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationStateDataPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/metrics/ApplicationMetricsConstants.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMAuditLogger.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/SystemMetricsPublisher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/proto/yarn_server_resourcemanager_recovery.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestSystemMetricsPublisher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ToolRunner.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationCreatedEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationStateData.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java


> Supp

[jira] [Commented] (YARN-4371) "yarn application -kill" should take multiple application ids

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024028#comment-15024028
 ] 

Tsuyoshi Ozawa commented on YARN-4371:
--

[~sunilg] thank you for the initial patch. I looked over the patch and have a 
comment about the design. 

In the patch, a new RPC, {{killApplication(List<ApplicationId> applicationIds)}}, is 
added. IMHO, it's better to call {{killApplication(ApplicationId applicationId)}} 
multiple times, since it's simpler and killApplication is not called very often. 
Could you update the patch accordingly?
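
A minimal sketch of the loop-based approach suggested here, assuming the client already has the single-application {{killApplication(ApplicationId)}} call (as YarnClient does); the helper name is illustrative, not part of the patch.
{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class KillApplicationsLoop {
  /** Kills each application by reusing the existing single-application RPC. */
  static void killAll(YarnClient client, List<ApplicationId> ids)
      throws YarnException, IOException {
    for (ApplicationId id : ids) {
      client.killApplication(id); // one RPC per application id, no new batch RPC
    }
  }
}
{code}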




> "yarn application -kill" should take multiple application ids
> -
>
> Key: YARN-4371
> URL: https://issues.apache.org/jira/browse/YARN-4371
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Sunil G
> Attachments: 0001-YARN-4371.patch
>
>
> Currently we cannot pass multiple applications to the "yarn application -kill" 
> command. The command should take multiple application ids at the same time. 
> Entries should be separated by whitespace, like:
> {code}
> yarn application -kill application_1234_0001 application_1234_0007 
> application_1234_0012
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4371) "yarn application -kill" should take multiple application ids

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024035#comment-15024035
 ] 

Sunil G commented on YARN-4371:
---

Thank you. Sure, I had a similar thought at first, then considered reducing the 
number of calls to improve response time. But as you mentioned, and from looking at 
the code, that approach adds more complexity. I'll upload a patch that loops and 
kills each application one by one.

I'll also try to see whether we can improve the client-side sleep that waits for the 
kill to finish when multiple apps are to be killed.

> "yarn application -kill" should take multiple application ids
> -
>
> Key: YARN-4371
> URL: https://issues.apache.org/jira/browse/YARN-4371
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>Assignee: Sunil G
> Attachments: 0001-YARN-4371.patch
>
>
> Currently we cannot pass multiple applications to the "yarn application -kill" 
> command. The command should take multiple application ids at the same time. 
> Entries should be separated by whitespace, like:
> {code}
> yarn application -kill application_1234_0001 application_1234_0007 
> application_1234_0012
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024037#comment-15024037
 ] 

Akira AJISAKA commented on YARN-4298:
-

+1 for the v1 patch. I manually tested the patch with the following commands.
{noformat}
$ mvn install -DskipTests
$ cd hadoop-yarn-project/hadoop-yarn
$ mvn findbugs:findbugs
{noformat}
There are no warnings in the hadoop-yarn-project module after applying the patch. 
Jenkins reports findbugs warnings that are present even before the patch is applied; 
I'll file a JIRA for that issue.

> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}
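
As a generic illustration of how this class of findbugs warning (inconsistent synchronization) is usually cleared, the hypothetical snippet below synchronizes every access to the guarded fields so no code path reads them unlocked; it is not the actual AllocateResponsePBImpl change.
{code}
// Hypothetical example, not the AllocateResponsePBImpl code.
public class SynchronizedRecord {
  private Object builder;   // stand-ins for the builder/proto/viaProto fields
  private boolean viaProto;

  public synchronized void setBuilder(Object b) {
    this.builder = b;
    this.viaProto = false;
  }

  // An unsynchronized read here is what findbugs flags as
  // "inconsistent synchronization ... locked 9x% of time".
  public synchronized Object getBuilder() {
    return builder;
  }

  public synchronized boolean isViaProto() {
    return viaProto;
  }
}
{code}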



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024044#comment-15024044
 ] 

Sunil G commented on YARN-4298:
---

Thanks Akira. Yes, manual runs are passing fine. 

> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4298:

Target Version/s: 2.8.0

> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Xin Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024056#comment-15024056
 ] 

Xin Wang commented on YARN-4387:


Thanks.

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Xin Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024058#comment-15024058
 ] 

Xin Wang commented on YARN-4387:


Thanks.

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024057#comment-15024057
 ] 

Akira AJISAKA commented on YARN-4298:
-

Thanks [~sunilg] for creating and testing the patch. Filed YETUS-207.

> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024063#comment-15024063
 ] 

Hudson commented on YARN-3980:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2653 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2653/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* hadoop-yarn-project/CHANGES.txt


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024099#comment-15024099
 ] 

Hudson commented on YARN-3980:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #711 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/711/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024100#comment-15024100
 ] 

Hudson commented on YARN-4367:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #711 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/711/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024116#comment-15024116
 ] 

Akira AJISAKA commented on YARN-4298:
-

bq. hadoop.yarn.webapp.TestWebApp
The test was broken by HADOOP-12584. That patch has since been reverted, so the test 
no longer fails.

> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024164#comment-15024164
 ] 

Hudson commented on YARN-3980:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #722 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/722/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-4387:
-
Target Version/s: 2.8.0

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Priority: Minor
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated YARN-4387:
-
Assignee: Xin Wang

> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024191#comment-15024191
 ] 

Hudson commented on YARN-4298:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8873 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8873/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4349) Support CallerContext in YARN

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024210#comment-15024210
 ] 

Hudson commented on YARN-4349:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2573 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2573/])
YARN-4349. Support CallerContext in YARN. Contributed by Wangda Tan (jianhe: 
rev 8676a118a12165ae5a8b80a2a4596c133471ebc1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/SystemMetricsPublisher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMAuditLogger.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/MockAsm.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/metrics/ApplicationMetricsConstants.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/ApplicationCreatedEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestSystemMetricsPublisher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStoreTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/impl/pb/ApplicationStateDataPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/proto/yarn_server_resourcemanager_recovery.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationStateData.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ToolRunner.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java


> Support Caller

[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024215#comment-15024215
 ] 

Hudson commented on YARN-4367:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2654 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2654/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024216#comment-15024216
 ] 

Hudson commented on YARN-4298:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2654 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2654/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2934) Improve handling of container's stderr

2015-11-24 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024218#comment-15024218
 ] 

Naganarasimha G R commented on YARN-2934:
-

Hi [~varun_saxena], [~vvasudev], [~rohithsharma] and others on the watchers list, 
can you take a look at the latest patch?

> Improve handling of container's stderr 
> ---
>
> Key: YARN-2934
> URL: https://issues.apache.org/jira/browse/YARN-2934
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Gera Shegalov
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2934.v1.001.patch, YARN-2934.v1.002.patch, 
> YARN-2934.v1.003.patch, YARN-2934.v1.004.patch, YARN-2934.v1.005.patch
>
>
> Most YARN applications redirect stderr to some file. That's why, when a 
> container launch fails with {{ExitCodeException}}, the message is empty.
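
For context, a minimal hypothetical sketch of the improvement being discussed 
(class and method names are assumed here, not taken from the attached patches): 
on a launch failure, tail the container's stderr file and append it to the 
diagnostics instead of surfacing an empty {{ExitCodeException}} message.
{code}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

final class StderrTailSketch {
  // Read at most maxBytes from the end of the container's stderr file so the
  // tail can be appended to the container diagnostics on launch failure.
  static String tail(String stderrPath, int maxBytes) throws IOException {
    try (RandomAccessFile f = new RandomAccessFile(stderrPath, "r")) {
      long start = Math.max(0, f.length() - maxBytes);
      byte[] buf = new byte[(int) (f.length() - start)];
      f.seek(start);
      f.readFully(buf);
      return new String(buf, StandardCharsets.UTF_8);
    }
  }
}
{code}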



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024226#comment-15024226
 ] 

Hudson commented on YARN-3980:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1444 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1444/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 
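
For illustration, a hypothetical sketch of the plumbing idea (class and method 
names are assumed; the committed patch is authoritative): the scheduler-side 
view of a node caches the utilization reported in the latest heartbeat so 
scheduling policies can consult it.
{code}
import org.apache.hadoop.yarn.server.api.records.ResourceUtilization;

class SchedulerNodeUtilizationSketch {
  // Latest utilization figures reported by the NM heartbeat for this node.
  private volatile ResourceUtilization containersUtilization;
  private volatile ResourceUtilization nodeUtilization;

  // Called on the RM side when a node status update arrives.
  void updateFromHeartbeat(ResourceUtilization containers,
      ResourceUtilization node) {
    this.containersUtilization = containers;
    this.nodeUtilization = node;
  }

  ResourceUtilization getAggregatedContainersUtilization() {
    return containersUtilization;
  }

  ResourceUtilization getNodeUtilization() {
    return nodeUtilization;
  }
}
{code}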



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024227#comment-15024227
 ] 

Hudson commented on YARN-4367:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1444 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1444/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024244#comment-15024244
 ] 

Hudson commented on YARN-4298:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #712 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/712/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* hadoop-yarn-project/CHANGES.txt


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024267#comment-15024267
 ] 

Hadoop QA commented on YARN-4387:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 31s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 13s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_85 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Issue | YARN-4387 |
| GITHUB PR | https://github.com/apache/hadoop/pull/57 |
| Optional

[jira] [Commented] (YARN-4306) Test failure: TestClientRMTokens

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024298#comment-15024298
 ] 

Tsuyoshi Ozawa commented on YARN-4306:
--

This problem still occurs on trunk - [~sunilg], could you take a look?

> Test failure: TestClientRMTokens
> 
>
> Key: YARN-4306
> URL: https://issues.apache.org/jira/browse/YARN-4306
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Sunil G
>Assignee: Sunil G
>
> Tests are failing locally as well. As part of the HADOOP-12321 Jenkins run, 
> I see the same error:
> {noformat}testShortCircuitRenewCancelDifferentHostSamePort(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 0.638 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:363)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancelDifferentHostSamePort(TestClientRMTokens.java:316)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3946) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in CS

2015-11-24 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3946:

Attachment: YARN-3946.v1.004.patch

Hi [~wangda],
  The *TestAMAuthorization and TestClientRMTokens* failures are not related to 
this issue, and there are already JIRAs addressing those test failures. 
{{TestApplicationLimitsByPartition}}, however, is related to this patch, so I 
have corrected it and also covered one more case: when an application is not 
assigned to a node, the diagnostics now show the node information and the 
reason.

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> CS
> 
>
> Key: YARN-3946
> URL: https://issues.apache.org/jira/browse/YARN-3946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.6.0
>Reporter: Sumit Nigam
>Assignee: Naganarasimha G R
> Attachments: 3946WebImages.zip, YARN-3946.v1.001.patch, 
> YARN-3946.v1.002.patch, YARN-3946.v1.003.Images.zip, YARN-3946.v1.003.patch, 
> YARN-3946.v1.004.patch
>
>
> Currently there is no direct way to get the exact reason why a submitted app 
> is still in ACCEPTED state. It should be possible to find out through the RM 
> REST API which aspect is not being met - say, queue limits being reached, 
> core/memory requirements not being met, the AM limit being reached, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024306#comment-15024306
 ] 

Hudson commented on YARN-4387:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8874 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8874/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* hadoop-yarn-project/CHANGES.txt


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3127) Avoid timeline events during RM recovery or restart

2015-11-24 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3127:

Description: 
1.Start RM with HA and ATS configured and run some yarn applications
2.Once applications are finished successfully, start the timeline server
3.Now fail over HA from active to standby or restart the node

ATS events for the applications already existing in ATS are resent which is not 
required.


  was:
1.Start RM with HA and ATS configured and run some yarn applications
2.Once applications are finished successfully, start the timeline server
3.Now fail over HA from active to standby
4.Access timeline server URL :/applicationhistory

//Note: earlier an exception was thrown when this was accessed. 
Incomplete information is shown in the ATS web UI, i.e. attempt, container and 
other information is not displayed.

Also, even if the timeline server is started with the RM, on RM restart/recovery 
ATS events for the applications already existing in ATS are resent, which is not 
required.



> Avoid timeline events during RM recovery or restart
> ---
>
> Key: YARN-3127
> URL: https://issues.apache.org/jira/browse/YARN-3127
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineserver
>Affects Versions: 2.6.0, 2.7.1
> Environment: RM HA with ATS
>Reporter: Bibin A Chundatt
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: AppTransition.png, YARN-3127.20150213-1.patch, 
> YARN-3127.20150329-1.patch, YARN-3127.20150624-1.patch, 
> YARN-3127.20151123-1.patch
>
>
> 1.Start RM with HA and ATS configured and run some yarn applications
> 2.Once applications are finished successfully, start the timeline server
> 3.Now fail over HA from active to standby or restart the node
> ATS events for the applications already existing in ATS are resent which is 
> not required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024329#comment-15024329
 ] 

Hudson commented on YARN-4367:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #723 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/723/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024330#comment-15024330
 ] 

Hudson commented on YARN-4298:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #723 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/723/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4306) Test failure: TestClientRMTokens

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024335#comment-15024335
 ] 

Sunil G commented on YARN-4306:
---

Yes. I am looking into this. Will update shortly. Thank you. 

> Test failure: TestClientRMTokens
> 
>
> Key: YARN-4306
> URL: https://issues.apache.org/jira/browse/YARN-4306
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Sunil G
>Assignee: Sunil G
>
> Tests are failing locally as well. As part of the HADOOP-12321 Jenkins run, 
> I see the same error:
> {noformat}testShortCircuitRenewCancelDifferentHostSamePort(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 0.638 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:363)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancelDifferentHostSamePort(TestClientRMTokens.java:316)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024341#comment-15024341
 ] 

Hudson commented on YARN-4387:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #713 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/713/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* hadoop-yarn-project/CHANGES.txt


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4384) updateNodeResource CLI should not accept negative values for resource

2015-11-24 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024356#comment-15024356
 ] 

Junping Du commented on YARN-4384:
--

Test failures are not related; they are tracked by YARN-4351.

> updateNodeResource CLI should not accept negative values for resource
> -
>
> Key: YARN-4384
> URL: https://issues.apache.org/jira/browse/YARN-4384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Junping Du
> Fix For: 2.8.0
>
> Attachments: YARN-4384.patch
>
>
> updateNodeResource CLI should not accept negative values for MemSize and 
> vCores.
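
As a purely illustrative, hypothetical sketch (not the attached YARN-4384.patch; 
the class and parameter names below are assumed), the CLI-side check could look 
like this:
{code}
final class UpdateNodeResourceArgCheck {
  // Reject negative memory or vcore values before any request is sent to the RM.
  static void validate(int memSizeMB, int vCores) {
    if (memSizeMB < 0 || vCores < 0) {
      throw new IllegalArgumentException("MemSize and vCores must be"
          + " non-negative: memSize=" + memSizeMB + "MB, vCores=" + vCores);
    }
  }
}
{code}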



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024358#comment-15024358
 ] 

Hudson commented on YARN-4387:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #724 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/724/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024380#comment-15024380
 ] 

Hudson commented on YARN-4298:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1445 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1445/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* hadoop-yarn-project/CHANGES.txt


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3862) Decide which contents to retrieve and send back in response in TimelineReader

2015-11-24 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024390#comment-15024390
 ] 

Varun Saxena commented on YARN-3862:


{quote}
(TimelineFilterUtils.java)
createHBaseColQualPrefixFilter(): this is still trying to compute the column 
prefix by hand. The main point of introducing getColumnPrefixBytes() on 
ColumnPrefix was to avoid doing this for confs and metrics. Can we rework the 
signatures of createHBaseFilterList() so that we can rely on 
ColumnPrefix.getColumnPrefixBytes()? Ideally all computations of qualifier 
bytes should go through ColumnPrefix.getColumnPrefixBytes().
{quote}
What we are trying to do here is take the prefixes coming in filters from the 
client and match them (prefix match) against the column qualifiers.
We store config and metric names directly as column qualifiers without any 
prefix (except in the flow run prefix table), so there is no fixed column prefix 
for configs and metrics anyway.
Let us say we have column qualifiers (in the config column family) such as 
mapreduce.map.java.opts, mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, 
etc.
Now a user may want to query all the map-related configurations and may send 
{{mapreduce.map}} as the prefix, but they can send an invalid prefix like 
{{mapreduce_map}} as well. So the prefixes in createHBaseColQualPrefixFilter() 
can be anything and cannot be fetched via a call to 
ColumnPrefix.getColumnPrefixBytes().
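To illustrate the point, a minimal hypothetical sketch (the surrounding class 
and method names are assumed, not the actual TimelineFilterUtils code) of 
turning client-supplied prefixes into plain HBase column-prefix filters over the 
raw qualifiers:
{code}
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.util.Bytes;

public final class ColQualPrefixFilterSketch {
  // The prefix bytes come straight from the client string (e.g. "mapreduce.map"),
  // not from ColumnPrefix.getColumnPrefixBytes(), because config/metric names are
  // stored as qualifiers with no fixed prefix.
  public static FilterList createPrefixFilterList(String... clientPrefixes) {
    FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    for (String prefix : clientPrefixes) {
      list.addFilter(new ColumnPrefixFilter(Bytes.toBytes(prefix)));
    }
    return list;
  }
}
{code}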

bq. I'm not too sure about the name; for other tests we basically combined the 
reader and writer tests. Thoughts on how to make this best fit into the 
existing tests?
OK. Maybe we can move these tests to TestHBaseTimelineStorage. Let me see.

bq. I keep confusing configFilters and confs. 
Maybe confs and metrics can be renamed to configsToRetrieve and 
metricsToRetrieve respectively. Thoughts?

bq. On a related note, this is probably outside the scope of this JIRA, but I 
see that the configFilter and metricFilter are applied on the client-side.
Yes. This will be handled in YARN-3863. However, event filters, relatesTo and 
isRelatedTo still need to be matched at client side because of the way these 
values are stored in our tables. We can discuss this though.

bq. l.156: Why do we need to check if configFilters == null? 
This will be removed in YARN-3863. It is done because we need to fetch the 
configs if we have to match them on the client side (as of now, until YARN-3863 
goes in). However, we should probably fetch all configs irrespective of the 
confs field if the match has to be done on the client side; that is missed in 
this patch. This code will have to be removed in YARN-3863 anyway.

bq. Related to one of the points above, at least we should add javadoc that 
clearly explains confs and metrics
Agree. Will add.

bq. l.139: nit: typo: releated -> related
Ok.

Other comments are due to YARN-4053 going in. Will fix them in the next version 
of the patch.


> Decide which contents to retrieve and send back in response in TimelineReader
> -
>
> Key: YARN-3862
> URL: https://issues.apache.org/jira/browse/YARN-3862
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3862-YARN-2928.wip.01.patch, 
> YARN-3862-YARN-2928.wip.02.patch, YARN-3862-YARN-2928.wip.03.patch, 
> YARN-3862-feature-YARN-2928.wip.03.patch
>
>
> Currently, we will retrieve all the contents of the field if that field is 
> specified in the query API. In case of configs and metrics, this can become a 
> lot of data even though the user doesn't need it. So we need to provide a way 
> to query only a set of configs or metrics.
> As a comma-separated list of configs/metrics to be returned will be quite 
> cumbersome to specify, we have to support either of the following options:
> # Prefix match
> # Regex
> # Group the configs/metrics and query that group.
> We also need a facility to specify a metric time window to return metrics in 
> that window. This may be useful for plotting graphs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024411#comment-15024411
 ] 

Hudson commented on YARN-4387:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2655 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2655/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* hadoop-yarn-project/CHANGES.txt


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4380) TestResourceLocalizationService.testDownloadingResourcesOnContainerKill fails intermittently on branch-2.8

2015-11-24 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024432#comment-15024432
 ] 

Varun Saxena commented on YARN-4380:


Ok. Will have a look.

> TestResourceLocalizationService.testDownloadingResourcesOnContainerKill fails 
> intermittently on branch-2.8
> --
>
> Key: YARN-4380
> URL: https://issues.apache.org/jira/browse/YARN-4380
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Varun Saxena
> Attachments: YARN-4380.01.patch, 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell-output.2.txt,
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService-output.txt
>
>
> {quote}
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.361 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testDownloadingResourcesOnContainerKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>   Time elapsed: 0.109 sec  <<< FAILURE!
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent:
> Argument(s) are different! Wanted:
> deletionService.delete(
> "user0",
> null,
> 
> );
> -> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testDownloadingResourcesOnContainerKill(TestResourceLocalizationService.java:1322)
> Actual invocation has different arguments:
> deletionService.delete(
> "user0",
> 
> /home/ubuntu/hadoop-dev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService/0/usercache/user0/appcache/application_314159265358979_0003/container_314159265358979_0003_01_42
> );
> -> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testDownloadingResourcesOnContainerKill(TestResourceLocalizationService.java:1296)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testDownloadingResourcesOnContainerKill(TestResourceLocalizationService.java:1322)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3862) Decide which contents to retrieve and send back in response in TimelineReader

2015-11-24 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024477#comment-15024477
 ] 

Varun Saxena commented on YARN-3862:


bq. Whether we make TimelineFilter part of the object model or not, we'll still 
need to come up with a way to support filter queries on the URLs, no?
The decision about making it part of the object model was primarily about how 
much control we want to give the client.
Moreover, the thought behind making it part of the object model is that the 
client will create an object of type TimelineFilterList, and this will be 
converted into a JSON string and sent in the query param. Something like below, 
where metricFilters is the query param. This can become quite complex, as a 
filter list can have another filter list in it, but on the server side it will 
be easy to parse since the JSON converter will do it for us. This, though, can 
make the URL quite big.
{{&metricFilters=\{"operator": "AND", "filters": \[\{"type": 
"COMPARE","key":"metric1", "value": "12345", "compareop": 
"GREATER_THAN\},\{"type": "COMPARE","key":"metric23", "value": "12", 
"compareop": "EQUALS\}\]\}}}
Alternatively, we can define some other way to represent this, say something 
like below. Here, we will have to do the parsing ourselves. We can go with 
acronyms like gt for greater than, eq for equals, ge for greater than or equal, 
and so on. As you can see below, it is exactly the same query as above, but 
since it is not a JSON representation, it is a lot shorter.
{{&metricFilters=(metric1 gt 12345) AND (metric23 eq 12)}}
This is what I meant by saying we have to decide whether to keep it as part of 
the object model or not.

bq. I just wanted to understand whether we need to make that call as part of 
this JIRA. Did I understand this correctly, or did I miss something important?
The current code is not hooked up to the REST layer, so it won't work end to 
end. However, the current patch has already become quite big, so we can handle 
the REST-related changes in another JIRA. I am fine with that.


> Decide which contents to retrieve and send back in response in TimelineReader
> -
>
> Key: YARN-3862
> URL: https://issues.apache.org/jira/browse/YARN-3862
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3862-YARN-2928.wip.01.patch, 
> YARN-3862-YARN-2928.wip.02.patch, YARN-3862-YARN-2928.wip.03.patch, 
> YARN-3862-feature-YARN-2928.wip.03.patch
>
>
> Currently, we will retrieve all the contents of the field if that field is 
> specified in the query API. In case of configs and metrics, this can become a 
> lot of data even though the user doesn't need it. So we need to provide a way 
> to query only a set of configs or metrics.
> As a comma-separated list of configs/metrics to be returned will be quite 
> cumbersome to specify, we have to support either of the following options:
> # Prefix match
> # Regex
> # Group the configs/metrics and query that group.
> We also need a facility to specify a metric time window to return metrics in 
> that window. This may be useful for plotting graphs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024507#comment-15024507
 ] 

Hudson commented on YARN-4367:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #636 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/636/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024506#comment-15024506
 ] 

Hudson commented on YARN-3980:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #636 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/636/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024553#comment-15024553
 ] 

Hudson commented on YARN-4387:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1446 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1446/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4304) AM max resource configuration per partition to be displayed/updated correctly in UI and in various partition related metrics

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024574#comment-15024574
 ] 

Sunil G commented on YARN-4304:
---

[~bibinchundatt], {{Memory Reserved}} is already a part of ClusterMetrics. 
Could you please explain what you intended to add here?
As part of this ticket, I will surely verify all cluster metrics with and 
without labels.

> AM max resource configuration per partition to be displayed/updated correctly 
> in UI and in various partition related metrics
> 
>
> Key: YARN-4304
> URL: https://issues.apache.org/jira/browse/YARN-4304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4304.patch
>
>
> As we are supporting per-partition max AM resource percentage configuration, 
> the UI and various metrics also need to display the correct configuration 
> values.
> For example, the current UI still shows the am-resource percentage at the 
> queue level. This should be updated correctly when label configuration is 
> used.
> - Display max-am-percentage per-partition in Scheduler UI (label also) and in 
> ClusterMetrics page
> - Update queue/partition related metrics w.r.t per-partition 
> am-resource-percentage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3946) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in CS

2015-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024623#comment-15024623
 ] 

Hadoop QA commented on YARN-3946:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} Patch generated 15 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 (total was 653, now 664). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 25s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 37s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 183m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.8.0_66 Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
| JDK v1.7.0_85 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions |
|   | hadoop.yarn.server.resourcemanage

[jira] [Created] (YARN-4388) Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml

2015-11-24 Thread Junping Du (JIRA)
Junping Du created YARN-4388:


 Summary: Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml
 Key: YARN-4388
 URL: https://issues.apache.org/jira/browse/YARN-4388
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Junping Du
Priority: Minor


It is obvious that "mapreduce.job.hdfs-servers" doesn't belong in the YARN 
configuration, so we should move it to mapred-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4388) Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml

2015-11-24 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned YARN-4388:


Assignee: Junping Du

> Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml
> --
>
> Key: YARN-4388
> URL: https://issues.apache.org/jira/browse/YARN-4388
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
>
> It is obvious that "mapreduce.job.hdfs-servers" doesn't belong in the YARN 
> configuration, so we should move it to mapred-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4386) refreshNodesGracefully() looks at active RMNode list for recommissioning decommissioned nodes

2015-11-24 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4386:
--
Attachment: YARN-4386-v1.patch

refreshNodesGracefully(): the if condition now checks only for decommissioning nodes. 
No tests are included since the observable behavior is the same before and after 
the change.
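A minimal sketch of the revised condition, for illustration only (the attached YARN-4386-v1.patch is authoritative): since DECOMMISSIONED nodes live in rmContext.getInactiveRMNodes() and never appear in the active-node loop, the recommission check only needs to look for DECOMMISSIONING.
{code}
// Sketch of the described change: within the active-node loop, recommission
// only DECOMMISSIONING nodes; DECOMMISSIONED nodes are not present in this map.
if (entry.getValue().getState() == NodeState.DECOMMISSIONING) {
  this.rmContext.getDispatcher().getEventHandler()
      .handle(new RMNodeEvent(nodeId, RMNodeEventType.RECOMMISSION));
}
{code}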

> refreshNodesGracefully() looks at active RMNode list for recommissioning 
> decommissioned nodes
> -
>
> Key: YARN-4386
> URL: https://issues.apache.org/jira/browse/YARN-4386
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Minor
> Attachments: YARN-4386-v1.patch
>
>
> In refreshNodesGracefully(), during recommissioning, the entryset from 
> getRMNodes() which has only active nodes (RUNNING, DECOMMISSIONING etc.) is 
> used for checking 'decommissioned' nodes which are present in 
> getInactiveRMNodes() map alone. 
> {code}
> for (Entry<NodeId, RMNode> entry : rmContext.getRMNodes().entrySet()) { 
> .
>  // Recommissioning the nodes
> if (entry.getValue().getState() == NodeState.DECOMMISSIONING
> || entry.getValue().getState() == NodeState.DECOMMISSIONED) {
>   this.rmContext.getDispatcher().getEventHandler()
>   .handle(new RMNodeEvent(nodeId, RMNodeEventType.RECOMMISSION));
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4388) Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml

2015-11-24 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-4388:
-
Target Version/s: 2.8.0

> Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml
> --
>
> Key: YARN-4388
> URL: https://issues.apache.org/jira/browse/YARN-4388
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: YARN-4388.patch
>
>
> It is obvious that "mapreduce.job.hdfs-servers" doesn't belong in the YARN 
> configuration, so we should move it to mapred-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4388) Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml

2015-11-24 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-4388:
-
Attachment: YARN-4388.patch

Uploading a patch which is quite straightforward.

> Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml
> --
>
> Key: YARN-4388
> URL: https://issues.apache.org/jira/browse/YARN-4388
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: YARN-4388.patch
>
>
> It is obvious that "mapreduce.job.hdfs-servers" doesn't belong in the YARN 
> configuration, so we should move it to mapred-default.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4304) AM max resource configuration per partition to be displayed/updated correctly in UI and in various partition related metrics

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024707#comment-15024707
 ] 

Sunil G commented on YARN-4304:
---

Looks like YARN-3432 is handling the issue of cluster metrics for Reserved 
Memory, so I will not make changes here for reserved metrics. I will try to help 
review this scenario in YARN-3432. Thanks [~bibinchundatt] for pointing this out.

> AM max resource configuration per partition to be displayed/updated correctly 
> in UI and in various partition related metrics
> 
>
> Key: YARN-4304
> URL: https://issues.apache.org/jira/browse/YARN-4304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4304.patch
>
>
> As we are supporting per-partition level max AM resource percentage 
> configuration, UI and various metrics also need to display correct 
> configurations related to same. 
> For eg: Current UI still shows am-resource percentage per queue level. This 
> is to be updated correctly when label config is used.
> - Display max-am-percentage per-partition in Scheduler UI (label also) and in 
> ClusterMetrics page
> - Update queue/partition related metrics w.r.t per-partition 
> am-resource-percentage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024763#comment-15024763
 ] 

Hudson commented on YARN-4298:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2574 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2574/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024760#comment-15024760
 ] 

Hudson commented on YARN-3980:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2574 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2574/])
YARN-3980. Plumb resource-utilization info in node heartbeat through to (kasha: 
rev 52948bb20bd1446164df1d3920c46c96dad750ae)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeStatusEvent.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/records/impl/pb/ResourceUtilizationPBImpl.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestMiniYarnClusterNodeUtilization.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/logaggregationstatus/TestRMAppLogAggregationStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMNodeTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java


> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4367) SLS webapp doesn't load

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024761#comment-15024761
 ] 

Hudson commented on YARN-4367:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2574 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2574/])
CHANGES.txt: add YARN-4367 to 2.8.0. (ozawa: rev 
fb0f09e46b456789ec1c7470873b6de231430773)
* hadoop-yarn-project/CHANGES.txt


> SLS webapp doesn't load
> ---
>
> Key: YARN-4367
> URL: https://issues.apache.org/jira/browse/YARN-4367
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.8.0
>
> Attachments: YARN-4367-branch-2.1.patch, YARN-4367-branch-2.2.patch, 
> YARN-4367-branch-2.patch
>
>
> When I run the SLS, the webapp doesn't load and I see the following error:
> {noformat}
> 15/11/17 15:33:30 INFO resourcemanager.ResourceManager: Using Scheduler: 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper
> java.lang.NullPointerException
> at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:87)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:483)
> at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:181)
> at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024844#comment-15024844
 ] 

Jason Lowe commented on YARN-4365:
--

Thanks for the patch, Kuhu!

The test mixes overriding and mocking approaches, which makes it confusing.  For 
example, setFileSystem was promoted to protected scope, yet that's unnecessary in 
the current patch.  The test is also spying on the node label manager and mocking 
Configuration unnecessarily.

Instead of all the mocking and stubbing, I think it would be more 
straightforward to simply override setFileSystem and have the test use a "real" 
FileSystemNodeLabelsStore rather than a mocked one where we pass through 
various methods.  The only mock at that point would be the filesystem that 
would be set in the overridden setFileSystem method.
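
A rough sketch of that structure, purely for illustration (the constructor argument and the exact setFileSystem signature below are assumptions, not the actual code):
{code}
// Only the FileSystem is mocked; the store itself is a real
// FileSystemNodeLabelsStore with setFileSystem overridden to inject it.
final FileSystem mockFs = Mockito.mock(FileSystem.class);
FileSystemNodeLabelsStore store = new FileSystemNodeLabelsStore(mgr) { // 'mgr': the manager under test
  @Override
  protected void setFileSystem(Configuration conf) throws IOException { // assumed signature
    fs = mockFs;
  }
};
Mockito.when(mockFs.exists(Mockito.any(Path.class))).thenReturn(false);
{code}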

There's also a misleading comment in the test:
{code}
// File Exists returns true the third time
Mockito.when(myStore.fs.exists(Mockito.any(Path.class))).thenReturn(false);
{code}


> FileSystemNodeLabelStore should check for root dir existence on startup
> ---
>
> Key: YARN-4365
> URL: https://issues.apache.org/jira/browse/YARN-4365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Kuhu Shukla
> Attachments: YARN-4365-1.patch
>
>
> If the namenode is in safe mode for some reason then FileSystemNodeLabelStore 
> will prevent the RM from starting since it unconditionally tries to create 
> the root directory for the label store.  If the root directory already exists 
> and no labels are changing then we shouldn't prevent the RM from starting 
> even if the namenode is in safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024847#comment-15024847
 ] 

Kuhu Shukla commented on YARN-4365:
---

Thanks a lot, Jason. Will update a revised patch shortly.

> FileSystemNodeLabelStore should check for root dir existence on startup
> ---
>
> Key: YARN-4365
> URL: https://issues.apache.org/jira/browse/YARN-4365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Kuhu Shukla
> Attachments: YARN-4365-1.patch
>
>
> If the namenode is in safe mode for some reason then FileSystemNodeLabelStore 
> will prevent the RM from starting since it unconditionally tries to create 
> the root directory for the label store.  If the root directory already exists 
> and no labels are changing then we shouldn't prevent the RM from starting 
> even if the namenode is in safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4388) Cleanup "mapreduce.job.hdfs-servers" from yarn-default.xml

2015-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024854#comment-15024854
 ] 

Hadoop QA commented on YARN-4388:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 27s 
{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
in trunk has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
14s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed with 
JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_85. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed with 
JDK v1.7.0_

[jira] [Created] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Junping Du (JIRA)
Junping Du created YARN-4389:


 Summary: "yarn.am.blacklisting.enabled" and 
"yarn.am.blacklisting.disable-failure-threshold" should be app specific rather 
than a setting for whole YARN cluster
 Key: YARN-4389
 URL: https://issues.apache.org/jira/browse/YARN-4389
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications
Reporter: Junping Du
Priority: Critical


"yarn.am.blacklisting.enabled" and 
"yarn.am.blacklisting.disable-failure-threshold" should be application specific 
rather than a cluster-level setting; otherwise we shouldn't maintain 
amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4380) TestResourceLocalizationService.testDownloadingResourcesOnContainerKill fails intermittently on branch-2.8

2015-11-24 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024872#comment-15024872
 ] 

Varun Saxena commented on YARN-4380:


[~ozawa], I checked the test failure.
This issue occurs because we do not wait after sending the 
DESTROY_APPLICATION_RESOURCES event, and we immediately check for the 
APPLICATION_RESOURCES_CLEANEDUP event being sent by the Resource Localization 
Service (upon processing of DESTROY_APPLICATION_RESOURCES).
{code}
  LocalizationEvent destroyApp =
  new ApplicationLocalizationEvent(
LocalizationEventType.DESTROY_APPLICATION_RESOURCES, app);
  spyService.handle(destroyApp);
  verify(applicationBus).handle(argThat(matchesAppDestroy));
{code}

Adding a {{dispatcher.await()}} statement between the spyService.handle and 
verify statements will resolve the issue.
This test case was broken by YARN-90, not YARN-2902, so we should raise 
another JIRA for it. As you found the issue, you can raise the JIRA and assign 
it to me. I will post a patch there.
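
For clarity, a minimal sketch of the fix (assuming {{dispatcher}} here is the test's DrainDispatcher):
{code}
LocalizationEvent destroyApp =
    new ApplicationLocalizationEvent(
        LocalizationEventType.DESTROY_APPLICATION_RESOURCES, app);
spyService.handle(destroyApp);
// Drain the async dispatcher so APPLICATION_RESOURCES_CLEANEDUP has actually
// been delivered before we verify it.
dispatcher.await();
verify(applicationBus).handle(argThat(matchesAppDestroy));
{code}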

> TestResourceLocalizationService.testDownloadingResourcesOnContainerKill fails 
> intermittently on branch-2.8
> --
>
> Key: YARN-4380
> URL: https://issues.apache.org/jira/browse/YARN-4380
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Varun Saxena
> Attachments: YARN-4380.01.patch, 
> org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell-output.2.txt,
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService-output.txt
>
>
> {quote}
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.361 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testDownloadingResourcesOnContainerKill(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>   Time elapsed: 0.109 sec  <<< FAILURE!
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent:
> Argument(s) are different! Wanted:
> deletionService.delete(
> "user0",
> null,
> 
> );
> -> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testDownloadingResourcesOnContainerKill(TestResourceLocalizationService.java:1322)
> Actual invocation has different arguments:
> deletionService.delete(
> "user0",
> 
> /home/ubuntu/hadoop-dev/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService/0/usercache/user0/appcache/application_314159265358979_0003/container_314159265358979_0003_01_42
> );
> -> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testDownloadingResourcesOnContainerKill(TestResourceLocalizationService.java:1296)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testDownloadingResourcesOnContainerKill(TestResourceLocalizationService.java:1322)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024920#comment-15024920
 ] 

Sunil G commented on YARN-4389:
---

Yes, +1 for this approach. 
It makes more sense to have this at the per-app level.

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024931#comment-15024931
 ] 

Sunil G commented on YARN-4389:
---

One more doubt here: I do not feel we need to deprecate or remove 
"yarn.am.blacklisting.enabled" and 
"yarn.am.blacklisting.disable-failure-threshold". They can be kept, and we can 
also override them if specified via the submission context. This will satisfy both 
cases; will this approach be fine, [~djp]?

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024959#comment-15024959
 ] 

Junping Du commented on YARN-4389:
--

Yes. This is what I was thinking and proposing in the description. :)

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024970#comment-15024970
 ] 

Sunil G commented on YARN-4389:
---

Yes :-) I missed the last statement. Thanks for confirming. :-) 

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4386) refreshNodesGracefully() looks at active RMNode list for recommissioning decommissioned nodes

2015-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024968#comment-15024968
 ] 

Hadoop QA commented on YARN-4386:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 23s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 154m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_85 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12774079/YARN-4386-v1.pat

[jira] [Commented] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024975#comment-15024975
 ] 

Sunil G commented on YARN-4389:
---

I could give a hand here if you haven't started. 

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024978#comment-15024978
 ] 

Junping Du commented on YARN-4389:
--

Sure, please feel free to take it, and I will help with the review and commit.

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4389) "yarn.am.blacklisting.enabled" and "yarn.am.blacklisting.disable-failure-threshold" should be app specific rather than a setting for whole YARN cluster

2015-11-24 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-4389:
-

Assignee: Sunil G

> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be app specific 
> rather than a setting for whole YARN cluster
> ---
>
> Key: YARN-4389
> URL: https://issues.apache.org/jira/browse/YARN-4389
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications
>Reporter: Junping Du
>Assignee: Sunil G
>Priority: Critical
>
> "yarn.am.blacklisting.enabled" and 
> "yarn.am.blacklisting.disable-failure-threshold" should be application 
> specific rather than a cluster-level setting; otherwise we shouldn't maintain 
> amBlacklistingEnabled and blacklistDisableThreshold at the per-RMApp level. We 
> should allow each AM to override this config, i.e. via the submissionContext.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024987#comment-15024987
 ] 

Hudson commented on YARN-4387:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #637 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/637/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* hadoop-yarn-project/CHANGES.txt


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4298) Fix findbugs warnings in hadoop-yarn-common

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15024988#comment-15024988
 ] 

Hudson commented on YARN-4298:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #637 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/637/])
YARN-4298. Fix findbugs warnings in hadoop-yarn-common. Contributed by 
(aajisaka: rev 8c0133f3d4c85733c1e0389d1ee39f1eab58c0b9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/AllocateResponsePBImpl.java
* hadoop-yarn-project/CHANGES.txt


> Fix findbugs warnings in hadoop-yarn-common
> ---
>
> Key: YARN-4298
> URL: https://issues.apache.org/jira/browse/YARN-4298
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Sunil G
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4298.patch, 0002-YARN-4298.patch
>
>
> {noformat}
>  classname='org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.builder;
>  locked 95% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.proto;
>  locked 94% of time' lineNumber='390'/>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.api.protocolrecords.impl.pb.AllocateResponsePBImpl.viaProto;
>  locked 94% of time' lineNumber='390'/>
> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4390) Consider container request size during CS preemption

2015-11-24 Thread Eric Payne (JIRA)
Eric Payne created YARN-4390:


 Summary: Consider container request size during CS preemption
 Key: YARN-4390
 URL: https://issues.apache.org/jira/browse/YARN-4390
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler
Affects Versions: 3.0.0, 2.8.0, 2.7.3
Reporter: Eric Payne
Assignee: Eric Payne


There are multiple reasons why preemption could unnecessarily preempt 
containers. One is that an app could be requesting a large container (say 
8-GB), and the preemption monitor could conceivably preempt multiple containers 
(say 8, 1-GB containers) in order to fill the large container request. These 
smaller containers would then be rejected by the requesting AM and potentially 
given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4390) Consider container request size during CS preemption

2015-11-24 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025044#comment-15025044
 ] 

Eric Payne commented on YARN-4390:
--

One approach to alleviate this would be to add a buffer zone around the pending 
calculations for resources on a particular queue.


> Consider container request size during CS preemption
> 
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4390) Consider container request size during CS preemption

2015-11-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025079#comment-15025079
 ] 

Sunil G commented on YARN-4390:
---

Hi [~eepayne]
Thank you for raising this one. We also ran into many of these use cases while 
testing preemption, and such cases are annoying.

Adding to the use case, these 8 selected containers could also run on multiple 
nodes. This would result in reservations for the requesting app and a further 
round of preemption in the next cycle.

As I see it, YARN-4108 is trying a lazy preemption approach. If I am not 
wrong, the scheduler will be able to detect whether a certain preemption unit 
chosen to satisfy one huge request is acceptable or not; if not, another 
preemption unit would be considered (a preemption unit being a collection of 
potentially to-be-preempted containers). 
I feel YARN-4108 may be a common solution for all such cases. Could you 
also please check that?
cc [~leftnoteasy] for clarification.

> Consider container request size during CS preemption
> 
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4304) AM max resource configuration per partition to be displayed/updated correctly in UI and in various partition related metrics

2015-11-24 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4304:
--
Attachment: REST_and_UI.zip

Attaching REST output and UI screenshots.

> AM max resource configuration per partition to be displayed/updated correctly 
> in UI and in various partition related metrics
> 
>
> Key: YARN-4304
> URL: https://issues.apache.org/jira/browse/YARN-4304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4304.patch, REST_and_UI.zip
>
>
> As we are supporting per-partition level max AM resource percentage 
> configuration, UI and various metrics also need to display correct 
> configurations related to same. 
> For eg: Current UI still shows am-resource percentage per queue level. This 
> is to be updated correctly when label config is used.
> - Display max-am-percentage per-partition in Scheduler UI (label also) and in 
> ClusterMetrics page
> - Update queue/partition related metrics w.r.t per-partition 
> am-resource-percentage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3862) Decide which contents to retrieve and send back in response in TimelineReader

2015-11-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025219#comment-15025219
 ] 

Sangjin Lee commented on YARN-3862:
---

I agree that currently the config or metric ids are used directly as the column 
names and what is being done in the patch is probably correct, and we would get 
the same result if we went with *ColumnPrefix.getColumnPrefixBytes().

I think one of the reasons that we still want to leverage *ColumnPrefix is 
because that way we're basically insulated against future changes. If we went 
with the approach that the patch proposes and the column name format for config 
or metric should change later, we would need to remember to visit 
TimelineFilterUtils and modify this method accordingly. That would be rather 
brittle.

Another interesting reason is consistency. Currently when configs and metrics 
are written, they go through ColumnHelper.getColumnQualifier() to create the 
column name bytes. ColumnHelper properly encodes them if there are spaces for 
example. It would be consistent to treat them the same way for the read path. I 
don't know that we allow spaces in config or metric names (I don't think we 
discussed that possibility), but at least that way we'd be consistent.

My proposal for doing this was using the byte array returned by

{code}
EntityColumnPrefix.CONFIG.getColumnPrefixBytes(prefix_from_the_filter)
{code}

to use as argument to the BinaryPrefixComparator constructor. We'd need to work 
out how the column prefix can be passed into TimelineFilterUtils. Hope this 
helps.

While we're at it, can we also refactor the calls to 
ColumnHelper.getColumnQualifier() in ApplicationColumnPrefix.store(), 
EntityColumnPrefix.store(), etc. to use getColumnPrefixBytes()?

bq. So prefixes in createHBaseColQualPrefixFilter() can be anything and cannot 
be fetched via a call to ColumnPrefix.getColumnPrefixBytes().

I'm not quite sure under what scenario 
ColumnPrefix.getColumnPrefixBytes(prefix_passed_by_users) would not work for 
this purpose. Could you kindly elaborate?

bq. Maybe confs and metrics can be renamed as configsToRetrieve and 
metricsToRetrieve respectively. Thoughts ?

Those sound better.

{quote}
Current code is not hooked up to the REST layer, so it wont work end to end. 
However, the current patch has already become quite big. So we can handle REST 
related changes in another JIRA. I am fine with that.
{quote}

+1. We can put that in another JIRA.

> Decide which contents to retrieve and send back in response in TimelineReader
> -
>
> Key: YARN-3862
> URL: https://issues.apache.org/jira/browse/YARN-3862
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3862-YARN-2928.wip.01.patch, 
> YARN-3862-YARN-2928.wip.02.patch, YARN-3862-YARN-2928.wip.03.patch, 
> YARN-3862-feature-YARN-2928.wip.03.patch
>
>
> Currently, we will retrieve all the contents of the field if that field is 
> specified in the query API. In case of configs and metrics, this can become a 
> lot of data even though the user doesn't need it. So we need to provide a way 
> to query only a set of configs or metrics.
> As a comma-separated list of configs/metrics to be returned would be quite 
> cumbersome to specify, we have to support one of the following options:
> # Prefix match
> # Regex
> # Group the configs/metrics and query that group.
> We also need a facility to specify a metric time window and return metrics in 
> that window. This may be useful in plotting graphs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4304) AM max resource configuration per partition to be displayed/updated correctly in UI and in various partition related metrics

2015-11-24 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4304:
--
Attachment: 0002-YARN-4304.patch

Attaching an updated version of the patch addressing the comments.

- {{getAccessibleNodeLabels}} can contain "*"; hence, when it is ANY, we 
need to consider the cluster labels (we can check which labels have 
resources in that queue at that time). The patch contains this change.
- It seems there was an existing bug in showing capacities in the REST API 
{{/ws/v1/cluster/scheduler}} output when labels were enabled; currently it shows 
only the default label. The changes are in {{CapacitySchedulerQueueInfo}}. I handled 
this fix here as well; if needed I can spin it out to another ticket, since it 
concerns node labels in the general case. Please advise.

[~leftnoteasy], could you please help check the patch?

> AM max resource configuration per partition to be displayed/updated correctly 
> in UI and in various partition related metrics
> 
>
> Key: YARN-4304
> URL: https://issues.apache.org/jira/browse/YARN-4304
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4304.patch, 0002-YARN-4304.patch, 
> REST_and_UI.zip
>
>
> As we are supporting per-partition level max AM resource percentage 
> configuration, UI and various metrics also need to display correct 
> configurations related to same. 
> For eg: Current UI still shows am-resource percentage per queue level. This 
> is to be updated correctly when label config is used.
> - Display max-am-percentage per-partition in Scheduler UI (label also) and in 
> ClusterMetrics page
> - Update queue/partition related metrics w.r.t per-partition 
> am-resource-percentage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4391) Exception is logged while accessing scheduler RM web UI when Node Labels are configured

2015-11-24 Thread Sunil G (JIRA)
Sunil G created YARN-4391:
-

 Summary: Exception is logged while accessing scheduler RM web UI 
when Node Labels are configured
 Key: YARN-4391
 URL: https://issues.apache.org/jira/browse/YARN-4391
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.7.1
Reporter: Sunil G
Assignee: Sunil G


Accessing the RM scheduler web UI with node labels configured causes an exception 
from the hamlet framework. {{QueuesBlock#render}} has thrown this error because the 
{{nestLevel}} was mismatched. Attaching the stack trace.
{noformat}
2015-11-23 20:52:40,249 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error 
handling URI: /cluster/scheduler
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:153)


Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error rendering 
block: nestLevel=8 expected 5
at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:71)
at 
org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
at 
org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
at 
org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$TD._(Hamlet.java:845)
at 
org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:56)
at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
at org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.scheduler(RmController.java:82)
... 47 more
{noformat}
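For context, the {{nestLevel=8 expected 5}} check is essentially a bookkeeping assertion: a 
block must close every element it opens. The snippet below is a conceptual illustration of 
that bookkeeping only, not the hamlet framework's actual code; all names in it are made up.
{code:java}
public class NestLevelCheckSketch {

  private int nestLevel = 0;

  void open(String tag)  { nestLevel++; }  // e.g. starting a table, tr, td, ...
  void close(String tag) { nestLevel--; }  // e.g. the matching close call

  /** Render a block and verify every element opened inside it was also closed. */
  void renderBlock(Runnable blockBody) {
    int expected = nestLevel;
    blockBody.run();
    if (nestLevel != expected) {
      throw new IllegalStateException(
          "Error rendering block: nestLevel=" + nestLevel + " expected " + expected);
    }
  }

  public static void main(String[] args) {
    NestLevelCheckSketch html = new NestLevelCheckSketch();
    // A buggy block: opens three elements but closes only one of them.
    html.renderBlock(() -> {
      html.open("table");
      html.open("tr");
      html.open("td");
      html.close("td");
      // missing close("tr") and close("table") -> nestLevel mismatch on exit
    });
  }
}
{code}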




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4365:
--
Attachment: YARN-4365-2.patch

Attaching revised patch, with a better test case and changing the scope of 
setFileSystem() from private to package private.

> FileSystemNodeLabelStore should check for root dir existence on startup
> ---
>
> Key: YARN-4365
> URL: https://issues.apache.org/jira/browse/YARN-4365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Kuhu Shukla
> Attachments: YARN-4365-1.patch, YARN-4365-2.patch
>
>
> If the namenode is in safe mode for some reason then FileSystemNodeLabelStore 
> will prevent the RM from starting since it unconditionally tries to create 
> the root directory for the label store.  If the root directory already exists 
> and no labels are changing then we shouldn't prevent the RM from starting 
> even if the namenode is in safe mode.
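A minimal sketch of the fix being discussed, assuming the plain {{FileSystem}} API; this is 
not the actual patch, and the store path below is illustrative only. The idea is to attempt 
{{mkdirs}} only when the root is missing, so an already-initialized store does not require a 
write while the namenode is in safe mode.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NodeLabelStoreInitSketch {

  /** Ensure the store root exists, creating it only when necessary. */
  static void ensureRootDir(FileSystem fs, Path root) throws IOException {
    if (fs.exists(root)) {
      // Nothing to create: an existing root must not trigger a write, which
      // would fail while the namenode is in safe mode.
      return;
    }
    if (!fs.mkdirs(root)) {
      throw new IOException("Unable to create node label store root: " + root);
    }
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path root = new Path("/tmp/node-labels"); // illustrative location only
    try (FileSystem fs = FileSystem.get(conf)) {
      ensureRootDir(fs, root);
    }
  }
}
{code}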



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025311#comment-15025311
 ] 

Hadoop QA commented on YARN-4365:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_85 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_85. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12774142/YARN-4365-2.patch |
| JIRA Issue | YARN-4365 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d650dc1aa099 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / db4cab2 |
| findbugs | v3.0.0 |
| JDK v1.7.0_85  Test Results | 
https://builds.ap

[jira] [Commented] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025325#comment-15025325
 ] 

Kuhu Shukla commented on YARN-4365:
---

[~jlowe], Request for comments/review. Thanks a lot.

> FileSystemNodeLabelStore should check for root dir existence on startup
> ---
>
> Key: YARN-4365
> URL: https://issues.apache.org/jira/browse/YARN-4365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Kuhu Shukla
> Attachments: YARN-4365-1.patch, YARN-4365-2.patch
>
>
> If the namenode is in safe mode for some reason then FileSystemNodeLabelStore 
> will prevent the RM from starting since it unconditionally tries to create 
> the root directory for the label store.  If the root directory already exists 
> and no labels are changing then we shouldn't prevent the RM from starting 
> even if the namenode is in safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4375) CapacityScheduler needs more debug logging for why queues don't get containers

2015-11-24 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated YARN-4375:
---
Attachment: YARN-4375.patch

[~sunilg], thanks for pointing me to those ongoing efforts. What I want to 
accomplish in this jira is simply to add more debug logging about what might go 
wrong when allocating containers to a queue. 
I have uploaded a patch which adds more debug logging in the regular container 
allocator to indicate problems with allocating containers.
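As a rough illustration of the kind of guarded debug logging meant here (the method and 
message fields below are hypothetical, not the contents of the attached patch):
{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class AllocatorDebugLoggingSketch {

  private static final Log LOG = LogFactory.getLog(AllocatorDebugLoggingSketch.class);

  /** Say why a queue was skipped instead of failing silently. */
  void logSkippedAllocation(String queueName, String reason,
      long requestedMb, long availableMb) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Skipping allocation for queue=" + queueName
          + " reason=" + reason
          + " requestedMB=" + requestedMb
          + " availableMB=" + availableMb);
    }
  }
}
{code}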

> CapacityScheduler needs more debug logging for why queues don't get containers
> --
>
> Key: YARN-4375
> URL: https://issues.apache.org/jira/browse/YARN-4375
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4375.patch
>
>
> CapacityScheduler needs more debug logging for why queues don't get containers



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4387) Fix typo in FairScheduler log message

2015-11-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025403#comment-15025403
 ] 

Hudson commented on YARN-4387:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2575 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2575/])
YARN-4387. Fix typo in FairScheduler log message. Contributed by Xin (ozawa: 
rev 28dfe721b86ccbaf2ddcfb7e709b226ac766803a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* hadoop-yarn-project/CHANGES.txt


> Fix typo in FairScheduler log message
> -
>
> Key: YARN-4387
> URL: https://issues.apache.org/jira/browse/YARN-4387
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Xin Wang
>Assignee: Xin Wang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4387.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025470#comment-15025470
 ] 

Jason Lowe commented on YARN-4365:
--

Rather than poking the store into the manager object, we don't need the manager 
object at all; we can just test the store object directly.  Note how the test 
simply uses the manager object as a place to hold the store but only ever 
manipulates the store directly by reaching into the manager object.


> FileSystemNodeLabelStore should check for root dir existence on startup
> ---
>
> Key: YARN-4365
> URL: https://issues.apache.org/jira/browse/YARN-4365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Kuhu Shukla
> Attachments: YARN-4365-1.patch, YARN-4365-2.patch
>
>
> If the namenode is in safe mode for some reason then FileSystemNodeLabelStore 
> will prevent the RM from starting since it unconditionally tries to create 
> the root directory for the label store.  If the root directory already exists 
> and no labels are changing then we shouldn't prevent the RM from starting 
> even if the namenode is in safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4365) FileSystemNodeLabelStore should check for root dir existence on startup

2015-11-24 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4365:
--
Attachment: YARN-4365-3.patch

Thank you [~jlowe]. Updated the patch.

> FileSystemNodeLabelStore should check for root dir existence on startup
> ---
>
> Key: YARN-4365
> URL: https://issues.apache.org/jira/browse/YARN-4365
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Kuhu Shukla
> Attachments: YARN-4365-1.patch, YARN-4365-2.patch, YARN-4365-3.patch
>
>
> If the namenode is in safe mode for some reason then FileSystemNodeLabelStore 
> will prevent the RM from starting since it unconditionally tries to create 
> the root directory for the label store.  If the root directory already exists 
> and no labels are changing then we shouldn't prevent the RM from starting 
> even if the namenode is in safe mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4334) Ability to avoid ResourceManager recovery if state store is "too old"

2015-11-24 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated YARN-4334:
---
Attachment: YARN-4334.4.2.patch

> Ability to avoid ResourceManager recovery if state store is "too old"
> -
>
> Key: YARN-4334
> URL: https://issues.apache.org/jira/browse/YARN-4334
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Jason Lowe
>Assignee: Chang Li
> Attachments: YARN-4334.2.patch, YARN-4334.3.patch, 
> YARN-4334.4.2.patch, YARN-4334.4.patch, YARN-4334.patch, 
> YARN-4334.wip.2.patch, YARN-4334.wip.3.patch, YARN-4334.wip.4.patch, 
> YARN-4334.wip.patch
>
>
> There are times when a ResourceManager has been down long enough that 
> ApplicationMasters and potentially external client-side monitoring mechanisms 
> have given up completely.  If the ResourceManager starts back up and tries to 
> recover we can get into situations where the RM launches new application 
> attempts for the AMs that gave up, but then the client _also_ launches 
> another instance of the app because it assumed everything was dead.
> It would be nice if the RM could be optionally configured to avoid trying to 
> recover if the state store was "too old."  The RM would come up without any 
> applications recovered, but we would avoid a double-submission situation.
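A minimal sketch of the check described above; the configuration key and the idea of reading 
a last-modified timestamp from the store are assumptions for illustration, not necessarily 
what the attached patches do.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class StaleStateStoreCheckSketch {

  // Hypothetical key; -1 (the default) disables the check.
  static final String MAX_AGE_KEY = "yarn.resourcemanager.state-store.max-age-ms";

  /** Returns true when recovery should be skipped because the store is too old. */
  static boolean storeTooOld(Configuration conf, long storeLastModifiedMs) {
    long maxAgeMs = conf.getLong(MAX_AGE_KEY, -1L);
    if (maxAgeMs <= 0) {
      return false; // check disabled
    }
    return System.currentTimeMillis() - storeLastModifiedMs > maxAgeMs;
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong(MAX_AGE_KEY, 6 * 60 * 60 * 1000L);                      // 6 hours
    long lastTouched = System.currentTimeMillis() - 8 * 60 * 60 * 1000L; // 8 hours ago
    System.out.println("skip recovery? " + storeTooOld(conf, lastTouched));
  }
}
{code}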



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4390) Consider container request size during CS preemption

2015-11-24 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025542#comment-15025542
 ] 

Bikas Saha commented on YARN-4390:
--

I am not sure if this is a bug as described. If preemption does free 8x1GB 
containers then it will create 8GB of free space on the node. The scheduler (which 
is aware of the container request size) should then allocate a 1x8GB container to the 
under-allocated AM. [~curino] Is that correct? Of course there could be a bug 
in the implementation, but by design this should not happen.

However, if YARN ends up preempting 8x1GB containers on different nodes then 
the under-allocated AM will not get its resources, which may result in further 
avoidable preemptions. This is [~sunilg]'s case.
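A toy illustration of that distinction (purely conceptual, not the preemption monitor's 
code): freed capacity only helps the pending 8GB request if it lands on a single node.
{code:java}
import java.util.Arrays;
import java.util.List;

public class PreemptionLocalitySketch {

  /** The request is satisfiable only if some single node frees enough memory. */
  static boolean canSatisfy(List<Integer> freedMbPerNode, int requestMb) {
    return freedMbPerNode.stream().anyMatch(free -> free >= requestMb);
  }

  public static void main(String[] args) {
    int requestMb = 8 * 1024;
    List<Integer> sameNode = Arrays.asList(8 * 1024);             // 8x1GB freed on one node
    List<Integer> spread = Arrays.asList(1024, 1024, 1024, 1024,
                                         1024, 1024, 1024, 1024); // 1GB freed on eight nodes
    System.out.println("freed on one node:    " + canSatisfy(sameNode, requestMb)); // true
    System.out.println("freed across 8 nodes: " + canSatisfy(spread, requestMb));   // false
  }
}
{code}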

> Consider container request size during CS preemption
> 
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4132) Nodemanagers should try harder to connect to the RM

2015-11-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-4132:
-
Component/s: nodemanager
 Issue Type: Improvement  (was: Bug)

+1 latest patch lgtm.  Committing this.

> Nodemanagers should try harder to connect to the RM
> ---
>
> Key: YARN-4132
> URL: https://issues.apache.org/jira/browse/YARN-4132
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4132.2.patch, YARN-4132.3.patch, YARN-4132.4.patch, 
> YARN-4132.5.patch, YARN-4132.6.2.patch, YARN-4132.6.patch, YARN-4132.7.patch, 
> YARN-4132.patch
>
>
> Being part of the cluster, nodemanagers should try very hard (and possibly 
> never give up) to connect to a resourcemanager. Minimally we should have a 
> separate config to set how aggressively a nodemanager will connect to the RM 
> separate from what clients will do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3980) Plumb resource-utilization info in node heartbeat through to the scheduler

2015-11-24 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025583#comment-15025583
 ] 

Inigo Goiri commented on YARN-3980:
---

Thank you for taking care of this!

> Plumb resource-utilization info in node heartbeat through to the scheduler
> --
>
> Key: YARN-3980
> URL: https://issues.apache.org/jira/browse/YARN-3980
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, scheduler
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: YARN-3980-v0.patch, YARN-3980-v1.patch, 
> YARN-3980-v2.patch, YARN-3980-v3.patch, YARN-3980-v4.patch, 
> YARN-3980-v5.patch, YARN-3980-v6.patch, YARN-3980-v7.patch, 
> YARN-3980-v8.patch, YARN-3980-v9.patch
>
>
> YARN-1012 and YARN-3534 collect resource utilization information for all 
> containers and the node respectively and send it to the RM on node heartbeat. 
> We should plumb it through to the scheduler so the scheduler can make use of 
> it. 
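For readers new to this thread, a conceptual sketch of what "plumbing through" means here: 
keep the latest utilization reported by each node's heartbeat on a scheduler-side map so 
allocation logic can consult it. The classes below are simplified stand-ins, not the real 
{{SchedulerNode}} or {{ResourceUtilization}} types.
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class UtilizationPlumbingSketch {

  /** Minimal stand-in for the utilization record carried in a node heartbeat. */
  static class NodeUtilization {
    final int physMemMb;
    final float cpuFraction;
    NodeUtilization(int physMemMb, float cpuFraction) {
      this.physMemMb = physMemMb;
      this.cpuFraction = cpuFraction;
    }
  }

  // Scheduler-side view: nodeId -> latest reported utilization.
  private final ConcurrentMap<String, NodeUtilization> byNode = new ConcurrentHashMap<>();

  /** Called on every node heartbeat; scheduling code reads the map later. */
  void onHeartbeat(String nodeId, NodeUtilization util) {
    byNode.put(nodeId, util);
  }

  NodeUtilization latest(String nodeId) {
    return byNode.get(nodeId);
  }

  public static void main(String[] args) {
    UtilizationPlumbingSketch s = new UtilizationPlumbingSketch();
    s.onHeartbeat("node-1:45454", new NodeUtilization(6 * 1024, 0.75f));
    NodeUtilization u = s.latest("node-1:45454");
    System.out.println("node-1 uses " + u.physMemMb + " MB, cpu=" + u.cpuFraction);
  }
}
{code}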



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4132) Separate configs for nodemanager to resourcemanager connection timeout and retries

2015-11-24 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-4132:
-
Hadoop Flags: Reviewed
 Summary: Separate configs for nodemanager to resourcemanager 
connection timeout and retries  (was: Nodemanagers should try harder to connect 
to the RM)

> Separate configs for nodemanager to resourcemanager connection timeout and 
> retries
> --
>
> Key: YARN-4132
> URL: https://issues.apache.org/jira/browse/YARN-4132
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4132.2.patch, YARN-4132.3.patch, YARN-4132.4.patch, 
> YARN-4132.5.patch, YARN-4132.6.2.patch, YARN-4132.6.patch, YARN-4132.7.patch, 
> YARN-4132.patch
>
>
> Being part of the cluster, nodemanagers should try very hard (and possibly 
> never give up) to connect to a resourcemanager. Minimally we should have a 
> separate config to set how aggressively a nodemanager will connect to the RM 
> separate from what clients will do.
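A rough sketch of the "separate config" idea above, with placeholder property names (the 
{{example.*}} keys below are illustrative only, not the keys this JIRA introduces): an 
NM-specific setting that falls back to the generic client setting when unset.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class NmRmConnectConfigSketch {

  static long nmConnectMaxWaitMs(Configuration conf) {
    // Both keys are hypothetical placeholders for illustration.
    long clientDefault = conf.getLong("example.client.connect.max-wait.ms",
        15 * 60 * 1000L);                                           // 15 minutes
    return conf.getLong("example.nodemanager.connect.max-wait.ms", clientDefault);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "Never give up" behaviour for the NM without changing what clients do.
    conf.setLong("example.nodemanager.connect.max-wait.ms", Long.MAX_VALUE);
    System.out.println("NM max wait ms: " + nmConnectMaxWaitMs(conf));
  }
}
{code}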



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4343) Need to support Application History Server on ATSV2

2015-11-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025614#comment-15025614
 ] 

Sangjin Lee commented on YARN-4343:
---

So if I understood correctly, what we're discussing here is YARN CLI 
support for handling YARN-generic app information coming from the timeline 
service storage, correct? We're *NOT* envisioning a separate notion of AHS 
here, correct?

I'm in agreement that we should see if we can do it on top of the REST API.

Could you come up with a simple proposal of what needs to be done and share it? 
I think that will help us move the discussion forward. Thanks!

> Need to support Application History Server on ATSV2
> ---
>
> Key: YARN-4343
> URL: https://issues.apache.org/jira/browse/YARN-4343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>
> AHS is used by the CLI and the web proxy (REST); if the application-related 
> information is not found in the RM, then it tries to fetch it from AHS and show it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4343) Need to support Application History Server on ATSV2

2015-11-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025619#comment-15025619
 ] 

Sangjin Lee commented on YARN-4343:
---

And if so, you might want to reword the title of this JIRA to summarize this 
more accurately. It wasn't very clear to me when I first saw it.

> Need to support Application History Server on ATSV2
> ---
>
> Key: YARN-4343
> URL: https://issues.apache.org/jira/browse/YARN-4343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>
> AHS is used by the CLI and the web proxy (REST); if the application-related 
> information is not found in the RM, then it tries to fetch it from AHS and show it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

