[jira] [Updated] (YARN-2314) ContainerManagementProtocolProxy can create thousands of threads for a large cluster

2014-10-16 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated YARN-2314:
---
Attachment: tez-yarn-2314.xlsx

Attaching the results of getProxy() calls for Tez with 20 nodes with this patch, 
for different cache sizes and different data sizes (tested a job at 200 GB and at 
10 TB scale).  Overall, there is a slight degradation in performance (in 
milliseconds) when the cache size is set to 0, but it is not significant enough 
to impact overall job runtime in Tez.
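
For reference, a minimal sketch (with an assumed config key, the one read by 
ContainerManagementProtocolProxy) of how the cache size was pinned for these runs:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

Configuration conf = new YarnConfiguration();
// Assumed key: a value of 0 disables the NM proxy cache entirely, so each
// getProxy() call constructs a fresh proxy that is closed after use.
conf.setInt("yarn.client.max-cached-nodemanagers-proxies", 0);
{code}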

 ContainerManagementProtocolProxy can create thousands of threads for a large 
 cluster
 

 Key: YARN-2314
 URL: https://issues.apache.org/jira/browse/YARN-2314
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: YARN-2314.patch, YARN-2314v2.patch, 
 disable-cm-proxy-cache.patch, nmproxycachefix.prototype.patch, 
 tez-yarn-2314.xlsx


 ContainerManagementProtocolProxy has a cache of NM proxies, and the size of 
 this cache is configurable.  However the cache can grow far beyond the 
 configured size when running on a large cluster and blow AM address/container 
 limits.  More details in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2314) ContainerManagementProtocolProxy can create thousands of threads for a large cluster

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173581#comment-14173581
 ] 

Hadoop QA commented on YARN-2314:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675252/tez-yarn-2314.xlsx
  against trunk revision 2894433.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5418//console

This message is automatically generated.

 ContainerManagementProtocolProxy can create thousands of threads for a large 
 cluster
 

 Key: YARN-2314
 URL: https://issues.apache.org/jira/browse/YARN-2314
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.1.0-beta
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: YARN-2314.patch, YARN-2314v2.patch, 
 disable-cm-proxy-cache.patch, nmproxycachefix.prototype.patch, 
 tez-yarn-2314.xlsx


 ContainerManagementProtocolProxy has a cache of NM proxies, and the size of 
 this cache is configurable.  However the cache can grow far beyond the 
 configured size when running on a large cluster and blow AM address/container 
 limits.  More details in the first comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-1879:
-
Attachment: YARN-1879.26.patch

Rebased on trunk.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
 YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2496) Changes for capacity scheduler to support allocate resource respect labels

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173643#comment-14173643
 ] 

Hudson commented on YARN-2496:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #713 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/713/])
YARN-2496. Enhanced Capacity Scheduler to have basic support for allocating 
resources based on node-labels. Contributed by Wangda Tan. (vinodkv: rev 
f2ea555ac6c06a3f2f6559731f48711fff05d3f1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerUtils.java
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationSystemTestUtil.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationACLs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Queue.java
* 

[jira] [Commented] (YARN-2312) Marking ContainerId#getId as deprecated

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173640#comment-14173640
 ] 

Hudson commented on YARN-2312:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #713 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/713/])
YARN-2312. Deprecated old ContainerId#getId API and updated MapReduce to use 
ContainerId#getContainerId instead. Contributed by Tsuyoshi OZAWA (jianhe: rev 
0af1a2b5bc1469ba22edb63cd58f9b436b1dc4d3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestYarnServerApiClasses.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/WrappedJvmID.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalizedResource.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JVMId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
* hadoop-yarn-project/CHANGES.txt


 Marking ContainerId#getId as deprecated
 ---

 Key: YARN-2312
 URL: https://issues.apache.org/jira/browse/YARN-2312
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.6.0

 Attachments: YARN-2312-branch-2.8.patch, YARN-2312-wip.patch, 
 

[jira] [Commented] (YARN-2685) Resource on each label not correct when multiple NMs in a same host and some has label some not

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173647#comment-14173647
 ] 

Hudson commented on YARN-2685:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #713 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/713/])
YARN-2685. Fixed a bug in CommonNodeLabelsManager that caused wrong resource 
tracking per label when a host runs multiple node-managers. Contributed by 
Wangda Tan. (vinodkv: rev b3056c266a628a65cf7ceb61b55ab3bd0a09baf2)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMNodeLabelsManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java
* hadoop-yarn-project/CHANGES.txt


 Resource on each label not correct when multiple NMs in a same host and some 
 has label some not
 ---

 Key: YARN-2685
 URL: https://issues.apache.org/jira/browse/YARN-2685
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.6.0

 Attachments: YARN-2685-20141013.1.patch


 I noticed there's one issue: when we have multiple NMs running on the same 
 host (say NM1-4 running on host1), and we specify that some of them have a 
 label and some do not, the total resource on the label is not correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173661#comment-14173661
 ] 

Hadoop QA commented on YARN-1879:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675257/YARN-1879.26.patch
  against trunk revision 2894433.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.client.TestResourceTrackerOnHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5419//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5419//console

This message is automatically generated.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
 YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2398) TestResourceTrackerOnHA crashes

2014-10-16 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2398:
-
Attachment: TestResourceTrackerOnHA-output.txt

Reproduced the issue on my local machine. Attaching the log.

 TestResourceTrackerOnHA crashes
 ---

 Key: YARN-2398
 URL: https://issues.apache.org/jira/browse/YARN-2398
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jason Lowe
 Attachments: TestResourceTrackerOnHA-output.txt


 TestResourceTrackerOnHA is currently crashing and failing trunk builds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173728#comment-14173728
 ] 

Tsuyoshi OZAWA commented on YARN-1879:
--

The test failure is not related to the patch and has been filed as YARN-2398 - it 
still fails without the patch.  [~jianhe], [~kkambatl], could you review the 
latest patch?

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
 YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2312) Marking ContainerId#getId as deprecated

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173760#comment-14173760
 ] 

Hudson commented on YARN-2312:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1903 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/])
YARN-2312. Deprecated old ContainerId#getId API and updated MapReduce to use 
ContainerId#getContainerId instead. Contributed by Tsuyoshi OZAWA (jianhe: rev 
0af1a2b5bc1469ba22edb63cd58f9b436b1dc4d3)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JVMId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestYarnServerApiClasses.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/WrappedJvmID.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalizedResource.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java


 Marking ContainerId#getId as deprecated
 ---

 Key: YARN-2312
 URL: https://issues.apache.org/jira/browse/YARN-2312
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.6.0

 Attachments: YARN-2312-branch-2.8.patch, YARN-2312-wip.patch, 
 

[jira] [Commented] (YARN-2496) Changes for capacity scheduler to support allocate resource respect labels

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173763#comment-14173763
 ] 

Hudson commented on YARN-2496:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1903 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/])
YARN-2496. Enhanced Capacity Scheduler to have basic support for allocating 
resources based on node-labels. Contributed by Wangda Tan. (vinodkv: rev 
f2ea555ac6c06a3f2f6559731f48711fff05d3f1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationSystemTestUtil.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMHA.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueMappings.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationACLs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSQueueUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
* 

[jira] [Commented] (YARN-2685) Resource on each label not correct when multiple NMs in a same host and some has label some not

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173767#comment-14173767
 ] 

Hudson commented on YARN-2685:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1903 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1903/])
YARN-2685. Fixed a bug in CommonNodeLabelsManager that caused wrong resource 
tracking per label when a host runs multiple node-managers. Contributed by 
Wangda Tan. (vinodkv: rev b3056c266a628a65cf7ceb61b55ab3bd0a09baf2)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMNodeLabelsManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java
* hadoop-yarn-project/CHANGES.txt


 Resource on each label not correct when multiple NMs in a same host and some 
 has label some not
 ---

 Key: YARN-2685
 URL: https://issues.apache.org/jira/browse/YARN-2685
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.6.0

 Attachments: YARN-2685-20141013.1.patch


 I noticed there's one issue: when we have multiple NMs running on the same 
 host (say NM1-4 running on host1), and we specify that some of them have a 
 label and some do not, the total resource on the label is not correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2496) Changes for capacity scheduler to support allocate resource respect labels

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173773#comment-14173773
 ] 

Hudson commented on YARN-2496:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1928 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1928/])
YARN-2496. Enhanced Capacity Scheduler to have basic support for allocating 
resources based on node-labels. Contributed by Wangda Tan. (vinodkv: rev 
f2ea555ac6c06a3f2f6559731f48711fff05d3f1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueMappings.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationACLs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/TestRMAppAttemptTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/PlanQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSQueueUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationSystemTestUtil.java
* 

[jira] [Commented] (YARN-2312) Marking ContainerId#getId as deprecated

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173770#comment-14173770
 ] 

Hudson commented on YARN-2312:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1928 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1928/])
YARN-2312. Deprecated old ContainerId#getId API and updated MapReduce to use 
ContainerId#getContainerId instead. Contributed by Tsuyoshi OZAWA (jianhe: rev 
0af1a2b5bc1469ba22edb63cd58f9b436b1dc4d3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalizedResource.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryStoreTestUtils.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestCheckpointPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestYarnServerApiClasses.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/local/LocalContainerAllocator.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestMemoryApplicationHistoryStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerIdPBImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JVMId.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/WrappedJvmID.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNM.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestContainerResourceUsage.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/MRAppBenchmark.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestFileSystemApplicationHistoryStore.java


 Marking ContainerId#getId as deprecated
 ---

 Key: YARN-2312
 URL: https://issues.apache.org/jira/browse/YARN-2312
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.6.0

 Attachments: YARN-2312-branch-2.8.patch, 

[jira] [Commented] (YARN-2685) Resource on each label not correct when multiple NMs in a same host and some has label some not

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173777#comment-14173777
 ] 

Hudson commented on YARN-2685:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1928 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1928/])
YARN-2685. Fixed a bug in CommonNodeLabelsManager that caused wrong resource 
tracking per label when a host runs multiple node-managers. Contributed by 
Wangda Tan. (vinodkv: rev b3056c266a628a65cf7ceb61b55ab3bd0a09baf2)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/CommonNodeLabelsManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/nodelabels/TestRMNodeLabelsManager.java


 Resource on each label not correct when multiple NMs in a same host and some 
 has label some not
 ---

 Key: YARN-2685
 URL: https://issues.apache.org/jira/browse/YARN-2685
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.6.0

 Attachments: YARN-2685-20141013.1.patch


 I noticed there's one issue: when we have multiple NMs running on the same 
 host (say NM1-4 running on host1), and we specify that some of them have a 
 label and some do not, the total resource on the label is not correct.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-2678) Recommended improvements to Yarn Registry

2014-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned YARN-2678:


Assignee: Steve Loughran

 Recommended improvements to Yarn Registry
 -

 Key: YARN-2678
 URL: https://issues.apache.org/jira/browse/YARN-2678
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Reporter: Gour Saha
Assignee: Steve Loughran

 In the process of binding to the Slider AM from the Slider agent Python code, 
 here are some of the items I stumbled upon and would recommend as improvements.
 This is how Slider's registry looks today -
 {noformat}
 jsonservicerec{
   "description" : "Slider Application Master",
   "external" : [ {
     "api" : "org.apache.slider.appmaster",
     "addressType" : "host/port",
     "protocolType" : "hadoop/protobuf",
     "addresses" : [ [ "c6408.ambari.apache.org", 34837 ] ]
   }, {
     "api" : "org.apache.http.UI",
     "addressType" : "uri",
     "protocolType" : "webui",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314" ] ]
   }, {
     "api" : "org.apache.slider.management",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/mgmt" ] ]
   }, {
     "api" : "org.apache.slider.publisher",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/publisher" ] ]
   }, {
     "api" : "org.apache.slider.registry",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/registry" ] ]
   }, {
     "api" : "org.apache.slider.publisher.configurations",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/publisher/slider" ] ]
   } ],
   "internal" : [ {
     "api" : "org.apache.slider.agents.secure",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "https://c6408.ambari.apache.org:46958/ws/v1/slider/agents" ] ]
   }, {
     "api" : "org.apache.slider.agents.oneway",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "https://c6408.ambari.apache.org:57513/ws/v1/slider/agents" ] ]
   } ],
   "yarn:persistence" : "application",
   "yarn:id" : "application_1412974695267_0015"
 }
 {noformat}
 Recommendations:
 1. I would suggest to either remove the string 
 {color:red}jsonservicerec{color} or, if it is desirable to have non-null 
 data at all times, then loop the string into the json structure as a top-level 
 attribute, to ensure that the registry data is always a valid json document. 
 2. The {color:red}addresses{color} attribute is currently a list of lists. I 
 would recommend converting it to a list of dictionary objects. In the 
 dictionary object it would be nice to have the host and port portions of 
 objects of addressType "uri" as separate key-value pairs, to avoid parsing on 
 the client side. The URI should also be retained under a key, say "uri", to 
 avoid clients having to generate it by concatenating host, port, resource-path, 
 etc. Here is a proposed structure -
 {noformat}
 {
   ...
   "internal" : [ {
     "api" : "org.apache.slider.agents.secure",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [
       { "uri"  : "https://c6408.ambari.apache.org:46958/ws/v1/slider/agents",
         "host" : "c6408.ambari.apache.org",
         "port" : 46958
       }
     ]
   } ],
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173939#comment-14173939
 ] 

Wangda Tan commented on YARN-2504:
--

[~sunilg], 
an empty response in an RPC protocol is the same as a void return in a Java 
method. And any exception thrown will be wrapped as a YarnException, so the 
client can get it.
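
For illustration, a hedged client-side sketch of what that means (the admin 
RPC shown is an assumption based on the patch under review):
{code}
// The RPC returns an "empty" response record, so the caller treats the call
// like a void Java method; any server-side failure surfaces as a
// YarnException that the client can catch and report.
try {
  adminProtocol.addToClusterNodeLabels(request);  // assumed admin RPC
} catch (YarnException e) {
  System.err.println("RM rejected the request: " + e.getMessage());
}
{code}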

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2678) Recommended improvements to Yarn Registry

2014-10-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173945#comment-14173945
 ] 

Steve Loughran commented on YARN-2678:
--

Gour: I'll do these, as they seem good and they need to be done before the 
registry ships with hadoop.

w.r.t. the header in the ZK nodes, it's there to deal with two problems:
* znodes with 0 bytes of data still have a stated size of 12 bytes. Requiring 
16 bytes of header makes it trivial to decide whether or not a node has data 
(which makes enumerating child records faster)
* it allows for future expansion to have different record types.

Here's what I propose
# drop the header
# add a {{type}} field to the json
# mandate a service record type declaration
{code}
"type" : "ServiceRecord-1.0.0"
{code}
# declare that the presence of the byte sequence {{ServiceRecord-1.0.0}} 
implies the entry is a service record. JSON is UTF-8 encoded, so this matches 
the value of the {{type}} field.
# if the string is present, the entry MUST be parseable as a service record
# declare that the absence of the sequence implies that there is no service 
record there.

Parsing/validating becomes one of
{code}
if len(data) < len("ServiceRecord-1.0.0"): raise NotFound
if not contains(data, "ServiceRecord-1.0.0"): raise NotFound
if not parse(data): raise InvalidRecord
if not valid(parse(data)): raise InvalidRecord
{code}

That is, if the string isn't there, it is not a parse error; it is simply not 
a record.

Validation becomes
# assert the presence of {{"type" : "ServiceRecord-1.0.0"}}
# forall endpoints, {{valid(endpoint)}}
# endpoints are valid if they follow the structure, all elements in the 
dictionary of an address are simple strings, etc.
That's it: all other fields are optional.

I'll update the .tla file and the code to match
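
For illustration, a minimal Java sketch of the presence check proposed above 
(class and method names are hypothetical; the marker is assumed to be the raw 
UTF-8 bytes of the proposed type value):
{code}
import java.nio.charset.StandardCharsets;

public final class ServiceRecordMarker {
  private static final byte[] MARKER =
      "ServiceRecord-1.0.0".getBytes(StandardCharsets.UTF_8);

  /** True iff the znode data contains the marker byte sequence. */
  public static boolean looksLikeServiceRecord(byte[] data) {
    if (data == null || data.length < MARKER.length) {
      return false; // too short to hold the marker: not a record
    }
    outer:
    for (int i = 0; i <= data.length - MARKER.length; i++) {
      for (int j = 0; j < MARKER.length; j++) {
        if (data[i + j] != MARKER[j]) {
          continue outer;
        }
      }
      return true; // marker found: treat the entry as a service record
    }
    return false; // absent marker: not a parse error, simply not a record
  }
}
{code}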

 Recommended improvements to Yarn Registry
 -

 Key: YARN-2678
 URL: https://issues.apache.org/jira/browse/YARN-2678
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Reporter: Gour Saha
Assignee: Steve Loughran

 In the process of binding to the Slider AM from the Slider agent Python code, 
 here are some of the items I stumbled upon and would recommend as improvements.
 This is how Slider's registry looks today -
 {noformat}
 jsonservicerec{
   "description" : "Slider Application Master",
   "external" : [ {
     "api" : "org.apache.slider.appmaster",
     "addressType" : "host/port",
     "protocolType" : "hadoop/protobuf",
     "addresses" : [ [ "c6408.ambari.apache.org", 34837 ] ]
   }, {
     "api" : "org.apache.http.UI",
     "addressType" : "uri",
     "protocolType" : "webui",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314" ] ]
   }, {
     "api" : "org.apache.slider.management",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/mgmt" ] ]
   }, {
     "api" : "org.apache.slider.publisher",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/publisher" ] ]
   }, {
     "api" : "org.apache.slider.registry",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/registry" ] ]
   }, {
     "api" : "org.apache.slider.publisher.configurations",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "http://c6408.ambari.apache.org:43314/ws/v1/slider/publisher/slider" ] ]
   } ],
   "internal" : [ {
     "api" : "org.apache.slider.agents.secure",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "https://c6408.ambari.apache.org:46958/ws/v1/slider/agents" ] ]
   }, {
     "api" : "org.apache.slider.agents.oneway",
     "addressType" : "uri",
     "protocolType" : "REST",
     "addresses" : [ [ "https://c6408.ambari.apache.org:57513/ws/v1/slider/agents" ] ]
   } ],
   "yarn:persistence" : "application",
   "yarn:id" : "application_1412974695267_0015"
 }
 {noformat}
 Recommendations:
 1. I would suggest to either remove the string 
 {color:red}jsonservicerec{color} or, if it is desirable to have non-null 
 data at all times, then loop the string into the json structure as a top-level 
 attribute, to ensure that the registry data is always a valid json document. 
 2. The {color:red}addresses{color} attribute is currently a list of lists. I 
 would recommend converting it to a list of dictionary objects. In the 
 dictionary object it would be nice to have the host and port portions of 
 objects of addressType "uri" as separate key-value pairs, to avoid parsing on 
 the client side. The URI should also be retained under a key, say "uri", to 
 avoid clients having to generate it by concatenating host, port, resource-path, 
 etc. Here is a proposed structure -
 {noformat}
 {
   ...
   "internal" : [ {
     "api" : "org.apache.slider.agents.secure",
     "addressType" : "uri",
 

[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174042#comment-14174042
 ] 

Jian He commented on YARN-1879:
---

looks good overall, one minor comment in the test 
- To avoid manually injecting the token, we could save the ugi object while 
doing MockAM#register, and use the same object for unregistering, 
{code}
// Saving a token for retry.
Token<AMRMTokenIdentifier> token =
    rm1.getRMContext().getRMApps().get(
        am0.getApplicationAttemptId().getApplicationId())
        .getRMAppAttempt(am0.getApplicationAttemptId()).getAMRMToken();
{code}
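
A rough sketch of that suggestion (helper usage assumed), saving the ugi at 
registration time and reusing the same object for unregistering:
{code}
// Save the ugi used for MockAM#register and reuse it for unregistering after
// RM restart, instead of injecting the AMRMToken manually.
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(
    am0.getApplicationAttemptId().toString());
ugi.addTokenIdentifier(token.decodeIdentifier());
// ... run register and, after the restart, unregister inside ugi.doAs(...)
// so the same credentials are presented both times.
{code}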

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
 YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2588) Standby RM does not transitionToActive if previous transitionToActive is failed with ZK exception.

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174062#comment-14174062
 ] 

Jian He commented on YARN-2588:
---

Thanks for updating!
bq. I thought of completely removing the active-services initialization for 
transitionToStandby, but it has a potential dependency on starting RMWebApp. I 
do not know why RMWebApp has a dependency on activeServices for starting in 
standby mode.
Didn't quite get what you mean. RMWebApp is started in both standby and active.

 Standby RM does not transitionToActive if previous transitionToActive is 
 failed with ZK exception.
 --

 Key: YARN-2588
 URL: https://issues.apache.org/jira/browse/YARN-2588
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.6.0, 2.5.1
Reporter: Rohith
Assignee: Rohith
 Attachments: YARN-2588.1.patch, YARN-2588.patch


 Consider a scenario where the standby RM fails to transition to active because 
 of a ZK exception (connectionLoss or SessionExpired). Then any further 
 transition to active for the same RM does not move the RM to the active state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-1879:
-
Attachment: YARN-1879.27.patch

Updated to reuse the ugi object before and after restart.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.27.patch, YARN-1879.3.patch, YARN-1879.4.patch, 
 YARN-1879.5.patch, YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, 
 YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-2699:


 Summary: Fix test timeout in 
TestResourceTrackerOnHA#testResourceTrackerOnHA
 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan


Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
manager with port=0 is no longer allowed. 
TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2676) Timeline authentication filter should add support for proxy user

2014-10-16 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-2676:
--
Attachment: YARN-2676.2.patch

Added a test case that does end-to-end verification when Kerberos 
authentication is enabled for the timeline server.

 Timeline authentication filter should add support for proxy user
 

 Key: YARN-2676
 URL: https://issues.apache.org/jira/browse/YARN-2676
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2676.1.patch, YARN-2676.2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174199#comment-14174199
 ] 

Hadoop QA commented on YARN-1879:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675326/YARN-1879.27.patch
  against trunk revision 2894433.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens
  
org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage

  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.client.TestResourceTrackerOnHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5420//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5420//console

This message is automatically generated.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.27.patch, YARN-1879.3.patch, YARN-1879.4.patch, 
 YARN-1879.5.patch, YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, 
 YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2699:
-
Attachment: YARN-2699-20141016-1.patch

Attached a fix for this

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2699-20141016-1.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2689) TestSecureRMRegistryOperations failing on windows

2014-10-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174254#comment-14174254
 ] 

Steve Loughran commented on YARN-2689:
--

Applying this patch, but still seeing other problems that surface once the ZK 
registry is actually up and running in secure mode.

 TestSecureRMRegistryOperations failing on windows
 -

 Key: YARN-2689
 URL: https://issues.apache.org/jira/browse/YARN-2689
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.6.0
 Environment: Windows server, Java 7, ZK 3.4.6
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2689-001.patch


 The micro ZK service used in the {{TestSecureRMRegistryOperations}} test 
 doesn't start on Windows:
 {code}
 org.apache.hadoop.service.ServiceStateException: java.io.IOException: Could 
 not configure server because SASL configuration did not allow the  ZooKeeper 
 server to authenticate itself properly: 
 javax.security.auth.login.LoginException: Unable to obtain password from user
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2689) TestSecureRMRegistryOperations failing on windows: secure ZK won't start

2014-10-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-2689:
-
Summary: TestSecureRMRegistryOperations failing on windows: secure ZK won't 
start  (was: TestSecureRMRegistryOperations failing on windows)

 TestSecureRMRegistryOperations failing on windows: secure ZK won't start
 

 Key: YARN-2689
 URL: https://issues.apache.org/jira/browse/YARN-2689
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.6.0
 Environment: Windows server, Java 7, ZK 3.4.6
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2689-001.patch


 The micro ZK service used in the {{TestSecureRMRegistryOperations}} test 
 doesn't start on Windows:
 {code}
 org.apache.hadoop.service.ServiceStateException: java.io.IOException: Could 
 not configure server because SASL configuration did not allow the  ZooKeeper 
 server to authenticate itself properly: 
 javax.security.auth.login.LoginException: Unable to obtain password from user
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-1879:
-
Attachment: YARN-1879.28.patch

Fixed the test failures in TestContainerResourceUsage and TestClientToAMTokens.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.27.patch, YARN-1879.28.patch, 
 YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, YARN-1879.6.patch, 
 YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2689) TestSecureRMRegistryOperations failing on windows: secure ZK won't start

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174282#comment-14174282
 ] 

Hudson commented on YARN-2689:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6273 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6273/])
YARN-2689 TestSecureRMRegistryOperations failing on windows: secure ZK won't 
start (stevel: rev 6f43491c0343cfef36e9be5dfd06447cf2fee377)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRegistry.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/CuratorService.java


 TestSecureRMRegistryOperations failing on windows: secure ZK won't start
 

 Key: YARN-2689
 URL: https://issues.apache.org/jira/browse/YARN-2689
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.6.0
 Environment: Windows server, Java 7, ZK 3.4.6
Reporter: Steve Loughran
Assignee: Steve Loughran
 Fix For: 2.6.0

 Attachments: YARN-2689-001.patch


 The micro ZK service used in the {{TestSecureRMRegistryOperations}} test 
 doesn't start on Windows:
 {code}
 org.apache.hadoop.service.ServiceStateException: java.io.IOException: Could 
 not configure server because SASL configuration did not allow the  ZooKeeper 
 server to authenticate itself properly: 
 javax.security.auth.login.LoginException: Unable to obtain password from user
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2700) TestSecureRMRegistryOperations failing on windows: auth problems

2014-10-16 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-2700:


 Summary: TestSecureRMRegistryOperations failing on windows: auth 
problems
 Key: YARN-2700
 URL: https://issues.apache.org/jira/browse/YARN-2700
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.6.0
 Environment: Windows Server, Win7
Reporter: Steve Loughran
Assignee: Steve Loughran


TestSecureRMRegistryOperations is failing on Windows: unable to create the root 
/registry path due to permissions problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174404#comment-14174404
 ] 

Hadoop QA commented on YARN-1879:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675357/YARN-1879.28.patch
  against trunk revision 6f43491.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5422//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5422//console

This message is automatically generated.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.27.patch, YARN-1879.28.patch, 
 YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, YARN-1879.6.patch, 
 YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2699:
-
Priority: Blocker  (was: Major)

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-570) Time strings are formated in different timezone

2014-10-16 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174423#comment-14174423
 ] 

Ray Chiang commented on YARN-570:
-

Verified.  The time formatting looks uniform to me.
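
For reference, the mismatch that was fixed can be reproduced with two 
formatters, one pinned to GMT and one using the JVM default timezone 
(illustrative only, not the actual webapp code):
{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimezoneMismatch {
  public static void main(String[] args) {
    long ts = 1365582596000L; // Wed, 10 Apr 2013 08:29:56 GMT
    SimpleDateFormat gmt =
        new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'");
    gmt.setTimeZone(TimeZone.getTimeZone("GMT"));
    SimpleDateFormat local = new SimpleDateFormat("dd-MMM-yyyy HH:mm:ss");
    // "local" uses the JVM default timezone; on a GMT+8 machine it prints
    // 10-Apr-2013 16:29:56 for the same instant.
    System.out.println(gmt.format(new Date(ts)));
    System.out.println(local.format(new Date(ts)));
  }
}
{code}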

 Time strings are formated in different timezone
 ---

 Key: YARN-570
 URL: https://issues.apache.org/jira/browse/YARN-570
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.2.0
Reporter: Peng Zhang
Assignee: Akira AJISAKA
 Attachments: MAPREDUCE-5141.patch, YARN-570.2.patch, 
 YARN-570.3.patch, YARN-570.4.patch, YARN-570.5.patch


 Time strings on different pages are displayed in different timezones.
 If rendered by renderHadoopDate() in yarn.dt.plugins.js, a time appears as 
 Wed, 10 Apr 2013 08:29:56 GMT.
 If formatted by format() in yarn.util.Times, the same time appears as 
 10-Apr-2013 16:29:56.
 Same value, but different timezones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174437#comment-14174437
 ] 

Vinod Kumar Vavilapalli commented on YARN-2699:
---

Can you paste the exception? Not sure what the issue is here.

IAC, not sure if changing the NodeId to use a non-zero port is enough. In the 
future, others will write tests that will run into the same problem. I get a 
feeling there is a code issue in how labels are handled when a node's port is 
zero.

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2621) Simplify the output when the user doesn't have the access for getDomain(s)

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174449#comment-14174449
 ] 

Jian He commented on YARN-2621:
---

+1. Thanks Li for reviewing the patch!


 Simplify the output when the user doesn't have the access for getDomain(s) 
 ---

 Key: YARN-2621
 URL: https://issues.apache.org/jira/browse/YARN-2621
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2621.1.patch


 Per the discussion in 
 [YARN-2446|https://issues.apache.org/jira/browse/YARN-2446?focusedCommentId=14151272&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14151272],
 we should simply reject the user if it doesn't have access to the domain(s), 
 instead of returning the entity without detailed information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2621) Simplify the output when the user doesn't have the access for getDomain(s)

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174475#comment-14174475
 ] 

Hudson commented on YARN-2621:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6274 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6274/])
YARN-2621. Simplify the output when the user doesn't have the access for 
getDomain(s). Contributed by Zhijie Shen (jianhe: rev 
233d446be1bc1bc77c7c1c45386086a732897afd)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/TimelineDataManager.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java


 Simplify the output when the user doesn't have the access for getDomain(s) 
 ---

 Key: YARN-2621
 URL: https://issues.apache.org/jira/browse/YARN-2621
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.6.0

 Attachments: YARN-2621.1.patch


 Per the discussion in 
 [YARN-2446|https://issues.apache.org/jira/browse/YARN-2446?focusedCommentId=14151272&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14151272],
 we should simply reject the user if it doesn't have access to the domain(s), 
 instead of returning the entity without detailed information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174480#comment-14174480
 ] 

Wangda Tan commented on YARN-2699:
--

The exception is,
{code}
2014-10-16 16:41:11,681 FATAL resourcemanager.ResourceManager 
(ResourceManager.java:run(668)) - Error in handling event type NODE_ADDED to 
the scheduler
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager.activateNode(RMNodeLabelsManager.java:186)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.addNode(CapacityScheduler.java:1124)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1035)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:659)
at java.lang.Thread.run(Thread.java:744)
{code}

Also attached a new fix to handle the case where some test cases may 
unintentionally specify a nodeId with port = 0; we shouldn't raise any 
exception on the NodeLabelManager side.
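
A minimal sketch of the kind of guard meant here, assuming the 
{{nodeCollections}} map and {{Host}} type visible in the stack trace (the 
actual fix in RMNodeLabelsManager may differ):
{code}
// Tolerate a host that was never added with labels (e.g. a test NM registered
// with port = 0) instead of throwing NullPointerException in activateNode().
Host host = nodeCollections.get(nodeId.getHost());
if (host == null) {
  host = new Host(); // fall back to an empty, unlabeled host entry
  nodeCollections.put(nodeId.getHost(), host);
}
{code}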

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2699:
-
Attachment: YARN-2699-20141016-2.patch

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch, YARN-2699-20141016-2.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2699:
-
Attachment: YARN-2699-20141016-3.patch

We shouldn't change the timeout of another test; attached ver.3.

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch, YARN-2699-20141016-2.patch, 
 YARN-2699-20141016-3.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2701) Potential race condition in startLocalizer when using LinuxContainerExecutor

2014-10-16 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-2701:
---

 Summary: Potential race condition in startLocalizer when using 
LinuxContainerExecutor  
 Key: YARN-2701
 URL: https://issues.apache.org/jira/browse/YARN-2701
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong


When LinuxContainerExecutor does startLocalizer, we use the native code in 
container-executor.c:
{code}
 if (stat(npath, &sb) != 0) {
   if (mkdir(npath, perm) != 0) {
{code}
We use a check-then-create approach to create the appDir under /usercache, but 
if two containers try to do this at the same time, a race condition may 
happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2701) Potential race condition in startLocalizer when using LinuxContainerExecutor

2014-10-16 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174496#comment-14174496
 ] 

Xuan Gong commented on YARN-2701:
-

To solve this problem, we need to change the native code:
When the mkdir call fails, we need to check the error type. If it is EEXIST 
(the directory already exists), we should check whether the permissions of the 
existing directory match the desired permissions. If they do, we should not 
fail the localization process.
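
The same idea expressed in Java for illustration only (the real fix belongs in 
the native container-executor.c; the class and method names below are made up 
for this sketch):
{code}
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

class RaceTolerantMkdir {
  // Create appDir tolerating a concurrent creator: fail only when the
  // directory already exists with permissions other than the desired ones.
  static void createAppDir(Path appDir, Set<PosixFilePermission> perms)
      throws IOException {
    try {
      Files.createDirectory(appDir,
          PosixFilePermissions.asFileAttribute(perms));
    } catch (FileAlreadyExistsException e) {
      if (!Files.getPosixFilePermissions(appDir).equals(perms)) {
        throw e; // exists, but with the wrong permissions: still an error
      }
      // Another container won the race with the right permissions: proceed.
    }
  }
}
{code}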

 Potential race condition in startLocalizer when using LinuxContainerExecutor  
 --

 Key: YARN-2701
 URL: https://issues.apache.org/jira/browse/YARN-2701
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong

 When LinuxContainerExecutor does startLocalizer, we use the native code in 
 container-executor.c:
 {code}
  if (stat(npath, &sb) != 0) {
    if (mkdir(npath, perm) != 0) {
 {code}
 We use a check-then-create approach to create the appDir under /usercache, 
 but if two containers try to do this at the same time, a race condition may 
 happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2701) Potential race condition in startLocalizer when using LinuxContainerExecutor

2014-10-16 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-2701:

Attachment: YARN-2701.1.patch

 Potential race condition in startLocalizer when using LinuxContainerExecutor  
 --

 Key: YARN-2701
 URL: https://issues.apache.org/jira/browse/YARN-2701
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-2701.1.patch


 When LinuxContainerExecutor does startLocalizer, we use the native code in 
 container-executor.c:
 {code}
  if (stat(npath, &sb) != 0) {
    if (mkdir(npath, perm) != 0) {
 {code}
 We use a check-then-create approach to create the appDir under /usercache, 
 but if two containers try to do this at the same time, a race condition may 
 happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2505) Support get/add/remove/change labels in RM REST API

2014-10-16 Thread Craig Welch (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174508#comment-14174508
 ] 

Craig Welch commented on YARN-2505:
---

So, suggesting a slight modification to achieve greater RESTfulness, and for 
practical reasons - I think PUT and DELETE operations should operate on only a 
single element in a path-oriented style, and POST with a serialized payload 
should be used for batch operations.  I think that addition and removal of 
labels on a node do not presently need a batch operation.  So:

GET .../cluster/node-label/a 
return value indicates presence or absence of a

PUT .../cluster/node-label/a 
creates a new node label, a

DELETE .../cluster/node-label/a 
deletes an existing node label, a

POST .../cluster/node-labels 
(serialized data) adds multiple labels in one operation

GET .../cluster/node-labels
returns multiple labels as serialized data (all labels)

GET .../cluster/node/id/label/a
indicates existence of label a on node id by return value

PUT .../cluster/node/id/label/a
adds label a to node id

DELETE .../cluster/node/id/label/a
deletes label a from node id

GET .../cluster/node/id/labels
returns the serialized set of all labels for node id

(I don't think we need the POST for individual nodes; if we did it, it would be 
a serialized set of labels to add to the node at: POST 
.../cluster/node/id/labels)

I notice that what I think should be /node/ is presently /nodes/; I could keep 
it so for consistency (but I'd rather do the above, which is what I think it 
should be for individual node manipulations).
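
For illustration, exercising one of the proposed endpoints from Java might look 
like this (the /ws/v1 REST root, RM host:port, and status code are assumptions, 
not part of the proposal):
{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class NodeLabelRestSketch {
  public static void main(String[] args) throws Exception {
    // PUT .../cluster/node-label/a : creates a new node label "a"
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/node-label/a");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    System.out.println(conn.getResponseCode()); // e.g. 200 on success
    conn.disconnect();
  }
}
{code}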

 Support get/add/remove/change labels in RM REST API
 ---

 Key: YARN-2505
 URL: https://issues.apache.org/jira/browse/YARN-2505
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Craig Welch
 Attachments: YARN-2505.1.patch, YARN-2505.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2701) Potential race condition in startLocalizer when using LinuxContainerExecutor

2014-10-16 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-2701:
--
Priority: Blocker  (was: Major)
Target Version/s: 2.6.0

Wow, this wasn't the case before. Marking this regression as a blocker.

I just traced it down to YARN-2161 - we need to look at the patch again.

 Potential race condition in startLocalizer when using LinuxContainerExecutor  
 --

 Key: YARN-2701
 URL: https://issues.apache.org/jira/browse/YARN-2701
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
Priority: Blocker
 Attachments: YARN-2701.1.patch


 When LinuxContainerExecutor does startLocalizer, we use the native code in 
 container-executor.c:
 {code}
  if (stat(npath, &sb) != 0) {
    if (mkdir(npath, perm) != 0) {
 {code}
 We use a check-then-create approach to create the appDir under /usercache, 
 but if two containers try to do this at the same time, a race condition may 
 happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2701) Potential race condition in startLocalizer when using LinuxContainerExecutor

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174544#comment-14174544
 ] 

Hadoop QA commented on YARN-2701:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675390/YARN-2701.1.patch
  against trunk revision b0d6ac9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5425//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5425//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5425//console

This message is automatically generated.

 Potential race condition in startLocalizer when using LinuxContainerExecutor  
 --

 Key: YARN-2701
 URL: https://issues.apache.org/jira/browse/YARN-2701
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong
Priority: Blocker
 Attachments: YARN-2701.1.patch


 When LinuxContainerExecutor does startLocalizer, we use the native code in 
 container-executor.c:
 {code}
  if (stat(npath, &sb) != 0) {
    if (mkdir(npath, perm) != 0) {
 {code}
 We use a check-then-create approach to create the appDir under /usercache, 
 but if two containers try to do this at the same time, a race condition may 
 happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2505) Support get/add/remove/change labels in RM REST API

2014-10-16 Thread Sumit Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174548#comment-14174548
 ] 

Sumit Kumar commented on YARN-2505:
---

Looks like there is concern about increasing scope by introducing the notion 
of a label type; if so, let me know if I should open up a new JIRA and add it 
as a child of YARN-1963.

Re: the new API proposal by [~cwelch], I think there should be an operation to 
put a single label on multiple nodes. Let's say I want to hotlist certain 
nodes for maintenance; I would want them all to be labelled together. 
[~cwelch], what do you think?

 Support get/add/remove/change labels in RM REST API
 ---

 Key: YARN-2505
 URL: https://issues.apache.org/jira/browse/YARN-2505
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Craig Welch
 Attachments: YARN-2505.1.patch, YARN-2505.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2683) document registry config options

2014-10-16 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174552#comment-14174552
 ] 

Gour Saha commented on YARN-2683:
-

Steve, here are a few comments on the documentation:

Section: *Setting the Zookeeper Registry Base path: hadoop.registry.zk.root*
- In this section, it would be good to see {{/registry}} inside a code block. 
Also, lines 2 and 3 could be merged into a single simpler line like: The 
default value of /registry is normally sufficient.

Section: *Identifying the system accounts hadoop.registry.system.acls*
- Point 5: I think you might have meant To aid portability {color:red}of 
this{color} setting
- In this section, the 2 code blocks that provide sample values of 
{{hadoop.registry.system.acls}} and {{hadoop.registry.kerberos.realm}} should 
be wrapped inside {{<property>}} elements, just like all the other similar 
snippets. The opening {{<description>}} elements are missing too.

It might be helpful to give a sample path that will get created under the root 
in the ZK registry (for a sample application). For example, for Slider we have 
{{/registry/users/yarn/services/org-apache-slider/cl1}}.
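
For instance, once wrapped, the acls snippet could look like the following 
(values here are illustrative, not the documented defaults):
{code}
<property>
  <description>
    ACLs for the accounts that are given full access to the registry tree.
  </description>
  <name>hadoop.registry.system.acls</name>
  <value>sasl:yarn@, sasl:mapred@, sasl:hdfs@</value>
</property>
{code}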

 document registry config options
 

 Key: YARN-2683
 URL: https://issues.apache.org/jira/browse/YARN-2683
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2683-001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Add to {{yarn-site}} a page on registry configuration parameters



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2504:
-
Attachment: YARN-2504-20141016-1.patch

Hi [~vinodkv],
Thanks for reviewing this.
bq. In the documentation of -directlyAccessNodeLabelStore, say that today it 
only works if you are logged into the machine where RM is running.
Actually, it should also be fine if the node label store is in HDFS and the 
command does not run on the same machine as the RM; I added an explanation to 
the option's documentation.

All other comments are addressed.
New patch attached,

Wangda

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2423) TimelineClient should wrap all GET APIs to facilitate Java users

2014-10-16 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-2423:

Attachment: YARN-2423.patch

The new patch fixes the javadoc warnings and TestMemoryTimelineStore.  

However, the fixes for TestMemoryTimelineStore broke the 
TestLeveldbTimelineStore tests because the bug that I fixed in the 
MemoryTimelineStore seems to also exist in the LeveldbTimelineStore, but I'm 
not sure how to fix it there.  The bug is that if you query the store for an 
entity, the relatedEntities are always empty.  This was easy enough to fix in 
the MemoryTimelineStore, but in the LeveldbTimelineStore, I was only able to 
partially fix it after some guessing (relatedEntities are returned, but not if 
you're using a primaryFilter).
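
For reference, the failure mode can be captured by a check along these lines 
(a sketch: the store setup is elided and the exact arguments are assumptions, 
though TimelineReader#getEntity and Field.RELATED_ENTITIES are the real reader 
API):
{code}
// After putting an entity with related entities, read it back. With the
// LevelDB bug described above, the related-entities map comes back empty.
TimelineEntity entity = store.getEntity("entity1", "type1",
    EnumSet.of(TimelineReader.Field.RELATED_ENTITIES));
assertFalse(entity.getRelatedEntities().isEmpty()); // fails with the bug
{code}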

 TimelineClient should wrap all GET APIs to facilitate Java users
 

 Key: YARN-2423
 URL: https://issues.apache.org/jira/browse/YARN-2423
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Robert Kanter
 Attachments: YARN-2423.patch, YARN-2423.patch


 TimelineClient provides the Java method to put timeline entities. It would 
 also be good to wrap all GET APIs (both entity and domain) and deserialize 
 the JSON response into Java POJO objects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2423) TimelineClient should wrap all GET APIs to facilitate Java users

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174562#comment-14174562
 ] 

Hadoop QA commented on YARN-2423:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675404/YARN-2423.patch
  against trunk revision b0d6ac9.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5426//console

This message is automatically generated.

 TimelineClient should wrap all GET APIs to facilitate Java users
 

 Key: YARN-2423
 URL: https://issues.apache.org/jira/browse/YARN-2423
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Robert Kanter
 Attachments: YARN-2423.patch, YARN-2423.patch


 TimelineClient provides the Java method to put timeline entities. It would 
 also be good to wrap all GET APIs (both entity and domain) and deserialize 
 the JSON response into Java POJO objects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM fail over

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174565#comment-14174565
 ] 

Hadoop QA commented on YARN-1879:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675357/YARN-1879.28.patch
  against trunk revision 233d446.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.client.TestResourceTrackerOnHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5423//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5423//console

This message is automatically generated.

 Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol for RM 
 fail over
 

 Key: YARN-1879
 URL: https://issues.apache.org/jira/browse/YARN-1879
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Jian He
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
 YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
 YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
 YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
 YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
 YARN-1879.21.patch, YARN-1879.22.patch, YARN-1879.23.patch, 
 YARN-1879.23.patch, YARN-1879.24.patch, YARN-1879.25.patch, 
 YARN-1879.26.patch, YARN-1879.27.patch, YARN-1879.28.patch, 
 YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, YARN-1879.6.patch, 
 YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174569#comment-14174569
 ] 

Jian He commented on YARN-2682:
---

looks good , +1

 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor won't use getFirstApplicationDir any more. But we 
 can't delete getFirstApplicationDir in DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174583#comment-14174583
 ] 

Hadoop QA commented on YARN-2504:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675405/YARN-2504-20141016-1.patch
  against trunk revision b0d6ac9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5427//console

This message is automatically generated.

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2423) TimelineClient should wrap all GET APIs to facilitate Java users

2014-10-16 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-2423:

Attachment: (was: YARN-2423.patch)

 TimelineClient should wrap all GET APIs to facilitate Java users
 

 Key: YARN-2423
 URL: https://issues.apache.org/jira/browse/YARN-2423
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Robert Kanter
 Attachments: YARN-2423.patch, YARN-2423.patch


 TimelineClient provides the Java method to put timeline entities. It would 
 also be good to wrap all GET APIs (both entity and domain) and deserialize 
 the JSON response into Java POJO objects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2423) TimelineClient should wrap all GET APIs to facilitate Java users

2014-10-16 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-2423:

Attachment: YARN-2423.patch

Oops, I generated the patch backwards.  New patch is correct.

 TimelineClient should wrap all GET APIs to facilitate Java users
 

 Key: YARN-2423
 URL: https://issues.apache.org/jira/browse/YARN-2423
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Robert Kanter
 Attachments: YARN-2423.patch, YARN-2423.patch


 TimelineClient provides the Java method to put timeline entities. It would 
 also be good to wrap all GET APIs (both entity and domain) and deserialize 
 the JSON response into Java POJO objects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2495) Allow admin specify labels in each NM (Distributed configuration)

2014-10-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174584#comment-14174584
 ] 

Allen Wittenauer commented on YARN-2495:


bq. I understood the use case, but what I did not understand is how it would 
restrict/deter a user, as he can do one more update; one more label to the 
central valid label list, like java version or jdk version etc. As anyway a 
script will be written/updated to get a specific set of labels, I feel in most 
cases the admin can know what labels will be coming in the cluster. Any other 
use case where it will be difficult for the admin to list the labels 
beforehand?

I don't think you understand the use case at all.  In fact, it's clear you need 
to re-read the sample script.  It does *not* get updated with every new JDK.  
It's smart enough to update the label regardless of the JDK that is 
installed... which means the *only* friction point for operations is going to 
be updating this 'valid label list' on the RM.

 Allow admin specify labels in each NM (Distributed configuration)
 -

 Key: YARN-2495
 URL: https://issues.apache.org/jira/browse/YARN-2495
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Naganarasimha G R

 The target of this JIRA is to allow admins to specify labels on each NM; this covers:
 - Users can set labels on each NM (by setting yarn-site.xml or using the 
 script suggested by [~aw])
 - The NM will send labels to the RM via the ResourceTracker API
 - The RM will set labels in the NodeLabelManager when the NM registers/updates labels



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174591#comment-14174591
 ] 

Hadoop QA commented on YARN-2699:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675388/YARN-2699-20141016-3.patch
  against trunk revision 233d446.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5424//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5424//console

This message is automatically generated.

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch, YARN-2699-20141016-2.patch, 
 YARN-2699-20141016-3.patch


 Because of the changes in YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA fails because it registers a 
 node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174590#comment-14174590
 ] 

Hudson commented on YARN-2682:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6276 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6276/])
YARN-2682. Updated WindowsSecureContainerExecutor to not use 
DefaultContainerExecutor#getFirstApplicationDir and use getWorkingDir() 
instead. Contributed by Zhihai Xu (jianhe: rev 
0fd0ebae645e671699f6a6a56a012ebe6dfb5b2a)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java


 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor won't use getFirstApplicationDir any more. But we 
 can't delete getFirstApplicationDir in DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174592#comment-14174592
 ] 

Jian He commented on YARN-2682:
---

Committed to trunk and branch-2, but branch-2.6 has conflicts. [~zxu], could 
you provide a patch for branch-2.6? Thanks.

 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor won't use getFirstApplicationDir any more. But we 
 can't delete getFirstApplicationDir in DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2504:
-
Attachment: YARN-2504-20141016-2.patch

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504-20141016-2.patch, YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2699:
-
Attachment: YARN-2699-20141016-4.patch

It seems the test run crashed without any error message in the log; 
resubmitting the same patch to kick off another Jenkins run.

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch, YARN-2699-20141016-2.patch, 
 YARN-2699-20141016-3.patch, YARN-2699-20141016-4.patch


 Because of changes from YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA will fail since it 
 registers a node manager with port = 0.
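
A hedged sketch of the kind of test fix implied (the wrapper class name and 
the host/port values are illustrative; the YARN record types are real):
{code}
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerRequest;
import org.apache.hadoop.yarn.util.Records;

public class RegisterWithNonZeroPort {
  public static RegisterNodeManagerRequest buildRequest() {
    // After YARN-2500/YARN-2496/YARN-2494 the RM rejects node managers
    // that register with port 0, so pass an explicit nonzero port.
    RegisterNodeManagerRequest request =
        Records.newRecord(RegisterNodeManagerRequest.class);
    request.setNodeId(NodeId.newInstance("localhost", 1234));
    return request;
  }
}
{code}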



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-2682:
--
Target Version/s: 2.7.0  (was: 2.6.0)

 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor no longer uses getFirstApplicationDir, but we 
 can't delete getFirstApplicationDir from DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174616#comment-14174616
 ] 

Jian He commented on YARN-2682:
---

never mind, it conflicts with YARN-1972, which is not committed in 2.6

 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor no longer uses getFirstApplicationDir, but we 
 can't delete getFirstApplicationDir from DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2423) TimelineClient should wrap all GET APIs to facilitate Java users

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174632#comment-14174632
 ] 

Hadoop QA commented on YARN-2423:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675408/YARN-2423.patch
  against trunk revision 0fd0eba.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice:

  
org.apache.hadoop.yarn.server.timeline.TestLeveldbTimelineStore

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5428//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5428//console

This message is automatically generated.

 TimelineClient should wrap all GET APIs to facilitate Java users
 

 Key: YARN-2423
 URL: https://issues.apache.org/jira/browse/YARN-2423
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Robert Kanter
 Attachments: YARN-2423.patch, YARN-2423.patch


 TimelineClient provides Java methods to put timeline entities. It would 
 also be good to wrap all GET APIs (both entity and domain) and deserialize 
 the JSON responses into Java POJOs.
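
A rough sketch of the kind of GET wrapper being proposed (the class, method 
name, and base address are hypothetical; assumes a Jersey 1.x client with a 
JSON provider registered):
{code}
import javax.ws.rs.core.MediaType;
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.WebResource;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;

public class TimelineGetSketch {
  private final Client client = Client.create();
  // Assumed default timeline web address; configurable in practice.
  private final String base = "http://localhost:8188/ws/v1/timeline";

  // Hypothetical wrapper: fetch one entity and let the client deserialize
  // the JSON response into the TimelineEntity POJO.
  public TimelineEntity getEntity(String entityType, String entityId) {
    WebResource resource =
        client.resource(base).path(entityType).path(entityId);
    return resource.accept(MediaType.APPLICATION_JSON)
        .get(TimelineEntity.class);
  }
}
{code}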



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174639#comment-14174639
 ] 

zhihai xu commented on YARN-2682:
-

Thanks [~jianhe] for reviewing and committing the patch, and thanks 
[~rusanu] for the review.

 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor no longer uses getFirstApplicationDir, but we 
 can't delete getFirstApplicationDir from DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2682) WindowsSecureContainerExecutor should not depend on DefaultContainerExecutor#getFirstApplicationDir.

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174646#comment-14174646
 ] 

Hudson commented on YARN-2682:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6277 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6277/])
Moved YARN-2682 from 2.6 to 2.7 in CHANGES.txt (jianhe: rev 
72093fd8cb9865a27a96163f31d03d6813ce267f)
* hadoop-yarn-project/CHANGES.txt


 WindowsSecureContainerExecutor should not depend on 
 DefaultContainerExecutor#getFirstApplicationDir. 
 -

 Key: YARN-2682
 URL: https://issues.apache.org/jira/browse/YARN-2682
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Fix For: 2.7.0

 Attachments: YARN-2682.000.patch, YARN-2682.001.patch


 DefaultContainerExecutor no longer uses getFirstApplicationDir, but we 
 can't delete getFirstApplicationDir from DefaultContainerExecutor because 
 WindowsSecureContainerExecutor uses it.
 We should move the getFirstApplicationDir function from 
 DefaultContainerExecutor to WindowsSecureContainerExecutor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2683) document registry config options

2014-10-16 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174648#comment-14174648
 ] 

Josh Elser commented on YARN-2683:
--

I made some grammar/punctuation changes and clarified the ZooKeeper paths; 
[~ste...@apache.org] has [pulled the changes into his feature 
branch|https://github.com/steveloughran/hadoop-trunk/pull/4].

 document registry config options
 

 Key: YARN-2683
 URL: https://issues.apache.org/jira/browse/YARN-2683
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, resourcemanager
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2683-001.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 Add to {{yarn-site}} a page on registry configuration parameters



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2588) Standby RM does not transitionToActive if previous transitionToActive is failed with ZK exception.

2014-10-16 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174673#comment-14174673
 ] 

Rohith commented on YARN-2588:
--

In RMWebApp, we have the code below, where the ApplicationACLsManager and 
QueueACLsManager (part of the ActiveServices instance) are bound to 
RMWebApp. IIUC, these are not needed for starting in standby mode. I would 
like to know the reason why these two active-service instances are bound to 
RMWebApp.
{code}
if (rm != null) {
  bind(ResourceManager.class).toInstance(rm);
  bind(RMContext.class).toInstance(rm.getRMContext());
  bind(ApplicationACLsManager.class).toInstance(
      rm.getApplicationACLsManager());
  bind(QueueACLsManager.class).toInstance(rm.getQueueACLsManager());
}
{code}

 Standby RM does not transitionToActive if previous transitionToActive is 
 failed with ZK exception.
 --

 Key: YARN-2588
 URL: https://issues.apache.org/jira/browse/YARN-2588
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.6.0, 2.5.1
Reporter: Rohith
Assignee: Rohith
 Attachments: YARN-2588.1.patch, YARN-2588.patch


 Consider a scenario where the standby RM fails to transition to Active 
 because of a ZK exception (ConnectionLoss or SessionExpired). Any further 
 transition to Active for the same RM then does not move the RM to the 
 Active state.
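
To make the failure mode concrete, a minimal self-contained sketch (the 
class and method names are hypothetical, not the actual RM code):
{code}
enum HAState { STANDBY, ACTIVE }

class HaServiceSketch {
  private HAState state = HAState.STANDBY;

  void transitionToActive() throws Exception {
    state = HAState.ACTIVE;       // state flips before services start
    try {
      startActiveServices();      // may throw, e.g. on a ZK error
    } catch (Exception e) {
      state = HAState.STANDBY;    // the kind of fix being discussed: roll
      throw e;                    // back on failure so a retry can succeed
    }
  }

  private void startActiveServices() throws Exception { /* ZK work here */ }
}
{code}
Without the rollback in the catch block, the RM believes it is already 
Active, so subsequent transitionToActive calls become no-ops.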



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174677#comment-14174677
 ] 

Hadoop QA commented on YARN-2504:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675409/YARN-2504-20141016-2.patch
  against trunk revision 0fd0eba.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5429//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5429//console

This message is automatically generated.

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504-20141016-2.patch, YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2699) Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174678#comment-14174678
 ] 

Hadoop QA commented on YARN-2699:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675411/YARN-2699-20141016-4.patch
  against trunk revision 0fd0eba.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5430//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5430//console

This message is automatically generated.

 Fix test timeout in TestResourceTrackerOnHA#testResourceTrackerOnHA
 ---

 Key: YARN-2699
 URL: https://issues.apache.org/jira/browse/YARN-2699
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Reporter: Wangda Tan
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2699-20141016-1.patch, YARN-2699-20141016-2.patch, 
 YARN-2699-20141016-3.patch, YARN-2699-20141016-4.patch


 Because of changes from YARN-2500/YARN-2496/YARN-2494, registering a node 
 manager with port=0 is no longer allowed. 
 TestResourceTrackerOnHA#testResourceTrackerOnHA will fail since it 
 registers a node manager with port = 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2504:
-
Attachment: YARN-2504-20141016-3.patch

Submitting the same patch to kick off another Jenkins run.

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504-20141016-2.patch, YARN-2504-20141016-3.patch, YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174688#comment-14174688
 ] 

Wangda Tan commented on YARN-2504:
--

I realized the test failure above will be fixed by YARN-2699.

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504-20141016-2.patch, YARN-2504-20141016-3.patch, YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2504) Support get/add/remove/change labels in RM admin CLI

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174718#comment-14174718
 ] 

Hadoop QA commented on YARN-2504:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675418/YARN-2504-20141016-3.patch
  against trunk revision 72093fd.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.client.TestResourceTrackerOnHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5431//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5431//console

This message is automatically generated.

 Support get/add/remove/change labels in RM admin CLI 
 -

 Key: YARN-2504
 URL: https://issues.apache.org/jira/browse/YARN-2504
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2504-20141015-1.patch, YARN-2504-20141016-1.patch, 
 YARN-2504-20141016-2.patch, YARN-2504-20141016-3.patch, YARN-2504.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2588) Standby RM does not transitionToActive if previous transitionToActive is failed with ZK exception.

2014-10-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174726#comment-14174726
 ] 

Jian He commented on YARN-2588:
---

I traced the code: ApplicationACLsManager/QueueACLsManager were earlier used 
by RMWebServices for injection. This binding no longer seems necessary. But 
anyway, this doesn't matter too much, as these are just two classes and they 
do not extend Service.

Another minor comment on the patch: add {{Assert.fail()}} after 
{{rm.adminService.transitionToActive(requestInfo);}}
{code}
try {
+  rm.adminService.transitionToActive(requestInfo);
+} catch (Exception e) {
+  assertTrue("Error when transitioning to Active mode".contains(e
+      .getMessage()));
+}
{code}
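
For clarity, a sketch of the test with the suggested {{Assert.fail()}} added 
(the failure message is illustrative):
{code}
try {
  rm.adminService.transitionToActive(requestInfo);
  Assert.fail("transitionToActive should have thrown an exception");
} catch (Exception e) {
  assertTrue("Error when transitioning to Active mode".contains(e
      .getMessage()));
}
{code}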

 Standby RM does not transitionToActive if previous transitionToActive is 
 failed with ZK exception.
 --

 Key: YARN-2588
 URL: https://issues.apache.org/jira/browse/YARN-2588
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.6.0, 2.5.1
Reporter: Rohith
Assignee: Rohith
 Attachments: YARN-2588.1.patch, YARN-2588.patch


 Consider a scenario where the standby RM fails to transition to Active 
 because of a ZK exception (ConnectionLoss or SessionExpired). Any further 
 transition to Active for the same RM then does not move the RM to the 
 Active state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2588) Standby RM does not transitionToActive if previous transitionToActive is failed with ZK exception.

2014-10-16 Thread Rohith (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith updated YARN-2588:
-
Attachment: YARN-2588.2.patch

 Standby RM does not transitionToActive if previous transitionToActive is 
 failed with ZK exception.
 --

 Key: YARN-2588
 URL: https://issues.apache.org/jira/browse/YARN-2588
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.6.0, 2.5.1
Reporter: Rohith
Assignee: Rohith
 Attachments: YARN-2588.1.patch, YARN-2588.2.patch, YARN-2588.patch


 Consider a scenario where the standby RM fails to transition to Active 
 because of a ZK exception (ConnectionLoss or SessionExpired). Any further 
 transition to Active for the same RM then does not move the RM to the 
 Active state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2588) Standby RM does not transitionToActive if previous transitionToActive is failed with ZK exception.

2014-10-16 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174748#comment-14174748
 ] 

Rohith commented on YARN-2588:
--

bq. But anyway, this doesn't matter too much, as these are just two classes 
and they do not extend Service.
Yes, it does not matter.

 Standby RM does not transitionToActive if previous transitionToActive is 
 failed with ZK exception.
 --

 Key: YARN-2588
 URL: https://issues.apache.org/jira/browse/YARN-2588
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.6.0, 2.5.1
Reporter: Rohith
Assignee: Rohith
 Attachments: YARN-2588.1.patch, YARN-2588.2.patch, YARN-2588.patch


 Consider a scenario where the standby RM fails to transition to Active 
 because of a ZK exception (ConnectionLoss or SessionExpired). Any further 
 transition to Active for the same RM then does not move the RM to the 
 Active state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2588) Standby RM does not transitionToActive if previous transitionToActive is failed with ZK exception.

2014-10-16 Thread Rohith (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174749#comment-14174749
 ] 

Rohith commented on YARN-2588:
--

bq. add Assert.fail() after rm.adminService.transitionToActive(requestInfo);
Done.

I updated the patch. Please review.

 Standby RM does not transitionToActive if previous transitionToActive is 
 failed with ZK exception.
 --

 Key: YARN-2588
 URL: https://issues.apache.org/jira/browse/YARN-2588
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0, 2.6.0, 2.5.1
Reporter: Rohith
Assignee: Rohith
 Attachments: YARN-2588.1.patch, YARN-2588.2.patch, YARN-2588.patch


 Consider a scenario where the standby RM fails to transition to Active 
 because of a ZK exception (ConnectionLoss or SessionExpired). Any further 
 transition to Active for the same RM then does not move the RM to the 
 Active state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2673) Add retry for timeline client put APIs

2014-10-16 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-2673:
--
Summary: Add retry for timeline client put APIs  (was: Add retry for 
timeline client)

 Add retry for timeline client put APIs
 --

 Key: YARN-2673
 URL: https://issues.apache.org/jira/browse/YARN-2673
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu
 Attachments: YARN-2673-101414-1.patch, YARN-2673-101414-2.patch, 
 YARN-2673-101414.patch


 The timeline client currently does not handle the case gracefully when the 
 server is down. Jobs from distributed shell may fail due to an ATS restart. 
 We may need to add some retry mechanisms to the client. 
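
A minimal sketch of the kind of retry mechanism being suggested (the retry 
count, interval, and helper names are all hypothetical):
{code}
import java.io.IOException;

interface TimelinePut {
  void run() throws IOException;
}

final class RetryingPutSketch {
  // Retry a put a bounded number of times with a flat backoff so that a
  // brief ATS restart does not fail the job.
  static void putWithRetries(TimelinePut put, int maxRetries,
      long retryIntervalMs) throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        put.run();
        return;                 // success
      } catch (IOException e) {
        last = e;               // server likely down; wait and retry
        Thread.sleep(retryIntervalMs);
      }
    }
    throw last;                 // retries exhausted
  }
}
{code}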



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)