[jira] [Commented] (YARN-2315) Should use setCurrentCapacity instead of setCapacity to configure used resource capacity for FairScheduler.

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164828#comment-14164828
 ] 

Hadoop QA commented on YARN-2315:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673834/YARN-2315.002.patch
  against trunk revision 2a51494.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warning.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5343//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5343//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5343//console

This message is automatically generated.

 Should use setCurrentCapacity instead of setCapacity to configure used 
 resource capacity for FairScheduler.
 ---

 Key: YARN-2315
 URL: https://issues.apache.org/jira/browse/YARN-2315
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: zhihai xu
Assignee: zhihai xu
 Attachments: YARN-2315.001.patch, YARN-2315.002.patch, YARN-2315.patch


 Should use setCurrentCapacity instead of setCapacity to configure used 
 resource capacity for FairScheduler.
 In the getQueueInfo method of FSQueue.java, we call setCapacity twice with 
 different parameters, so the first call is overridden by the second call:
 {code}
 queueInfo.setCapacity((float) getFairShare().getMemory() /
     scheduler.getClusterResource().getMemory());
 queueInfo.setCapacity((float) getResourceUsage().getMemory() /
     scheduler.getClusterResource().getMemory());
 {code}
 We should change the second setCapacity call to setCurrentCapacity to 
 report the currently used capacity.
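 The effect of the proposed change can be sketched as a before/after. The 
 QueueInfo class below is a minimal stand-in for illustration, not the real 
 YARN bean:
 {code}
// Minimal stand-in for the YARN QueueInfo bean, for illustration only.
class QueueInfo {
    private float capacity;         // configured (fair share) capacity
    private float currentCapacity;  // currently used capacity

    void setCapacity(float c) { this.capacity = c; }
    void setCurrentCapacity(float c) { this.currentCapacity = c; }
    float getCapacity() { return capacity; }
    float getCurrentCapacity() { return currentCapacity; }
}

class FSQueueSketch {
    // Before the fix: the second setCapacity call silently overwrites the first,
    // so the fair-share capacity is lost.
    static QueueInfo buggyGetQueueInfo(long fairShareMem, long usedMem, long clusterMem) {
        QueueInfo info = new QueueInfo();
        info.setCapacity((float) fairShareMem / clusterMem);
        info.setCapacity((float) usedMem / clusterMem);  // overwrites fair share!
        return info;
    }

    // After the fix: used capacity goes into its own field.
    static QueueInfo fixedGetQueueInfo(long fairShareMem, long usedMem, long clusterMem) {
        QueueInfo info = new QueueInfo();
        info.setCapacity((float) fairShareMem / clusterMem);
        info.setCurrentCapacity((float) usedMem / clusterMem);
        return info;
    }
}
 {code}
 With a 8192 MB cluster, a 4096 MB fair share, and 1024 MB used, the buggy 
 version reports capacity 0.125 and loses the 0.5 fair share; the fixed 
 version reports both.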



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2656) RM web services authentication filter should add support for proxy user

2014-10-09 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164866#comment-14164866
 ] 

Zhijie Shen commented on YARN-2656:
---

I did some investigation on the test failure. It doesn't appear to be a problem 
with the test case, but rather an issue in DelegationTokenManager, which lets 
users set an external tokenManager but assumes that every token is an 
o.a.h.security.token.delegation.DelegationTokenIdentifier. The RM sends an 
RMDelegationTokenIdentifier and serializes the token one way, while 
DelegationTokenManager deserializes the received token another way, 
resulting in the bug. I will file a ticket for this issue.
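The class of bug described above can be illustrated generically. This is a 
hypothetical sketch, not the actual Hadoop token code: the sender's identifier 
writes an extra field, but the receiver deserializes assuming the base layout, 
so the fields are misread.
{code}
import java.io.*;

// Hypothetical base identifier with one field.
class BaseIdentifier {
    String owner;
    void write(DataOutput out) throws IOException { out.writeUTF(owner); }
    void read(DataInput in) throws IOException { owner = in.readUTF(); }
}

// Hypothetical subclass that prepends an extra field to the wire format.
// A receiver that only knows BaseIdentifier will misinterpret the bytes.
class ExtendedIdentifier extends BaseIdentifier {
    int sequenceNumber;  // extra field written BEFORE the base fields
    @Override
    void write(DataOutput out) throws IOException {
        out.writeInt(sequenceNumber);
        super.write(out);
    }
}
{code}
Deserializing an ExtendedIdentifier's bytes through BaseIdentifier.read does 
not fail loudly; it just yields a wrong owner, which is the kind of silent 
mismatch that surfaces as a confusing test failure.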

 RM web services authentication filter should add support for proxy user
 ---

 Key: YARN-2656
 URL: https://issues.apache.org/jira/browse/YARN-2656
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2656.0.patch, apache-yarn-2656.1.patch, 
 apache-yarn-2656.2.patch


 The DelegationTokenAuthenticationFilter adds support for doAs functionality. 
 The RMAuthenticationFilter should expose this as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2667) Fix the release audit warning caused by hadoop-yarn-registry

2014-10-09 Thread Yi Liu (JIRA)
Yi Liu created YARN-2667:


 Summary: Fix the release audit warning caused by 
hadoop-yarn-registry
 Key: YARN-2667
 URL: https://issues.apache.org/jira/browse/YARN-2667
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yi Liu
Priority: Minor


? 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
Lines that start with ? in the release audit report indicate files that do 
not have an Apache license header.
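
A common way to silence a release-audit warning for a deliberately empty 
marker file is to exclude it in the apache-rat-plugin configuration. This is 
only a sketch of one possible fix; the approach actually taken in the patch 
may differ:

{code:xml}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- .keep is an empty placeholder file; it cannot carry a license header -->
      <exclude>src/main/resources/.keep</exclude>
    </excludes>
  </configuration>
</plugin>
{code}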



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2667) Fix the release audit warning caused by hadoop-yarn-registry

2014-10-09 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated YARN-2667:
-
Attachment: YARN-2667.001.patch

 Fix the release audit warning caused by hadoop-yarn-registry
 ---

 Key: YARN-2667
 URL: https://issues.apache.org/jira/browse/YARN-2667
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yi Liu
Priority: Minor
 Attachments: YARN-2667.001.patch


 ? 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
 Lines that start with ? in the release audit report indicate files that 
 do not have an Apache license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2667) Fix the release audit warning caused by hadoop-yarn-registry

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164952#comment-14164952
 ] 

Hadoop QA commented on YARN-2667:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673862/YARN-2667.001.patch
  against trunk revision 2a51494.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5344//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5344//console

This message is automatically generated.

 Fix the release audit warning caused by hadoop-yarn-registry
 ---

 Key: YARN-2667
 URL: https://issues.apache.org/jira/browse/YARN-2667
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yi Liu
Priority: Minor
 Attachments: YARN-2667.001.patch


 ? 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
 Lines that start with ? in the release audit report indicate files that 
 do not have an Apache license header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2598) GHS should show N/A instead of null for the inaccessible information

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165039#comment-14165039
 ] 

Hudson commented on YARN-2598:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #706 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/706/])
YARN-2598 GHS should show N/A instead of null for the inaccessible information  
(Zhijie Shen via mayank) (mayank: rev df3becf0800d24d1fe773651abb16d29f8bc3fdc)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java


 GHS should show N/A instead of null for the inaccessible information
 

 Key: YARN-2598
 URL: https://issues.apache.org/jira/browse/YARN-2598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2598.1.patch, YARN-2598.2.patch


 When the user doesn't have the access to an application, the app attempt 
 information is not visible to the user. ClientRMService will output N/A, but 
 GHS is showing null, which is not user-friendly.
 {code}
 14/09/24 22:07:20 INFO impl.TimelineClientImpl: Timeline service address: 
 http://nn.example.com:8188/ws/v1/timeline/
 14/09/24 22:07:20 INFO client.RMProxy: Connecting to ResourceManager at 
 nn.example.com/240.0.0.11:8050
 14/09/24 22:07:21 INFO client.AHSProxy: Connecting to Application History 
 server at nn.example.com/240.0.0.11:10200
 Application Report : 
   Application-Id : application_1411586934799_0001
   Application-Name : Sleep job
   Application-Type : MAPREDUCE
   User : hrt_qa
   Queue : default
   Start-Time : 1411586956012
   Finish-Time : 1411586989169
   Progress : 100%
   State : FINISHED
   Final-State : SUCCEEDED
   Tracking-URL : null
   RPC Port : -1
   AM Host : null
   Aggregate Resource Allocation : N/A
   Diagnostics : null
 {code}
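
 A typical defensive fix for the report above substitutes a placeholder 
 when a field is inaccessible. This is only a sketch of the idea, not the 
 actual patch:
 {code}
// Sketch: report "N/A" when a value is unavailable, instead of the literal null.
class ReportFormatter {
    static String orNA(Object value) {
        return value == null ? "N/A" : value.toString();
    }
}
 {code}
 Applying such a helper to Tracking-URL, AM Host, and Diagnostics would make 
 the GHS output consistent with what ClientRMService prints.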



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2649) Flaky test TestAMRMRPCNodeUpdates

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165040#comment-14165040
 ] 

Hudson commented on YARN-2649:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #706 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/706/])
YARN-2649. Fixed TestAMRMRPCNodeUpdates test failure. Contributed by Ming Ma 
(jianhe: rev e16e25ab1beac89c8d8be4e9f2a7fbefe81d35f3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* hadoop-yarn-project/CHANGES.txt


 Flaky test TestAMRMRPCNodeUpdates
 -

 Key: YARN-2649
 URL: https://issues.apache.org/jira/browse/YARN-2649
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.6.0

 Attachments: YARN-2649-2.patch, YARN-2649.patch


 Sometimes the test fails with the following error:
 testAMRMUnusableNodes(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates)
   Time elapsed: 41.73 sec  <<< FAILURE!
 junit.framework.AssertionFailedError: AppAttempt state is not correct 
 (timedout) expected:<ALLOCATED> but was:<SCHEDULED>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:82)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:382)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates.testAMRMUnusableNodes(TestAMRMRPCNodeUpdates.java:125)
 When this happens, SchedulerEventType.NODE_UPDATE was processed before 
 RMAppAttemptEvent.ATTEMPT_ADDED was processed. That is possible, given the 
 test only waits for RMAppState.ACCEPTED before having the NM send a heartbeat. 
 This can be reproduced using custom AsyncDispatcher with CountDownLatch. Here 
 is the log when this happens.
 {noformat}
 App State is : ACCEPTED
 2014-10-05 21:25:07,305 INFO  [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(670)) - 
 appattempt_1412569506932_0001_01 State change from NEW to SUBMITTED
 2014-10-05 21:25:07,305 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
  STATUS_UPDATE
 2014-10-05 21:25:07,305 DEBUG [AsyncDispatcher event handler] 
 rmnode.RMNodeImpl (RMNodeImpl.java:handle(384)) - Processing 127.0.0.1:1234 
 of type STATUS_UPDATE
 AppAttempt : appattempt_1412569506932_0001_01 State is : SUBMITTED 
 Waiting for state : ALLOCATED
 2014-10-05 21:25:07,306 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAttemptAddedSchedulerEvent.EventType:
  APP_ATTEMPT_ADDED
 2014-10-05 21:25:07,328 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
  NODE_UPDATE
 2014-10-05 21:25:07,330 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType:
  ATTEMPT_ADDED
 2014-10-05 21:25:07,331 DEBUG [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(658)) - Processing 
 event for appattempt_1412569506932_0001_000001 of type ATTEMPT_ADDED
 2014-10-05 21:25:07,333 INFO  [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(670)) - 
 appattempt_1412569506932_0001_01 State change from SUBMITTED to SCHEDULED
 {noformat}
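
 The log above shows NODE_UPDATE dispatched before ATTEMPT_ADDED. The general 
 remedy, waiting for the required state transition before firing the racing 
 event, can be sketched with a CountDownLatch. The names below are 
 hypothetical; this is not the actual MockRM code:
 {code}
import java.util.concurrent.CountDownLatch;

// Sketch: two events normally race on an async dispatcher thread. A latch
// makes the test wait until ATTEMPT_ADDED is handled before it simulates
// the NM heartbeat that triggers NODE_UPDATE.
class DispatchOrderSketch {
    static final StringBuilder log = new StringBuilder();
    static final CountDownLatch attemptAdded = new CountDownLatch(1);

    static void handleAttemptAdded() {
        log.append("ATTEMPT_ADDED;");
        attemptAdded.countDown();  // signal that the attempt is now scheduled
    }

    static void sendNodeHeartbeat() throws InterruptedException {
        attemptAdded.await();      // wait instead of racing the dispatcher
        log.append("NODE_UPDATE;");
    }
}
 {code}
 The countDown/await pair also establishes a happens-before edge, so the 
 ordering is visible across threads without extra synchronization.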



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2652) add hadoop-yarn-registry package under hadoop-yarn

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165038#comment-14165038
 ] 

Hudson commented on YARN-2652:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #706 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/706/])
YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under 
hadoop-yarn (stevel: rev 6a326711aa27e84fd4c53937afc5c41a746ec65a)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoRecordException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZKPathDumper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/tla/yarnregistry.tla
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryOperationsService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* hadoop-yarn-project/hadoop-yarn/pom.xml
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/DeleteCompletionCallback.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/binding/TestRegistryPathUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestRegistrySecurityHelper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/binding/TestMarshalling.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/CuratorEventCatcher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/PersistencePolicies.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/index.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/AddressTypes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRMRegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestMicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestCuratorService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/ProtocolTypes.java
* .gitignore
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/YarnRegistryAttributes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/RegistryIOException.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/resources/log4j.properties
* 

[jira] [Commented] (YARN-913) Umbrella: Add a way to register long-lived services in a YARN cluster

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165041#comment-14165041
 ] 

Hudson commented on YARN-913:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #706 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/706/])
YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under 
hadoop-yarn (stevel: rev 6a326711aa27e84fd4c53937afc5c41a746ec65a)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoRecordException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZKPathDumper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/tla/yarnregistry.tla
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryOperationsService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* hadoop-yarn-project/hadoop-yarn/pom.xml
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/DeleteCompletionCallback.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/binding/TestRegistryPathUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestRegistrySecurityHelper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/binding/TestMarshalling.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/CuratorEventCatcher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/PersistencePolicies.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/index.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/AddressTypes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRMRegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestMicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestCuratorService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/ProtocolTypes.java
* .gitignore
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/YarnRegistryAttributes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/RegistryIOException.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/resources/log4j.properties
* 

[jira] [Commented] (YARN-2652) add hadoop-yarn-registry package under hadoop-yarn

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165143#comment-14165143
 ] 

Hudson commented on YARN-2652:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1896 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1896/])
YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under 
hadoop-yarn (stevel: rev 6a326711aa27e84fd4c53937afc5c41a746ec65a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestCuratorService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/RegistryPathStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/RegistryAdminService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestYarnPolicySelector.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestRegistrySecurityHelper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoRecordException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/PersistencePolicies.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/package-info.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/resources/log4j.properties
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestMicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/RegistryTestHelper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/CuratorEventCatcher.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/AddingCompositeService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/BindingInformation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/YarnRegistryAttributes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/RMRegistryOperationsService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/InvalidPathnameException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryPathUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/KerberosConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryInternalConstants.java
* 

[jira] [Commented] (YARN-913) Umbrella: Add a way to register long-lived services in a YARN cluster

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165146#comment-14165146
 ] 

Hudson commented on YARN-913:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1896 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1896/])
YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under 
hadoop-yarn (stevel: rev 6a326711aa27e84fd4c53937afc5c41a746ec65a)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestCuratorService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/RegistryPathStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/RegistryAdminService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestYarnPolicySelector.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestRegistrySecurityHelper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoRecordException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/PersistencePolicies.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/package-info.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/resources/log4j.properties
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/TestMicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/RegistryTestHelper.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperationsFactory.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/CuratorEventCatcher.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/resources/.keep
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/AddingCompositeService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/BindingInformation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/yarn/YarnRegistryAttributes.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/RMRegistryOperationsService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistrySecurity.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/InvalidPathnameException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryPathUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/KerberosConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryInternalConstants.java
* 

[jira] [Commented] (YARN-2649) Flaky test TestAMRMRPCNodeUpdates

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165145#comment-14165145
 ] 

Hudson commented on YARN-2649:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1896 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1896/])
YARN-2649. Fixed TestAMRMRPCNodeUpdates test failure. Contributed by Ming Ma 
(jianhe: rev e16e25ab1beac89c8d8be4e9f2a7fbefe81d35f3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* hadoop-yarn-project/CHANGES.txt


 Flaky test TestAMRMRPCNodeUpdates
 -

 Key: YARN-2649
 URL: https://issues.apache.org/jira/browse/YARN-2649
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.6.0

 Attachments: YARN-2649-2.patch, YARN-2649.patch


 Sometimes the test fails with the following error:
 testAMRMUnusableNodes(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates)
   Time elapsed: 41.73 sec   FAILURE!
 junit.framework.AssertionFailedError: AppAttempt state is not correct 
 (timedout) expected:<ALLOCATED> but was:<SCHEDULED>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:82)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:382)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates.testAMRMUnusableNodes(TestAMRMRPCNodeUpdates.java:125)
 When this happens, SchedulerEventType.NODE_UPDATE was processed before 
 RMAppAttemptEvent.ATTEMPT_ADDED was processed. That is possible, given the 
 test only waits for RMAppState.ACCEPTED before having NM sending heartbeat. 
 This can be reproduced using custom AsyncDispatcher with CountDownLatch. Here 
 is the log when this happens.
 {noformat}
 App State is : ACCEPTED
 2014-10-05 21:25:07,305 INFO  [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(670)) - 
 appattempt_1412569506932_0001_000001 State change from NEW to SUBMITTED
 2014-10-05 21:25:07,305 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
  STATUS_UPDATE
 2014-10-05 21:25:07,305 DEBUG [AsyncDispatcher event handler] 
 rmnode.RMNodeImpl (RMNodeImpl.java:handle(384)) - Processing 127.0.0.1:1234 
 of type STATUS_UPDATE
 AppAttempt : appattempt_1412569506932_0001_000001 State is : SUBMITTED 
 Waiting for state : ALLOCATED
 2014-10-05 21:25:07,306 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAttemptAddedSchedulerEvent.EventType:
  APP_ATTEMPT_ADDED
 2014-10-05 21:25:07,328 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
  NODE_UPDATE
 2014-10-05 21:25:07,330 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType:
  ATTEMPT_ADDED
 2014-10-05 21:25:07,331 DEBUG [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(658)) - Processing 
 event for appattempt_1412569506932_0001_000001 of type ATTEMPT_ADDED
 2014-10-05 21:25:07,333 INFO  [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(670)) - 
 appattempt_1412569506932_0001_000001 State change from SUBMITTED to SCHEDULED
 {noformat}
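
The fix pattern this implies can be sketched outside YARN: before the mock NM starts heartbeating, the test must wait until the attempt itself has reached a scheduler-visible state, not merely until the app is ACCEPTED. Below is a minimal, self-contained illustration of the polling wait-for-state idiom (class and method names are hypothetical, not the actual MockAM/MockRM code):

```java
import java.util.concurrent.atomic.AtomicReference;

public class WaitForState {
    // Poll until the observed state equals the expected one, or time out.
    // MockAM.waitForState uses the same shape; this sketch is generic.
    static boolean waitForState(AtomicReference<String> state,
                                String expected,
                                long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!expected.equals(state.get())) {
            if (System.currentTimeMillis() > deadline) {
                return false; // caller asserts and fails the test
            }
            Thread.sleep(10); // short poll interval
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<String> attemptState = new AtomicReference<>("SUBMITTED");
        // Simulate the AsyncDispatcher delivering ATTEMPT_ADDED asynchronously.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            attemptState.set("SCHEDULED");
        }).start();
        // Only after this wait succeeds would the test send NM heartbeats,
        // closing the NODE_UPDATE vs. ATTEMPT_ADDED race described above.
        boolean reached = waitForState(attemptState, "SCHEDULED", 2000);
        System.out.println(reached ? "SCHEDULED" : "TIMEOUT");
    }
}
```

Waiting on the dependent event directly, rather than on an earlier state, removes the ordering assumption the flaky test relied on.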



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2598) GHS should show N/A instead of null for the inaccessible information

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165144#comment-14165144
 ] 

Hudson commented on YARN-2598:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1896 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1896/])
YARN-2598 GHS should show N/A instead of null for the inaccessible information  
(Zhijie Shen via mayank) (mayank: rev df3becf0800d24d1fe773651abb16d29f8bc3fdc)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java


 GHS should show N/A instead of null for the inaccessible information
 

 Key: YARN-2598
 URL: https://issues.apache.org/jira/browse/YARN-2598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2598.1.patch, YARN-2598.2.patch


 When the user doesn't have the access to an application, the app attempt 
 information is not visible to the user. ClientRMService will output N/A, but 
 GHS is showing null, which is not user-friendly.
 {code}
 14/09/24 22:07:20 INFO impl.TimelineClientImpl: Timeline service address: 
 http://nn.example.com:8188/ws/v1/timeline/
 14/09/24 22:07:20 INFO client.RMProxy: Connecting to ResourceManager at 
 nn.example.com/240.0.0.11:8050
 14/09/24 22:07:21 INFO client.AHSProxy: Connecting to Application History 
 server at nn.example.com/240.0.0.11:10200
 Application Report : 
   Application-Id : application_1411586934799_0001
   Application-Name : Sleep job
   Application-Type : MAPREDUCE
   User : hrt_qa
   Queue : default
   Start-Time : 1411586956012
   Finish-Time : 1411586989169
   Progress : 100%
   State : FINISHED
   Final-State : SUCCEEDED
   Tracking-URL : null
   RPC Port : -1
   AM Host : null
   Aggregate Resource Allocation : N/A
   Diagnostics : null
 {code}
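
The fix boils down to a null guard when rendering report fields the caller is not allowed to see. A hypothetical sketch of that pattern (not the actual ApplicationHistoryManagerOnTimelineStore code):

```java
public class ReportFormatter {
    static final String UNAVAILABLE = "N/A";

    // Render a report field, substituting "N/A" when the value is
    // inaccessible (null) rather than printing the literal "null".
    static String orNA(Object value) {
        return value == null ? UNAVAILABLE : value.toString();
    }

    public static void main(String[] args) {
        String trackingUrl = null; // hidden from this user by the ACL check
        String amHost = null;
        System.out.println("Tracking-URL : " + orNA(trackingUrl));
        System.out.println("AM Host : " + orNA(amHost));
    }
}
```

This matches the ClientRMService behavior the report cites, so CLI output is consistent whichever service answers.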



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2649) Flaky test TestAMRMRPCNodeUpdates

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165236#comment-14165236
 ] 

Hudson commented on YARN-2649:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1921 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1921/])
YARN-2649. Fixed TestAMRMRPCNodeUpdates test failure. Contributed by Ming Ma 
(jianhe: rev e16e25ab1beac89c8d8be4e9f2a7fbefe81d35f3)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java


 Flaky test TestAMRMRPCNodeUpdates
 -

 Key: YARN-2649
 URL: https://issues.apache.org/jira/browse/YARN-2649
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.6.0

 Attachments: YARN-2649-2.patch, YARN-2649.patch


 Sometimes the test fails with the following error:
 testAMRMUnusableNodes(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates)
   Time elapsed: 41.73 sec   FAILURE!
 junit.framework.AssertionFailedError: AppAttempt state is not correct 
 (timedout) expected:<ALLOCATED> but was:<SCHEDULED>
   at junit.framework.Assert.fail(Assert.java:50)
   at junit.framework.Assert.failNotEquals(Assert.java:287)
   at junit.framework.Assert.assertEquals(Assert.java:67)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:82)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:382)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCNodeUpdates.testAMRMUnusableNodes(TestAMRMRPCNodeUpdates.java:125)
 When this happens, SchedulerEventType.NODE_UPDATE was processed before 
 RMAppAttemptEvent.ATTEMPT_ADDED was processed. That is possible, given the 
 test only waits for RMAppState.ACCEPTED before having NM sending heartbeat. 
 This can be reproduced using custom AsyncDispatcher with CountDownLatch. Here 
 is the log when this happens.
 {noformat}
 App State is : ACCEPTED
 2014-10-05 21:25:07,305 INFO  [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(670)) - 
 appattempt_1412569506932_0001_000001 State change from NEW to SUBMITTED
 2014-10-05 21:25:07,305 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeStatusEvent.EventType:
  STATUS_UPDATE
 2014-10-05 21:25:07,305 DEBUG [AsyncDispatcher event handler] 
 rmnode.RMNodeImpl (RMNodeImpl.java:handle(384)) - Processing 127.0.0.1:1234 
 of type STATUS_UPDATE
 AppAttempt : appattempt_1412569506932_0001_000001 State is : SUBMITTED 
 Waiting for state : ALLOCATED
 2014-10-05 21:25:07,306 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAttemptAddedSchedulerEvent.EventType:
  APP_ATTEMPT_ADDED
 2014-10-05 21:25:07,328 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent.EventType:
  NODE_UPDATE
 2014-10-05 21:25:07,330 DEBUG [AsyncDispatcher event handler] 
 event.AsyncDispatcher (AsyncDispatcher.java:dispatch(164)) - Dispatching the 
 event 
 org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent.EventType:
  ATTEMPT_ADDED
 2014-10-05 21:25:07,331 DEBUG [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(658)) - Processing 
 event for appattempt_1412569506932_0001_000001 of type ATTEMPT_ADDED
 2014-10-05 21:25:07,333 INFO  [AsyncDispatcher event handler] 
 attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:handle(670)) - 
 appattempt_1412569506932_0001_000001 State change from SUBMITTED to SCHEDULED
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2652) add hadoop-yarn-registry package under hadoop-yarn

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165234#comment-14165234
 ] 

Hudson commented on YARN-2652:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1921 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1921/])
YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under 
hadoop-yarn (stevel: rev 6a326711aa27e84fd4c53937afc5c41a746ec65a)
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
* hadoop-yarn-project/hadoop-yarn/pom.xml
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/operations/TestRegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/AbstractRegistryTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/InvalidPathnameException.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryConstants.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/RegistryOperationsClient.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/AuthenticationFailedException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRegistry.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/KerberosConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoChildrenForEphemeralsException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/CuratorEventCatcher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperServiceKeys.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryOperationsService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoRecordException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/RegistryPathStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryTypeUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/using-the-yarn-service-registry.md
* 

[jira] [Commented] (YARN-913) Umbrella: Add a way to register long-lived services in a YARN cluster

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165237#comment-14165237
 ] 

Hudson commented on YARN-913:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1921 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1921/])
YARN-913 service registry: YARN-2652 add hadoop-yarn-registry package under 
hadoop-yarn (stevel: rev 6a326711aa27e84fd4c53937afc5c41a746ec65a)
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/yarn-registry.md
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/JsonSerDeser.java
* hadoop-yarn-project/hadoop-yarn/pom.xml
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/operations/TestRegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/AbstractRegistryTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/InvalidPathnameException.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryConstants.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/RegistryOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/AbstractSecureRegistryTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/RegistryOperationsClient.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/AuthenticationFailedException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureRegistry.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/KerberosConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoChildrenForEphemeralsException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/integration/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/client/impl/CuratorEventCatcher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/secure/TestSecureLogins.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/BindFlags.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/Endpoint.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/api/package-info.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/services/MicroZookeeperServiceKeys.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/RegistryOperationsService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/integration/TestRegistryRMOperations.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/exceptions/NoRecordException.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/types/RegistryPathStatus.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryTypeUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/impl/zk/ZookeeperConfigOptions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/registry/using-the-yarn-service-registry.md
* 

[jira] [Commented] (YARN-2598) GHS should show N/A instead of null for the inaccessible information

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165235#comment-14165235
 ] 

Hudson commented on YARN-2598:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1921 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1921/])
YARN-2598 GHS should show N/A instead of null for the inaccessible information  
(Zhijie Shen via mayank) (mayank: rev df3becf0800d24d1fe773651abb16d29f8bc3fdc)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java


 GHS should show N/A instead of null for the inaccessible information
 

 Key: YARN-2598
 URL: https://issues.apache.org/jira/browse/YARN-2598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2598.1.patch, YARN-2598.2.patch


 When the user doesn't have the access to an application, the app attempt 
 information is not visible to the user. ClientRMService will output N/A, but 
 GHS is showing null, which is not user-friendly.
 {code}
 14/09/24 22:07:20 INFO impl.TimelineClientImpl: Timeline service address: 
 http://nn.example.com:8188/ws/v1/timeline/
 14/09/24 22:07:20 INFO client.RMProxy: Connecting to ResourceManager at 
 nn.example.com/240.0.0.11:8050
 14/09/24 22:07:21 INFO client.AHSProxy: Connecting to Application History 
 server at nn.example.com/240.0.0.11:10200
 Application Report : 
   Application-Id : application_1411586934799_0001
   Application-Name : Sleep job
   Application-Type : MAPREDUCE
   User : hrt_qa
   Queue : default
   Start-Time : 1411586956012
   Finish-Time : 1411586989169
   Progress : 100%
   State : FINISHED
   Final-State : SUCCEEDED
   Tracking-URL : null
   RPC Port : -1
   AM Host : null
   Aggregate Resource Allocation : N/A
   Diagnostics : null
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2668) yarn-registry JAR won't link against ZK 3.4.5

2014-10-09 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-2668:


 Summary: yarn-registry JAR won't link against ZK 3.4.5
 Key: YARN-2668
 URL: https://issues.apache.org/jira/browse/YARN-2668
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran


It's been reported that the registry code doesn't link against ZK 3.4.5, because 
the enable/disable SASL client property, which went in with ZOOKEEPER-1657, 
isn't there.

Pulling in the constant and the {{isEnabled()}} check will ensure registry 
linkage, even though clients will lose the ability to disable SASL auth.
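
A minimal sketch of that workaround, assuming the inlined property name matches ZooKeeper's "zookeeper.sasl.client" (the constant added by ZOOKEEPER-1657); the class and method names here are hypothetical, not the actual registry code:

```java
public class SaslClientCheck {
    // Inlined copy of the property name, so nothing references the
    // ZooKeeper class constant that is absent in 3.4.5.
    static final String ZK_SASL_CLIENT = "zookeeper.sasl.client";

    // ZooKeeper treats the SASL client as enabled unless the property
    // is explicitly set to "false"; default to enabled accordingly.
    static boolean isSaslClientEnabled() {
        return Boolean.parseBoolean(
            System.getProperty(ZK_SASL_CLIENT, "true"));
    }

    public static void main(String[] args) {
        System.out.println("SASL client enabled: " + isSaslClientEnabled());
    }
}
```

Because the constant is a compile-time String, inlining it keeps binary compatibility with both ZK versions at the cost of duplicating the literal.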



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2668) yarn-registry JAR won't link against ZK 3.4.5

2014-10-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-2668:
-
Attachment: YARN-2668-001.patch

patch inlines constant and probe

 yarn-registry JAR won't link against ZK 3.4.5
 -

 Key: YARN-2668
 URL: https://issues.apache.org/jira/browse/YARN-2668
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2668-001.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 It's been reported that the registry code doesn't link against ZK 3.4.5, 
 because the enable/disable SASL client property, which went in with 
 ZOOKEEPER-1657, isn't there.
 Pulling in the constant and the {{isEnabled()}} check will ensure registry 
 linkage, even though clients will lose the ability to disable SASL auth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2668) yarn-registry JAR won't link against ZK 3.4.5

2014-10-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated YARN-2668:

Hadoop Flags: Reviewed

+1 for the patch, pending Jenkins.  Thanks, Steve.

 yarn-registry JAR won't link against ZK 3.4.5
 -

 Key: YARN-2668
 URL: https://issues.apache.org/jira/browse/YARN-2668
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2668-001.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 It's been reported that the registry code doesn't link against ZK 3.4.5, 
 because the enable/disable SASL client property, which went in with 
 ZOOKEEPER-1657, isn't there.
 Pulling in the constant and the {{isEnabled()}} check will ensure registry 
 linkage, even though clients will lose the ability to disable SASL auth.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2669) FairScheduler: print out a warning log when users provide a queueName starting with root. in the allocation.xml

2014-10-09 Thread Wei Yan (JIRA)
Wei Yan created YARN-2669:
-

 Summary: FairScheduler: print out a warning log when users 
provide a queueName starting with root. in the allocation.xml
 Key: YARN-2669
 URL: https://issues.apache.org/jira/browse/YARN-2669
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor


For an allocation file like:
{noformat}
<allocations>
  <queue name="root.q1">
    <minResources>4096mb,4vcores</minResources>
  </queue>
</allocations>
{noformat}

Users may wish to configure minResources for a queue with the full path root.q1. 
However, right now, the fair scheduler will apply this configuration to the queue 
with full name root.root.q1. We need to print out a warning message to notify 
users about this.
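
The proposed warning can be sketched as a check on the configured name before the scheduler prepends its root prefix; the names below are hypothetical, not the FairScheduler code:

```java
public class QueueNameCheck {
    // Warn when a configured queue name already starts with "root.",
    // since prefixing will silently yield "root.root.<name>".
    static String normalize(String configuredName) {
        if (configuredName.startsWith("root.")) {
            System.err.println("WARN: queue name '" + configuredName
                + "' starts with 'root.'; it will be treated as 'root."
                + configuredName + "'");
        }
        return "root." + configuredName;
    }

    public static void main(String[] args) {
        System.out.println(normalize("root.q1"));
    }
}
```

A warning (rather than a rewrite) preserves the existing behavior for users who genuinely named a child queue "root.", while surfacing the likely misconfiguration.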



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2668) yarn-registry JAR won't link against ZK 3.4.5

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165468#comment-14165468
 ] 

Hadoop QA commented on YARN-2668:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673938/YARN-2668-001.patch
  against trunk revision db71bb5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5345//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5345//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5345//console

This message is automatically generated.

 yarn-registry JAR won't link against ZK 3.4.5
 -

 Key: YARN-2668
 URL: https://issues.apache.org/jira/browse/YARN-2668
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: client
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: YARN-2668-001.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 It's been reported that the registry code doesn't link against ZK 3.4.5, as 
 the enable/disable SASL client property isn't there; it went in with 
 ZOOKEEPER-1657.
 Pulling in the constant and the {{isEnabled()}} check will ensure the registry 
 links, even though the ability for a client to disable SASL auth will be 
 lost.
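The compatibility shim described above can be sketched roughly as follows. This is a minimal illustration, not the actual yarn-registry patch; the class and method names are made up, and only the property name `zookeeper.sasl.client` is taken from the ZOOKEEPER-1657 discussion.

```java
// Hedged sketch: keep a local copy of the SASL-client property so the code
// compiles against ZK 3.4.5, where the constant does not exist yet.
public final class RegistrySaslCompat {

    // Property name as introduced by ZOOKEEPER-1657; local copy for linkage.
    public static final String ZK_SASL_CLIENT = "zookeeper.sasl.client";

    private RegistrySaslCompat() {}

    // Mirrors an isEnabled()-style check: SASL is considered enabled unless
    // the property is explicitly set to "false".
    public static boolean isSaslClientEnabled() {
        return !"false".equalsIgnoreCase(
            System.getProperty(ZK_SASL_CLIENT, "true"));
    }
}
```

As the comment notes, a client-side disable knob read this way is only honored by ZK versions that actually check the property; on 3.4.5 the check is local to the registry code.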



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2629) Make distributed shell use the domain-based timeline ACLs

2014-10-09 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165477#comment-14165477
 ] 

Xuan Gong commented on YARN-2629:
-

+1 LGTM

 Make distributed shell use the domain-based timeline ACLs
 -

 Key: YARN-2629
 URL: https://issues.apache.org/jira/browse/YARN-2629
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2629.1.patch, YARN-2629.2.patch, YARN-2629.3.patch, 
 YARN-2629.4.patch


 To demonstrate the usage of this feature (YARN-2102), it's good to make 
 the distributed shell create the domain and post its timeline entities into 
 this private space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2493) [YARN-796] API changes for users

2014-10-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165482#comment-14165482
 ] 

Vinod Kumar Vavilapalli commented on YARN-2493:
---

+1, looks good. Checking this in.

 [YARN-796] API changes for users
 

 Key: YARN-2493
 URL: https://issues.apache.org/jira/browse/YARN-2493
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2493-20141008.1.patch, YARN-2493.patch, 
 YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, YARN-2493.patch


 This JIRA includes API changes for users of YARN-796, like changes in 
 {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
 part of YARN-796.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2493) [YARN-796] API changes for users

2014-10-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165491#comment-14165491
 ] 

Wangda Tan commented on YARN-2493:
--

Thanks [~vinodkv] for review and commit!

Wangda

 [YARN-796] API changes for users
 

 Key: YARN-2493
 URL: https://issues.apache.org/jira/browse/YARN-2493
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.6.0

 Attachments: YARN-2493-20141008.1.patch, YARN-2493.patch, 
 YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, YARN-2493.patch


 This JIRA includes API changes for users of YARN-796, like changes in 
 {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
 part of YARN-796.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2493) [YARN-796] API changes for users

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165492#comment-14165492
 ] 

Hudson commented on YARN-2493:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6223/])
YARN-2493. Added user-APIs for using node-labels. Contributed by Wangda Tan. 
(vinodkv: rev 180afa2f86f9854c536c3d4ff7476880e41ac48d)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java


 [YARN-796] API changes for users
 

 Key: YARN-2493
 URL: https://issues.apache.org/jira/browse/YARN-2493
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.6.0

 Attachments: YARN-2493-20141008.1.patch, YARN-2493.patch, 
 YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, YARN-2493.patch


 This JIRA includes API changes for users of YARN-796, like changes in 
 {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
 part of YARN-796.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2669) FairScheduler: print out a warning log when users provider a queueName starting with root. in the allocation.xml

2014-10-09 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165496#comment-14165496
 ] 

Sandy Ryza commented on YARN-2669:
--

Might it make more sense to just throw a validation error and crash?  Users 
usually don't look in the RM logs unless something is wrong.

 FairScheduler: print out a warning log when users provider a queueName 
 starting with root. in the allocation.xml
 --

 Key: YARN-2669
 URL: https://issues.apache.org/jira/browse/YARN-2669
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor

 For an allocation file like:
 {noformat}
 <allocations>
   <queue name="root.q1">
     <minResources>4096mb,4vcores</minResources>
   </queue>
 </allocations>
 {noformat}
 Users may wish to configure minResources for a queue with the full path 
 root.q1. However, right now, the fair scheduler treats this configuration as 
 belonging to the queue with full name root.root.q1. We need to print out a 
 warning message to notify users about this.
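The surprising double prefix can be sketched as follows. This is an illustration of the observed behavior, not the actual FairScheduler code; the helper name is made up.

```java
// Hedged sketch: the allocation loader unconditionally prefixes "root."
// onto configured queue names, so a configured "root.q1" is addressed
// internally as "root.root.q1".
public final class QueuePrefixDemo {
    static final String ROOT = "root";

    private QueuePrefixDemo() {}

    // Hypothetical helper mirroring the reported behavior.
    public static String fullQueueName(String configuredName) {
        return ROOT + "." + configuredName;
    }
}
```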



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2544) [YARN-796] Common server side PB changes (not include user API PB changes)

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2544:
-
Attachment: YARN-2544-20141009.1.patch

 [YARN-796] Common server side PB changes (not include user API PB changes)
 --

 Key: YARN-2544
 URL: https://issues.apache.org/jira/browse/YARN-2544
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2544-20141008.1.patch, YARN-2544-20141008.2.patch, 
 YARN-2544-20141009.1.patch, YARN-2544.patch, YARN-2544.patch, YARN-2544.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2312) Marking ContainerId#getId as deprecated

2014-10-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165530#comment-14165530
 ] 

Jian He commented on YARN-2312:
---

Took a look at the patch again; it seems we should do parseLong for the following 
code also.
{code}
int jvmIdInt = Integer.parseInt(args[3]);
JVMId jvmId = new JVMId(firstTaskid.getJobID(),
firstTaskid.getTaskType() == TaskType.MAP, jvmIdInt);
{code}
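The suggested change can be sketched as below. This is only an illustration of the parseInt-to-parseLong direction; the wrapper class and method are made up, not part of the patch.

```java
// Hedged sketch: parse the JVM id as a long so values beyond
// Integer.MAX_VALUE (e.g. ids carrying an epoch in the high bits)
// don't overflow or throw NumberFormatException.
public final class JvmIdParseDemo {
    private JvmIdParseDemo() {}

    public static long parseJvmId(String arg) {
        return Long.parseLong(arg);   // was Integer.parseInt(args[3])
    }
}
```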


 Marking ContainerId#getId as deprecated
 ---

 Key: YARN-2312
 URL: https://issues.apache.org/jira/browse/YARN-2312
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2312-wip.patch, YARN-2312.1.patch, 
 YARN-2312.2-2.patch, YARN-2312.2-3.patch, YARN-2312.2.patch, 
 YARN-2312.4.patch, YARN-2312.5.patch, YARN-2312.6.patch


 {{ContainerId#getId}} will only return partial value of containerId, only 
 sequence number of container id without epoch, after YARN-2229. We should 
 mark {{ContainerId#getId}} as deprecated and use 
 {{ContainerId#getContainerId}} instead.
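The relationship between the two getters can be sketched as follows. This is a hedged illustration of the description above, not the real ContainerId class; the abstract shape is made up.

```java
// Hedged sketch: getId() keeps the old int view (sequence number only)
// and is deprecated in favor of getContainerId(), whose long value also
// carries the epoch introduced by YARN-2229 in the high bits.
public abstract class ContainerIdSketch {

    /** @deprecated use {@link #getContainerId()} instead. */
    @Deprecated
    public int getId() {
        return (int) getContainerId();   // truncates away the epoch bits
    }

    public abstract long getContainerId();
}
```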



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2669) FairScheduler: print out a warning log when users provider a queueName starting with root. in the allocation.xml

2014-10-09 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165534#comment-14165534
 ] 

Wei Yan commented on YARN-2669:
---

bq. Might it make more sense to just throw a validation error and crash?
So in that case, we don't accept a queue named like root.root?

 FairScheduler: print out a warning log when users provider a queueName 
 starting with root. in the allocation.xml
 --

 Key: YARN-2669
 URL: https://issues.apache.org/jira/browse/YARN-2669
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor

 For an allocation file like:
 {noformat}
 <allocations>
   <queue name="root.q1">
     <minResources>4096mb,4vcores</minResources>
   </queue>
 </allocations>
 {noformat}
 Users may wish to configure minResources for a queue with the full path 
 root.q1. However, right now, the fair scheduler treats this configuration as 
 belonging to the queue with full name root.root.q1. We need to print out a 
 warning message to notify users about this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2574) Add support for FairScheduler to the ReservationSystem

2014-10-09 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2574:
-
Assignee: Anubhav Dhoot

 Add support for FairScheduler to the ReservationSystem
 --

 Key: YARN-2574
 URL: https://issues.apache.org/jira/browse/YARN-2574
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Reporter: Subru Krishnan
Assignee: Anubhav Dhoot

 YARN-1051 introduces the ReservationSystem and the current implementation is 
 based on CapacityScheduler. This JIRA proposes adding support for 
 FairScheduler



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2544) [YARN-796] Common server side PB changes (not include user API PB changes)

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165612#comment-14165612
 ] 

Hadoop QA commented on YARN-2544:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12673952/YARN-2544-20141009.1.patch
  against trunk revision 180afa2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5346//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5346//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5346//console

This message is automatically generated.

 [YARN-796] Common server side PB changes (not include user API PB changes)
 --

 Key: YARN-2544
 URL: https://issues.apache.org/jira/browse/YARN-2544
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2544-20141008.1.patch, YARN-2544-20141008.2.patch, 
 YARN-2544-20141009.1.patch, YARN-2544.patch, YARN-2544.patch, YARN-2544.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2669) FairScheduler: print out a warning log when users provider a queueName starting with root. in the allocation.xml

2014-10-09 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165642#comment-14165642
 ] 

Sandy Ryza commented on YARN-2669:
--

We shouldn't allow configured queue names to have periods in them.  I believe 
we already don't accept queues named root, but if we do, we shouldn't.

 FairScheduler: print out a warning log when users provider a queueName 
 starting with root. in the allocation.xml
 --

 Key: YARN-2669
 URL: https://issues.apache.org/jira/browse/YARN-2669
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor

 For an allocation file like:
 {noformat}
 <allocations>
   <queue name="root.q1">
     <minResources>4096mb,4vcores</minResources>
   </queue>
 </allocations>
 {noformat}
 Users may wish to configure minResources for a queue with the full path 
 root.q1. However, right now, the fair scheduler treats this configuration as 
 belonging to the queue with full name root.root.q1. We need to print out a 
 warning message to notify users about this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2629) Make distributed shell use the domain-based timeline ACLs

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165664#comment-14165664
 ] 

Hudson commented on YARN-2629:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6225 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6225/])
YARN-2629. Made the distributed shell use the domain-based timeline ACLs. 
Contributed by Zhijie Shen. (zjshen: rev 
1d4612f5ad9678c952b416e798dccd20c88f96ef)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/DSConstants.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java


 Make distributed shell use the domain-based timeline ACLs
 -

 Key: YARN-2629
 URL: https://issues.apache.org/jira/browse/YARN-2629
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Fix For: 2.6.0

 Attachments: YARN-2629.1.patch, YARN-2629.2.patch, YARN-2629.3.patch, 
 YARN-2629.4.patch


 To demonstrate the usage of this feature (YARN-2102), it's good to make 
 the distributed shell create the domain and post its timeline entities into 
 this private space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2598) GHS should show N/A instead of null for the inaccessible information

2014-10-09 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165715#comment-14165715
 ] 

Mayank Bansal commented on YARN-2598:
-

Committed to branch-2, branch-2.6, and trunk.
Thanks [~zjshen]

Thanks,
Mayank

 GHS should show N/A instead of null for the inaccessible information
 

 Key: YARN-2598
 URL: https://issues.apache.org/jira/browse/YARN-2598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2598.1.patch, YARN-2598.2.patch


 When the user doesn't have access to an application, the app attempt 
 information is not visible to the user. ClientRMService will output N/A, but 
 GHS shows null, which is not user-friendly.
 {code}
 14/09/24 22:07:20 INFO impl.TimelineClientImpl: Timeline service address: 
 http://nn.example.com:8188/ws/v1/timeline/
 14/09/24 22:07:20 INFO client.RMProxy: Connecting to ResourceManager at 
 nn.example.com/240.0.0.11:8050
 14/09/24 22:07:21 INFO client.AHSProxy: Connecting to Application History 
 server at nn.example.com/240.0.0.11:10200
 Application Report : 
   Application-Id : application_1411586934799_0001
   Application-Name : Sleep job
   Application-Type : MAPREDUCE
   User : hrt_qa
   Queue : default
   Start-Time : 1411586956012
   Finish-Time : 1411586989169
   Progress : 100%
   State : FINISHED
   Final-State : SUCCEEDED
   Tracking-URL : null
   RPC Port : -1
   AM Host : null
   Aggregate Resource Allocation : N/A
   Diagnostics : null
 {code}
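The direction of the fix can be sketched as a small null-coalescing helper. This is only an illustration of replacing the literal "null" with "N/A"; the class and method names are made up, not the actual GHS patch.

```java
// Hedged sketch: render an inaccessible or missing report field as "N/A"
// instead of printing the string "null".
public final class ReportFieldDemo {
    private ReportFieldDemo() {}

    public static String orNA(String value) {
        return value == null ? "N/A" : value;
    }
}
```

With this, fields like Tracking-URL, AM Host, and Diagnostics in the report above would print as N/A, matching ClientRMService.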



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2598) GHS should show N/A instead of null for the inaccessible information

2014-10-09 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-2598:
--
Hadoop Flags: Reviewed

 GHS should show N/A instead of null for the inaccessible information
 

 Key: YARN-2598
 URL: https://issues.apache.org/jira/browse/YARN-2598
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: 2.6.0
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2598.1.patch, YARN-2598.2.patch


 When the user doesn't have access to an application, the app attempt 
 information is not visible to the user. ClientRMService will output N/A, but 
 GHS shows null, which is not user-friendly.
 {code}
 14/09/24 22:07:20 INFO impl.TimelineClientImpl: Timeline service address: 
 http://nn.example.com:8188/ws/v1/timeline/
 14/09/24 22:07:20 INFO client.RMProxy: Connecting to ResourceManager at 
 nn.example.com/240.0.0.11:8050
 14/09/24 22:07:21 INFO client.AHSProxy: Connecting to Application History 
 server at nn.example.com/240.0.0.11:10200
 Application Report : 
   Application-Id : application_1411586934799_0001
   Application-Name : Sleep job
   Application-Type : MAPREDUCE
   User : hrt_qa
   Queue : default
   Start-Time : 1411586956012
   Finish-Time : 1411586989169
   Progress : 100%
   State : FINISHED
   Final-State : SUCCEEDED
   Tracking-URL : null
   RPC Port : -1
   AM Host : null
   Aggregate Resource Allocation : N/A
   Diagnostics : null
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2669) FairScheduler: print out a warning log when users provider a queueName starting with root. in the allocation.xml

2014-10-09 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165730#comment-14165730
 ] 

Wei Yan commented on YARN-2669:
---

Right now we accept periods in the alloc file; will fix that.

 FairScheduler: print out a warning log when users provider a queueName 
 starting with root. in the allocation.xml
 --

 Key: YARN-2669
 URL: https://issues.apache.org/jira/browse/YARN-2669
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor

 For an allocation file like:
 {noformat}
 <allocations>
   <queue name="root.q1">
     <minResources>4096mb,4vcores</minResources>
   </queue>
 </allocations>
 {noformat}
 Users may wish to configure minResources for a queue with the full path 
 root.q1. However, right now, the fair scheduler treats this configuration as 
 belonging to the queue with full name root.root.q1. We need to print out a 
 warning message to notify users about this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2320) Removing old application history store after we store the history data to timeline store

2014-10-09 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165795#comment-14165795
 ] 

Mayank Bansal commented on YARN-2320:
-

Thanks [~zjshen] for the patch.
Overall it looks OK.
1) A couple of points: I think Attempt and Container should also show N/A 
instead of null. If you want to do that in a separate JIRA, that's fine too.
2) The latest patch needs rebasing.
3) What testing have you done on this patch?

Once I have the rebased patch, I will run the tests.

Thanks,
Mayank

 Removing old application history store after we store the history data to 
 timeline store
 

 Key: YARN-2320
 URL: https://issues.apache.org/jira/browse/YARN-2320
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Zhijie Shen
Assignee: Zhijie Shen
 Attachments: YARN-2320.1.patch, YARN-2320.2.patch, YARN-2320.3.patch


 After YARN-2033, we should deprecate the application history store set. There's 
 no need to maintain two sets of store interfaces. In addition, we should 
 conclude the outstanding JIRAs under YARN-321 about the application history 
 store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2670) Adding feedback capability to capacity scheduler from external systems

2014-10-09 Thread Mayank Bansal (JIRA)
Mayank Bansal created YARN-2670:
---

 Summary: Adding feedback capability to capacity scheduler from 
external systems
 Key: YARN-2670
 URL: https://issues.apache.org/jira/browse/YARN-2670
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Mayank Bansal
Assignee: Mayank Bansal


The sheer growth in data volume and Hadoop cluster size make it a significant 
challenge to diagnose and locate problems in a production-level cluster 
environment efficiently and within a short period of time. Often times, the 
distributed monitoring systems are not capable of detecting a problem well in 
advance when a large-scale Hadoop cluster starts to deteriorate in performance 
or becomes unavailable. Thus, incoming workloads, scheduled between the time 
when cluster starts to deteriorate and the time when the problem is identified, 
suffer from longer execution times. As a result, both reliability and 
throughput of the cluster decrease significantly. We address this problem by 
proposing a system called Astro, which consists of a predictive model and an 
extension to the Capacity scheduler. The predictive model in Astro takes into 
account a rich set of cluster behavioral information that are collected by 
monitoring processes and model them using machine learning algorithms to 
predict future behavior of the cluster. The Astro predictive model detects 
anomalies in the cluster and also identifies a ranked set of metrics that have 
contributed the most towards the problem. The Astro scheduler uses the 
prediction outcome and the list of metrics to decide whether it needs to move 
and reduce workloads from the problematic cluster nodes or to prevent 
additional workload allocations to them, in order to improve both throughput 
and reliability of the cluster.

This JIRA is only for adding feedback capabilities to Capacity Scheduler which 
can take feedback from external systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2582) Log related CLI and Web UI changes for Aggregated Logs in LRS

2014-10-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-2582:

Attachment: YARN-2582.2.patch

Addressed all the comments

 Log related CLI and Web UI changes for Aggregated Logs in LRS
 -

 Key: YARN-2582
 URL: https://issues.apache.org/jira/browse/YARN-2582
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-2582.1.patch, YARN-2582.2.patch


 After YARN-2468, we have changed the log layout to support log aggregation for 
 Long Running Services. The log CLI and related Web UI should be modified 
 accordingly.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2651) Spin off the LogRollingInterval from LogAggregationContext

2014-10-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-2651:

Attachment: YARN-2651.1.patch

 Spin off the LogRollingInterval from LogAggregationContext
 --

 Key: YARN-2651
 URL: https://issues.apache.org/jira/browse/YARN-2651
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-2651.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2494) [YARN-796] Node label manager API and storage implementations

2014-10-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165887#comment-14165887
 ] 

Vinod Kumar Vavilapalli commented on YARN-2494:
---

This is very close. Final set of comments:
 - The pattern {quote}^[0-9a-zA-Z][0-9a-zA-z-_]* : 
^[0-9a-zA-Z][0-9a-zA-z-_]*{quote} has a bug: small z instead of big Z
 - LabelType, HostType, QueueType, NodeType - Drop the Type suffix
 - Move QueueType into DynamicNodeLabelsManager.
 - DynamicNodeLabelManager 
-- updateRunningNodes: No need to call getNMInNodeSet twice
-- updateRunningNodes - updatingResourceMappings
-- QueueType.labels - accessibleLabels
-- Move acls also into the RMNodeLabelsManager
 - NodeLabelsManager - CommonNodeLabelsManager
 - DynamicNodeLabelManager - RMNodeLabelManager. Similarly 
TestDynamicNodeLabelsManager.
 - Let’s rename events like AddToClusterNodeLabelsEvent and store operations 
similar to RMStateStore, e.g. storeNewClusterNodeLables

{quote}
  // if here, nm is still null, the only reason is, registered nodeId has
  // port = 0. This will only happen in unit test. Some tests registered NM
  // with port = 0. Just print a log and skip following step
  if (null == nm) {
LOG.warn("Register nodeId is illegal, nodeId=" + nodeId.toString());
return;
  }
{quote}
That doesn't make sense. Any activated node will have a non-zero port.
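The character-class bug flagged above ("small z instead of big Z") can be demonstrated directly. The two pattern strings below are taken from the review comment (buggy) and its obvious correction; the demo class and helper are made up.

```java
import java.util.regex.Pattern;

// Hedged demo: in ASCII, the range A-z (capital A through small z) also
// spans the punctuation characters [ \ ] ^ _ ` , so the buggy pattern
// accepts label names it should reject.
public final class LabelPatternDemo {
    public static final Pattern BUGGY =
        Pattern.compile("^[0-9a-zA-Z][0-9a-zA-z-_]*");
    public static final Pattern FIXED =
        Pattern.compile("^[0-9a-zA-Z][0-9a-zA-Z-_]*");

    private LabelPatternDemo() {}

    public static boolean fullMatch(Pattern p, String s) {
        return p.matcher(s).matches();
    }
}
```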

 [YARN-796] Node label manager API and storage implementations
 -

 Key: YARN-2494
 URL: https://issues.apache.org/jira/browse/YARN-2494
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
 YARN-2494.patch


 This JIRA includes APIs and storage implementations of the node label manager.
 NodeLabelManager is an abstract class used to manage labels of nodes in the 
 cluster; it has APIs to query/modify
 - Nodes according to a given label
 - Labels according to a given hostname
 - Add/remove labels
 - Set labels of nodes in the cluster
 - Persist/recover changes of labels/labels-on-nodes to/from storage
 And it has two implementations to store modifications
 - Memory-based storage: it will not persist changes, so all labels will be 
 lost when the RM restarts
 - FileSystem-based storage: it will persist/recover to/from a FileSystem (like 
 HDFS), and all labels and labels-on-nodes will be recovered upon RM restart



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2544) [YARN-796] Common server side PB changes (not include user API PB changes)

2014-10-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165893#comment-14165893
 ] 

Vinod Kumar Vavilapalli commented on YARN-2544:
---

+1, this looks good. Checking this in..

 [YARN-796] Common server side PB changes (not include user API PB changes)
 --

 Key: YARN-2544
 URL: https://issues.apache.org/jira/browse/YARN-2544
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2544-20141008.1.patch, YARN-2544-20141008.2.patch, 
 YARN-2544-20141009.1.patch, YARN-2544.patch, YARN-2544.patch, YARN-2544.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2494) [YARN-796] Node label manager API and storage implementations

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2494:
-
Attachment: YARN-2494.20141009-1.patch

[~vinodkv], thanks for your comments! All comments make sense to me.
Attached a new patch.

Wangda

 [YARN-796] Node label manager API and storage implementations
 -

 Key: YARN-2494
 URL: https://issues.apache.org/jira/browse/YARN-2494
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2494.20141009-1.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch


 This JIRA includes APIs and storage implementations of the node label manager.
 NodeLabelManager is an abstract class used to manage labels of nodes in the 
 cluster; it has APIs to query/modify
 - Nodes according to a given label
 - Labels according to a given hostname
 - Add/remove labels
 - Set labels of nodes in the cluster
 - Persist/recover changes of labels/labels-on-nodes to/from storage
 And it has two implementations to store modifications
 - Memory-based storage: it will not persist changes, so all labels will be 
 lost when the RM restarts
 - FileSystem-based storage: it will persist/recover to/from a FileSystem (like 
 HDFS), and all labels and labels-on-nodes will be recovered upon RM restart



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2500) [YARN-796] Miscellaneous changes in ResourceManager to support labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2500:
-
Attachment: YARN-2500-20141009-1.patch

 [YARN-796] Miscellaneous changes in ResourceManager to support labels
 -

 Key: YARN-2500
 URL: https://issues.apache.org/jira/browse/YARN-2500
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2500-20141009-1.patch, YARN-2500.patch, 
 YARN-2500.patch, YARN-2500.patch, YARN-2500.patch, YARN-2500.patch


 This patch contains changes in the ResourceManager to support labels





[jira] [Updated] (YARN-2496) [YARN-796] Changes for capacity scheduler to support allocate resource respect labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2496:
-
Attachment: YARN-2496-20141009-1.patch

 [YARN-796] Changes for capacity scheduler to support allocate resource 
 respect labels
 -

 Key: YARN-2496
 URL: https://issues.apache.org/jira/browse/YARN-2496
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2496-20141009-1.patch, YARN-2496.patch, 
 YARN-2496.patch, YARN-2496.patch, YARN-2496.patch, YARN-2496.patch, 
 YARN-2496.patch, YARN-2496.patch, YARN-2496.patch


 This JIRA includes:
 - Add/parse a labels option in {{capacity-scheduler.xml}}, similar to other 
 per-queue options such as capacity/maximum-capacity, etc.
 - Include a default-label-expression option in the queue config; if an app 
 doesn't specify a label-expression, the queue's default-label-expression will 
 be used.
 - Check whether labels can be accessed by the queue when an app with a 
 label-expression is submitted to the queue or a ResourceRequest is updated 
 with a label-expression
 - Check labels on an NM when trying to allocate a ResourceRequest with a 
 label-expression on that NM
 - Respect labels when calculating headroom/user-limit
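
The queue-level options described above might look roughly like the following in {{capacity-scheduler.xml}} (property names are illustrative guesses for this patch, not the final committed names):

```xml
<!-- Illustrative sketch: property names are hypothetical, not the final API. -->
<property>
  <name>yarn.scheduler.capacity.root.queue-a.labels</name>
  <value>gpu,large-mem</value>
</property>
<property>
  <!-- Used when an app submitted to queue-a gives no label-expression. -->
  <name>yarn.scheduler.capacity.root.queue-a.default-label-expression</name>
  <value>gpu</value>
</property>
```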





[jira] [Commented] (YARN-2582) Log related CLI and Web UI changes for Aggregated Logs in LRS

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165920#comment-14165920
 ] 

Hadoop QA commented on YARN-2582:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674019/YARN-2582.2.patch
  against trunk revision 8d94114.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5347//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5347//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5347//console

This message is automatically generated.

 Log related CLI and Web UI changes for Aggregated Logs in LRS
 -

 Key: YARN-2582
 URL: https://issues.apache.org/jira/browse/YARN-2582
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-2582.1.patch, YARN-2582.2.patch


 After YARN-2468, we have changed the log layout to support log aggregation for 
 Long Running Services. The log CLI and related Web UI should be modified 
 accordingly.  





[jira] [Commented] (YARN-2544) [YARN-796] Common server side PB changes (not include user API PB changes)

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165923#comment-14165923
 ] 

Hudson commented on YARN-2544:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6227 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6227/])
YARN-2544. Added admin-API objects for using node-labels. Contributed by Wangda 
Tan. (vinodkv: rev 596702a02501e9cb09aabced168027189eaf02ba)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/AddToClusterNodeLabelsRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RemoveFromClusterNodeLabelsRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/GetNodesToLabelsRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/GetNodesToLabelsResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/AddToClusterNodeLabelsResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/AddToClusterNodeLabelsResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RemoveFromClusterNodeLabelsRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/QueueInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReplaceLabelsOnNodeRequestPBImpl.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/ReplaceLabelsOnNodeResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/GetNodesToLabelsResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/ProtocolHATestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/GetClusterNodeLabelsRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/GetClusterNodeLabelsResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/GetClusterNodeLabelsResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RemoveFromClusterNodeLabelsResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/GetClusterNodeLabelsRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/ReplaceLabelsOnNodeRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/GetNodesToLabelsRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/AddToClusterNodeLabelsRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RemoveFromClusterNodeLabelsResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/server/yarn_server_resourcemanager_service_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/QueueInfoPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReplaceLabelsOnNodeResponsePBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto


 [YARN-796] Common server side PB changes (not include user API PB changes)
 --

 Key: YARN-2544
 URL: https://issues.apache.org/jira/browse/YARN-2544
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: api, client, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.6.0

 Attachments: YARN-2544-20141008.1.patch, YARN-2544-20141008.2.patch, 
 YARN-2544-20141009.1.patch, YARN-2544.patch, YARN-2544.patch, 

[jira] [Created] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Zhijie Shen (JIRA)
Zhijie Shen created YARN-2671:
-

 Summary: ApplicationSubmissionContext change breaks the existing 
app submission
 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen


After YARN-2493, app submission goes wrong with the following exception:
{code}
2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
/ws/v1/cluster/apps] webapp.GenericExceptionHandler 
(GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
at 
org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
{code}

This is because the resource is now put into the ResourceRequest of the 
ApplicationSubmissionContext rather than directly into the 
ApplicationSubmissionContext, so the sanity check cannot get the resource 
object from the context.
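
A simplified illustration of this failure mode (all class and field names below are hypothetical stand-ins; the real path goes through SchedulerUtils.validateResourceRequest and RMAppManager):

```java
// Hypothetical simplification: after the YARN-2493-style change, the resource
// is carried on the ResourceRequest, while the old validation path still reads
// it from the submission context and dereferences null.
class ResourceSketch {
    final int memoryMb;
    ResourceSketch(int memoryMb) { this.memoryMb = memoryMb; }
}

class ResourceRequestSketch {
    final ResourceSketch capability;
    ResourceRequestSketch(ResourceSketch c) { capability = c; }
}

class SubmissionContextSketch {
    ResourceSketch resource;          // no longer populated after the change
    ResourceRequestSketch amRequest;  // resource now carried here instead
}

class ValidatorSketch {
    // Stand-in for the sanity check: dereferences r, so r == null -> NPE.
    static void validate(ResourceSketch r, int maxMb) {
        if (r.memoryMb < 0 || r.memoryMb > maxMb)
            throw new IllegalArgumentException("invalid resource");
    }
}
```

Reading the capability from the ResourceRequest instead of the context avoids the null dereference.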





[jira] [Commented] (YARN-2582) Log related CLI and Web UI changes for Aggregated Logs in LRS

2014-10-09 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165926#comment-14165926
 ] 

Xuan Gong commented on YARN-2582:
-

The -1 release audit warning is unrelated to this patch.

 Log related CLI and Web UI changes for Aggregated Logs in LRS
 -

 Key: YARN-2582
 URL: https://issues.apache.org/jira/browse/YARN-2582
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-2582.1.patch, YARN-2582.2.patch


 After YARN-2468, we have changed the log layout to support log aggregation for 
 Long Running Services. The log CLI and related Web UI should be modified 
 accordingly.  





[jira] [Updated] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-2671:
--
Priority: Blocker  (was: Major)

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Priority: Blocker

 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is now put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.





[jira] [Assigned] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-2671:


Assignee: Wangda Tan

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker

 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is now put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.





[jira] [Commented] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165935#comment-14165935
 ] 

Wangda Tan commented on YARN-2671:
--

Thanks [~zjshen] for reporting this issue; looking at it now.

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker

 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is now put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.





[jira] [Created] (YARN-2672) Improve Gridmix (synthetic generator + reservation support)

2014-10-09 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-2672:
--

 Summary: Improve Gridmix (synthetic generator + reservation 
support)
 Key: YARN-2672
 URL: https://issues.apache.org/jira/browse/YARN-2672
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino
Assignee: Carlo Curino


This JIRA proposes an enhancement of Gridmix that contains:
1) a synthetic generator to produce load based on distributions, without the 
need for a trace
2) negotiation of reservations (to test YARN-1051). 







[jira] [Commented] (YARN-2496) [YARN-796] Changes for capacity scheduler to support allocate resource respect labels

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14165951#comment-14165951
 ] 

Hadoop QA commented on YARN-2496:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674034/YARN-2496-20141009-1.patch
  against trunk revision 596702a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5348//console

This message is automatically generated.

 [YARN-796] Changes for capacity scheduler to support allocate resource 
 respect labels
 -

 Key: YARN-2496
 URL: https://issues.apache.org/jira/browse/YARN-2496
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2496-20141009-1.patch, YARN-2496.patch, 
 YARN-2496.patch, YARN-2496.patch, YARN-2496.patch, YARN-2496.patch, 
 YARN-2496.patch, YARN-2496.patch, YARN-2496.patch


 This JIRA includes:
 - Add/parse a labels option in {{capacity-scheduler.xml}}, similar to other 
 per-queue options such as capacity/maximum-capacity, etc.
 - Include a default-label-expression option in the queue config; if an app 
 doesn't specify a label-expression, the queue's default-label-expression will 
 be used.
 - Check whether labels can be accessed by the queue when an app with a 
 label-expression is submitted to the queue or a ResourceRequest is updated 
 with a label-expression
 - Check labels on an NM when trying to allocate a ResourceRequest with a 
 label-expression on that NM
 - Respect labels when calculating headroom/user-limit





[jira] [Updated] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2671:
-
Attachment: YARN-2617-20141009.1.patch

Attached a fix for this issue.

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2617-20141009.1.patch


 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is now put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.





[jira] [Updated] (YARN-2664) Improve RM webapp to expose info about reservations.

2014-10-09 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-2664:
---
Attachment: PlannerPage_screenshot.pdf

 Improve RM webapp to expose info about reservations.
 

 Key: YARN-2664
 URL: https://issues.apache.org/jira/browse/YARN-2664
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Carlo Curino
 Attachments: PlannerPage_screenshot.pdf, YARN-2664.patch


 YARN-1051 provides new functionality in the RM to ask for reservations on 
 resources. Exposing this through the webapp GUI is important.





[jira] [Commented] (YARN-2664) Improve RM webapp to expose info about reservations.

2014-10-09 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166000#comment-14166000
 ] 

Carlo Curino commented on YARN-2664:


Attached a screenshot of a first-cut visualization of the Plan.

 Improve RM webapp to expose info about reservations.
 

 Key: YARN-2664
 URL: https://issues.apache.org/jira/browse/YARN-2664
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Carlo Curino
 Attachments: PlannerPage_screenshot.pdf, YARN-2664.patch


 YARN-1051 provides new functionality in the RM to ask for reservations on 
 resources. Exposing this through the webapp GUI is important.





[jira] [Updated] (YARN-2500) [YARN-796] Miscellaneous changes in ResourceManager to support labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2500:
-
Attachment: YARN-2500-20141009-2.patch

 [YARN-796] Miscellaneous changes in ResourceManager to support labels
 -

 Key: YARN-2500
 URL: https://issues.apache.org/jira/browse/YARN-2500
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2500-20141009-1.patch, YARN-2500-20141009-2.patch, 
 YARN-2500.patch, YARN-2500.patch, YARN-2500.patch, YARN-2500.patch, 
 YARN-2500.patch


 This patch contains changes in the ResourceManager to support labels





[jira] [Updated] (YARN-2501) [YARN-796] Changes in AMRMClient to support labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2501:
-
Attachment: YARN-2501-20141009.1.patch

[~vinodkv], thanks for your comments! Attached a patch addressing all comments.

 [YARN-796] Changes in AMRMClient to support labels
 --

 Key: YARN-2501
 URL: https://issues.apache.org/jira/browse/YARN-2501
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2501-20141009.1.patch, YARN-2501.patch


 Changes in AMRMClient to support labels





[jira] [Updated] (YARN-2672) Improve Gridmix (synthetic generator + reservation support)

2014-10-09 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-2672:
---
Attachment: YARN-2672.patch

 Improve Gridmix (synthetic generator + reservation support)
 ---

 Key: YARN-2672
 URL: https://issues.apache.org/jira/browse/YARN-2672
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler, fairscheduler, resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: YARN-2672.patch


 This JIRA proposes an enhancement of Gridmix that contains:
 1) a synthetic generator to produce load based on distributions, without the 
 need for a trace
 2) negotiation of reservations (to test YARN-1051). 





[jira] [Updated] (YARN-2502) [YARN-796] Changes in distributed shell to support specify labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2502:
-
Attachment: YARN-2502-20141009.1.patch

Attached a patch addressing the comments, except that the end-to-end test case 
cannot pass until YARN-2496/YARN-2500 are committed. 

 [YARN-796] Changes in distributed shell to support specify labels
 -

 Key: YARN-2502
 URL: https://issues.apache.org/jira/browse/YARN-2502
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2502-20141009.1.patch, YARN-2502.patch








[jira] [Commented] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166028#comment-14166028
 ] 

Hadoop QA commented on YARN-2671:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674046/YARN-2617-20141009.1.patch
  against trunk revision 596702a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5349//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5349//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5349//console

This message is automatically generated.

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2617-20141009.1.patch


 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is now put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.





[jira] [Commented] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166030#comment-14166030
 ] 

Wangda Tan commented on YARN-2671:
--

The audit warning is not caused by this patch and is tracked by YARN-2665.

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2617-20141009.1.patch


 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check does not find a resource 
 object on the context.
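A minimal sketch of the failure mode described above (class and method names are simplified and hypothetical, not the actual Hadoop classes): the validation dereferences the context-level resource, which the new submission path leaves null, while the real resource lives only on the AM ResourceRequest. A null-safe variant that falls back to the request is shown for contrast.

```java
// Hypothetical, simplified sketch of the NPE: the validator reads the
// resource from the submission context, but the new client path stores
// it only on the ResourceRequest, leaving the context-level field null.
class AmResourceRequest {
    private final Integer memoryMb;
    AmResourceRequest(Integer memoryMb) { this.memoryMb = memoryMb; }
    Integer getMemoryMb() { return memoryMb; }
}

class SubmissionContext {
    private Integer resourceMb;            // left null by the new path
    private AmResourceRequest amRequest;   // resource actually lives here
    void setAmRequest(AmResourceRequest r) { this.amRequest = r; }
    Integer getResourceMb() { return resourceMb; }
    AmResourceRequest getAmRequest() { return amRequest; }
}

public class ValidateSketch {
    // Broken check: throws NullPointerException when resourceMb was
    // never set on the context (the situation in the stack trace above).
    static void validateBroken(SubmissionContext ctx) {
        if (ctx.getResourceMb().intValue() <= 0) {   // NPE here
            throw new IllegalArgumentException("bad resource");
        }
    }

    // Null-safe variant: fall back to the ResourceRequest's resource.
    static int effectiveResource(SubmissionContext ctx) {
        Integer mb = ctx.getResourceMb();
        if (mb == null && ctx.getAmRequest() != null) {
            mb = ctx.getAmRequest().getMemoryMb();
        }
        if (mb == null || mb <= 0) {
            throw new IllegalArgumentException("bad resource");
        }
        return mb;
    }

    public static void main(String[] args) {
        SubmissionContext ctx = new SubmissionContext();
        ctx.setAmRequest(new AmResourceRequest(1024));
        System.out.println(effectiveResource(ctx)); // prints 1024
    }
}
```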



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2672) Improve Gridmix (synthetic generator + reservation support)

2014-10-09 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166032#comment-14166032
 ] 

Carlo Curino commented on YARN-2672:


The patch contains:

 *  a simple synthetic generator that allows control of the average and stdev 
of many common parameters (#maps, #reducers, map-time, red-time, I/O for 
in/out/shuffle, deadlines and duration of jobs for reservations). It is easy 
to define different workloads (see syn.json) and job classes with various 
properties (size/frequency). Also possible, though not yet well tested, is 
generating jobs at different rates (by controlling the weighting of 
subsequent time ranges). This is generally useful.

 * Extensions to experiment with reservations: we can specify the probability 
that a job class runs with a reservation, submit the corresponding 
ReservationRequest via YARN-1051, and, upon acceptance, launch the job in it.

The patch is *rough*, but as a few people are starting to experiment with 
YARN-1051 we thought it was important to toss the code out and let folks 
experiment with it, improve it, and provide feedback.

 Improve Gridmix (synthetic generator + reservation support)
 ---

 Key: YARN-2672
 URL: https://issues.apache.org/jira/browse/YARN-2672
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler, fairscheduler, resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: YARN-2672.patch


 This JIRA proposes an enhancement of Gridmix that contains:
 1) a synthetic generator to produce load based on distributions, without the 
 need for a trace;
 2) negotiation of reservations (to test YARN-1051).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2180) In-memory backing store for cache manager

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166048#comment-14166048
 ] 

Hudson commented on YARN-2180:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6229 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6229/])
YARN-2180. [YARN-1492] In-memory backing store for cache manager. (Chris Trezzo 
via kasha) (kasha: rev 4f426fe2232ed90d8fdf8619fbdeae28d788b5c8)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/SharedCacheResource.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/SCMStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/sharedcache/SharedCacheStructureUtil.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/test/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/TestInMemorySCMStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/sharedcache/SharedCacheUtil.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/SharedCacheResourceReference.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


 In-memory backing store for cache manager
 -

 Key: YARN-2180
 URL: https://issues.apache.org/jira/browse/YARN-2180
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chris Trezzo
Assignee: Chris Trezzo
 Fix For: 2.7.0

 Attachments: YARN-2180-trunk-v1.patch, YARN-2180-trunk-v2.patch, 
 YARN-2180-trunk-v3.patch, YARN-2180-trunk-v4.patch, YARN-2180-trunk-v5.patch, 
 YARN-2180-trunk-v6.patch, YARN-2180-trunk-v7.patch


 Implement an in-memory backing store for the cache manager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1492) truly shared cache for jars (jobjar/libjar)

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166047#comment-14166047
 ] 

Hudson commented on YARN-1492:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6229 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6229/])
YARN-2180. [YARN-1492] In-memory backing store for cache manager. (Chris Trezzo 
via kasha) (kasha: rev 4f426fe2232ed90d8fdf8619fbdeae28d788b5c8)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/SharedCacheResource.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/SCMStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/InMemorySCMStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/sharedcache/SharedCacheStructureUtil.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/test/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/TestInMemorySCMStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/SharedCacheManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/sharedcache/SharedCacheUtil.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-sharedcachemanager/src/main/java/org/apache/hadoop/yarn/server/sharedcachemanager/store/SharedCacheResourceReference.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


 truly shared cache for jars (jobjar/libjar)
 ---

 Key: YARN-1492
 URL: https://issues.apache.org/jira/browse/YARN-1492
 Project: Hadoop YARN
  Issue Type: New Feature
Affects Versions: 2.0.4-alpha
Reporter: Sangjin Lee
Assignee: Chris Trezzo
Priority: Critical
 Attachments: YARN-1492-all-trunk-v1.patch, 
 YARN-1492-all-trunk-v2.patch, YARN-1492-all-trunk-v3.patch, 
 YARN-1492-all-trunk-v4.patch, YARN-1492-all-trunk-v5.patch, 
 shared_cache_design.pdf, shared_cache_design_v2.pdf, 
 shared_cache_design_v3.pdf, shared_cache_design_v4.pdf, 
 shared_cache_design_v5.pdf, shared_cache_design_v6.pdf


 Currently there is the distributed cache that enables you to cache jars and 
 files so that attempts from the same job can reuse them. However, sharing is 
 limited with the distributed cache because it is normally on a per-job basis. 
 On a large cluster, sometimes copying of jobjars and libjars becomes so 
 prevalent that it consumes a large portion of the network bandwidth, not to 
 speak of defeating the purpose of bringing compute to where data is. This 
 is wasteful because in most cases code doesn't change much across many jobs.
 I'd like to propose and discuss feasibility of introducing a truly shared 
 cache so that multiple jobs from multiple users can share and cache jars. 
 This JIRA is to open the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2672) Improve Gridmix (synthetic generator + reservation support)

2014-10-09 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166055#comment-14166055
 ] 

Carlo Curino commented on YARN-2672:


Quick how-to:

I usually run it with something like this:

{code:title=gridmix.sh|borderStyle=solid}
#!/bin/bash

TRACE=${1:-syn.json}
LOCATION=${2:-/user/hadoop/gridmix100g}
. env.sh
date
hadoop fs -rm -r $LOCATION/gridmix
hadoop fs -rm /user/hadoop/$TRACE
hadoop fs -put $TRACE /user/hadoop
export 
HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_YARN_HOME/share/hadoop/tools/lib/*:/home/hadoop/commons-math3-3.0.jar

echo launching gridmix

hadoop jar 
$HADOOP_COMMON_HOME/share/hadoop/tools/lib/hadoop-gridmix-3.0.0-SNAPSHOT.jar 
-libjars 
$HADOOP_COMMON_HOME/share/hadoop/tools/lib/hadoop-rumen-3.0.0-SNAPSHOT.jar \
  -Dgridmix.job-producer.is.synthetic=true \
  -Dgridmix.job-submission.policy=REPLAY \
  -Dgridmix.job.type=LOADJOB \
  -Dgridmix.job-submission.default-queue=default \
  -Dgridmix.sleep.fake-locations=3 \
  -Dgridmix.compression-emulation.enable=false \
  -Dgridmix.job.seq=1 \
  -Dgridmix.client.submit.threads=20 \
  -Dgridmix.client.pending.queue.depth=10 \
  -Dmapreduce.map.java.opts=-Xmx2000m \
  -Dmapreduce.reduce.java.opts=-Xmx4000m \
   $LOCATION /user/hadoop/$TRACE

{code}

The syn.json looks something like this:

{code:title=syn.json|borderStyle=solid}
{
  "description" : "tiny jobs workload",
  "num_jobs" : 1000,
  "rand_seed" : 2,
  "workloads" : [
    {
      "workload_name" : "tiny-test",
      "workload_weight" : 0.5,
      "description" : "Sort jobs",
      "queue_name" : "dedicated",
      "job_classes" : [
        {
          "class_name" : "class_1",
          "class_weight" : 1.0,

          "mtasks_avg" : 5,
          "mtasks_stddev" : 1,
          "rtasks_avg" : 5,
          "rtasks_stddev" : 1,

          "in_avg" : 1048500,
          "in_stddev" : 17466,
          "shuffle_avg" : 104085000,
          "shuffle_stddev" : 162666,
          "out_avg" : 10485700,
          "out_stddev" : 1876000,
          "dur_avg" : 600,
          "dur_stddev" : 60,

          "mtime_avg" : 3,
          "mtime_stddev" : 60,
          "rtime_avg" : 3,
          "rtime_stddev" : 6,

          "map_max_memory_avg" : 1024,
          "map_max_memory_stddev" : 0.001,
          "reduce_max_memory_avg" : 1024,
          "reduce_max_memory_stddev" : 0.001,
          "bytes_per_map_record" : 512,
          "bytes_per_shuffle_record" : 512,
          "bytes_per_reduce_record" : 1024,

          "chance_of_reservation" : 1.0,
          "deadline_factor_avg" : 10.0,
          "deadline_factor_stddev" : 0.001,
          "gang_size" : 1
        }
      ],
      "time_distribution" : [
        { "time" : 1, "jobs" : 100 },
        { "time" : 3600, "jobs" : 0 }
      ]
    }
  ]
}
{code}

Each parameter X is drawn from a *Normal* distribution with average X_avg and 
standard deviation X_stddev.
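For illustration, sampling a parameter from Normal(X_avg, X_stddev) can be sketched as follows (a hypothetical helper, not the actual Gridmix generator code); values are clamped at zero since task counts and durations cannot be negative:

```java
import java.util.Random;

// Hypothetical sketch of drawing a parameter such as mtasks from a
// Normal(avg, stddev) distribution, as the syn.json fields imply.
public class SynSampler {
    private final Random rand;

    public SynSampler(long seed) { this.rand = new Random(seed); }

    // Normal sample, clamped at zero so counts are never negative.
    public double sample(double avg, double stddev) {
        return Math.max(0.0, avg + stddev * rand.nextGaussian());
    }

    public static void main(String[] args) {
        SynSampler s = new SynSampler(2);  // rand_seed from the example
        // mtasks_avg = 5, mtasks_stddev = 1, as in class_1 above
        long maps = Math.round(s.sample(5, 1));
        System.out.println("sampled #maps = " + maps);
    }
}
```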

 Improve Gridmix (synthetic generator + reservation support)
 ---

 Key: YARN-2672
 URL: https://issues.apache.org/jira/browse/YARN-2672
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler, fairscheduler, resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: YARN-2672.patch


 This JIRA proposes an enhancement of Gridmix that contains:
 1) a synthetic generator to produce load based on distributions, without the 
 need for a trace;
 2) negotiation of reservations (to test YARN-1051).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-10-09 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166073#comment-14166073
 ] 

Eric Payne commented on YARN-2056:
--

Thank you very much for your suggestions and review comments, [~leftnoteasy].

I am still working through your suggestion for a new algorithm.

In the meantime, I have analyzed the use case you provided, and I'm pretty sure 
the current patch does what is expected.

{code}
total = 100
qA: used = 10, guaranteed = 10, pending = 100
qB: used = 25, guaranteed = 10, pending = 100 (non-preemptable)
qC: used = 0, guaranteed = 80, pending = 0

1. Prior to the first round, qB is removed from qAlloc, and its resources are 
removed from unassigned.
unassigned = 75
2. During the first round, qA is normalized to 0.11, and so is offered 8, and 
takes 8.
qA.idealassigned = 8
qC is removed from qAlloc
unassigned = 67
3. During the second round, qA is normalized to 1.0 and offered 67. However, 
since offer() also considers qB, it determines that qA needs only 17 more to 
reach the same level of over-capacity as qB, so offer() selects only 17 of the 
67. Since qA did not take the whole offer and qA's requests are still not 
satisfied, qA is not removed from qAlloc.
qA.idealAssigned = 25
unassigned = 50
wQdone = 17
4. Since qA is now the same amount of over-capacity as qB, qB is added back.
5. In the third round, both qA and qB are normalized at 0.5 and are both 
offered 25, which completes the assignments:
qA.idealAssigned = 50
qB.idealAssigned = 50
{code}
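As a quick sanity check, the arithmetic of the trace above can be replayed (values taken directly from the example; this is illustrative only, not the scheduler code):

```java
// Replays the numbers from the preemption trace above to confirm
// both queues end at idealAssigned = 50.
public class PreemptionTraceCheck {
    public static void main(String[] args) {
        int total = 100;
        int qbUsed = 25;                    // qB is non-preemptable
        int unassigned = total - qbUsed;    // step 1: 75
        int qaIdeal = 8;                    // step 2: qA takes 8
        unassigned -= qaIdeal;              // 67
        int catchUp = 17;                   // step 3: qA matches qB's over-capacity
        qaIdeal += catchUp;                 // 25
        unassigned -= catchUp;              // 50
        int half = unassigned / 2;          // step 5: split 25/25
        qaIdeal += half;                    // 50
        int qbIdeal = qbUsed + half;        // 50
        System.out.println("qA.idealAssigned=" + qaIdeal
            + " qB.idealAssigned=" + qbIdeal);
    }
}
```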

 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt


 We need to be able to disable preemption at individual queue level



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2502) [YARN-796] Changes in distributed shell to support specify labels

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166086#comment-14166086
 ] 

Hadoop QA commented on YARN-2502:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674062/YARN-2502-20141009.1.patch
  against trunk revision d8d628d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5351//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5351//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5351//console

This message is automatically generated.

 [YARN-796] Changes in distributed shell to support specify labels
 -

 Key: YARN-2502
 URL: https://issues.apache.org/jira/browse/YARN-2502
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2502-20141009.1.patch, YARN-2502.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2501) [YARN-796] Changes in AMRMClient to support labels

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166087#comment-14166087
 ] 

Hadoop QA commented on YARN-2501:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674060/YARN-2501-20141009.1.patch
  against trunk revision d8d628d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5352//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5352//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5352//console

This message is automatically generated.

 [YARN-796] Changes in AMRMClient to support labels
 --

 Key: YARN-2501
 URL: https://issues.apache.org/jira/browse/YARN-2501
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2501-20141009.1.patch, YARN-2501.patch


 Changes in AMRMClient to support labels



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166088#comment-14166088
 ] 

Zhijie Shen commented on YARN-2671:
---

+1. Will commit the patch.

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker
 Attachments: YARN-2617-20141009.1.patch


 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check does not find a resource 
 object on the context.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-2664) Improve RM webapp to expose info about reservations.

2014-10-09 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino reassigned YARN-2664:
--

Assignee: Carlo Curino

 Improve RM webapp to expose info about reservations.
 

 Key: YARN-2664
 URL: https://issues.apache.org/jira/browse/YARN-2664
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: PlannerPage_screenshot.pdf, YARN-2664.patch


 YARN-1051 provides a new functionality in the RM to ask for reservation on 
 resources. Exposing this through the webapp GUI is important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1990) Track time-to-allocation for different size containers

2014-10-09 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166099#comment-14166099
 ] 

Carlo Curino commented on YARN-1990:


YARN-1051 is committed. If someone tries it and finds issues with the 
time-to-allocation, we can re-hash this. Otherwise we can just close this in a 
while.

 Track time-to-allocation for different size containers 
 ---

 Key: YARN-1990
 URL: https://issues.apache.org/jira/browse/YARN-1990
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino

 Allocation of Large Containers are notoriously problematic, as smaller 
 containers can more easily grab resources. 
 The proposal for this JIRA is to maintain a map of container sizes, and 
 time-to-allocation, that can be used as:
 * general insight on cluster behavior, 
 * to inform the reservation-system, and allows us to account for delays in 
 allocation, so that the user reservation is respected regardless the size of 
 containers requested.
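The proposed bookkeeping could be sketched as follows (hypothetical and illustrative only; class and method names are not from the RM codebase): a map from power-of-two container-size buckets to average allocation latency.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of tracking average time-to-allocation per
// container-size bucket, as the proposal above describes.
public class AllocationLatencyTracker {
    // memory-size bucket (MB) -> {total latency ms, sample count}
    private final Map<Integer, long[]> stats = new TreeMap<>();

    public void record(int containerMemoryMb, long latencyMs) {
        int bucket = Integer.highestOneBit(Math.max(1, containerMemoryMb));
        long[] s = stats.computeIfAbsent(bucket, k -> new long[2]);
        s[0] += latencyMs;
        s[1] += 1;
    }

    // Average latency for the bucket containing this size, or -1 if unseen.
    public double avgLatencyMs(int containerMemoryMb) {
        int bucket = Integer.highestOneBit(Math.max(1, containerMemoryMb));
        long[] s = stats.get(bucket);
        return (s == null || s[1] == 0) ? -1 : (double) s[0] / s[1];
    }
}
```

The reservation system could consult such averages to pad the start time of reservations for large containers.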



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2673) Add retry for timeline client

2014-10-09 Thread Li Lu (JIRA)
Li Lu created YARN-2673:
---

 Summary: Add retry for timeline client
 Key: YARN-2673
 URL: https://issues.apache.org/jira/browse/YARN-2673
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


The timeline client currently does not handle the case where the server is 
down gracefully. Jobs from distributed shell may fail due to an ATS restart. 
We may need to add some retry mechanism to the client.
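A retry mechanism of the kind proposed could be sketched like this (a hypothetical helper with exponential backoff; TimelineClient does not currently expose such an API):

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of a retry wrapper for timeline-server calls,
// retrying with exponential backoff while the ATS is restarting.
public class TimelineRetry {
    public static <T> T withRetries(Callable<T> op, int maxRetries,
                                    long initialBackoffMs) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {           // e.g. ConnectException
                last = e;
                if (attempt < maxRetries) {
                    // back off: initialBackoffMs, 2x, 4x, ...
                    Thread.sleep(initialBackoffMs << attempt);
                }
            }
        }
        throw last;                           // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated call that fails twice (server down) then succeeds.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new java.net.ConnectException("ATS down");
            return "posted";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```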



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2664) Improve RM webapp to expose info about reservations.

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166103#comment-14166103
 ] 

Hadoop QA commented on YARN-2664:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674052/PlannerPage_screenshot.pdf
  against trunk revision e532ed8.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5354//console

This message is automatically generated.

 Improve RM webapp to expose info about reservations.
 

 Key: YARN-2664
 URL: https://issues.apache.org/jira/browse/YARN-2664
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: PlannerPage_screenshot.pdf, YARN-2664.patch


 YARN-1051 provides a new functionality in the RM to ask for reservation on 
 resources. Exposing this through the webapp GUI is important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2617) NM does not need to send finished container whose APP is not running to RM

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166104#comment-14166104
 ] 

Hudson commented on YARN-2617:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6230 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6230/])
YARN-2617. Fixed ApplicationSubmissionContext to still set resource for 
backward compatibility. Contributed by Wangda Tan. (zjshen: rev 
e532ed8faa8db4b008a5b8d3f82b48a1b314fa6c)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java


 NM does not need to send finished container whose APP is not running to RM
 --

 Key: YARN-2617
 URL: https://issues.apache.org/jira/browse/YARN-2617
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Jun Gong
Assignee: Jun Gong
 Fix For: 2.6.0

 Attachments: YARN-2617.2.patch, YARN-2617.3.patch, YARN-2617.4.patch, 
 YARN-2617.5.patch, YARN-2617.5.patch, YARN-2617.5.patch, YARN-2617.6.patch, 
 YARN-2617.patch


 We ([~chenchun]) were testing RM work-preserving restart and found the 
 following logs when running a simple MapReduce PI job. The NM continuously 
 reported completed containers whose application had already finished, even 
 after the AM had finished.
 {code}
 2014-09-26 17:00:42,228 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
 Null container completed...
 2014-09-26 17:00:42,228 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
 Null container completed...
 2014-09-26 17:00:43,230 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
 Null container completed...
 2014-09-26 17:00:43,230 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
 Null container completed...
 2014-09-26 17:00:44,233 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
 Null container completed...
 2014-09-26 17:00:44,233 INFO 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
 Null container completed...
 {code}
 In the patch for YARN-1372, ApplicationImpl on the NM should guarantee to 
 clean up already-completed applications. However, it only removes the appId 
 from 'app.context.getApplications()' when ApplicationImpl receives the event 
 'ApplicationEventType.APPLICATION_LOG_HANDLING_FINISHED'; the NM might not 
 receive this event for a long time, or might never receive it.
 * For NonAggregatingLogHandler, it waits for 
 YarnConfiguration.NM_LOG_RETAIN_SECONDS (3 * 60 * 60 sec by default) before 
 it is scheduled to delete the application logs and send the event.
 * For LogAggregationService, it might fail (e.g. if the user does not have 
 HDFS write permission), in which case it will not send the event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2672) Improve Gridmix (synthetic generator + reservation support)

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166111#comment-14166111
 ] 

Hadoop QA commented on YARN-2672:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12674061/YARN-2672.patch
  against trunk revision e532ed8.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5353//console

This message is automatically generated.

 Improve Gridmix (synthetic generator + reservation support)
 ---

 Key: YARN-2672
 URL: https://issues.apache.org/jira/browse/YARN-2672
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler, fairscheduler, resourcemanager
Reporter: Carlo Curino
Assignee: Carlo Curino
 Attachments: YARN-2672.patch


 This JIRA proposes an enhancement of Gridmix that contains:
 1) a synthetic generator to produce load based on distributions, without the 
 need for a trace;
 2) negotiation of reservations (to test YARN-1051).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2673) Add retry for timeline client

2014-10-09 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-2673:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-1530

 Add retry for timeline client
 -

 Key: YARN-2673
 URL: https://issues.apache.org/jira/browse/YARN-2673
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu

 The timeline client currently does not handle the case where the server is 
 down gracefully. Jobs from distributed shell may fail due to an ATS restart. 
 We may need to add some retry mechanism to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2494) [YARN-796] Node label manager API and storage implementations

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166135#comment-14166135
 ] 

Hadoop QA commented on YARN-2494:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674032/YARN-2494.20141009-1.patch
  against trunk revision d8d628d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerQueueACLs
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication
  
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5350//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5350//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5350//console

This message is automatically generated.

 [YARN-796] Node label manager API and storage implementations
 -

 Key: YARN-2494
 URL: https://issues.apache.org/jira/browse/YARN-2494
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2494.20141009-1.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch


 This JIRA includes the APIs and storage implementations of the node label 
 manager. NodeLabelManager is an abstract class used to manage labels of nodes 
 in the cluster. It has APIs to query/modify:
 - Nodes according to a given label
 - Labels according to a given hostname
 - Add/remove labels
 - Set labels of nodes in the cluster
 - Persist/recover changes of labels/labels-on-nodes to/from storage
 It also has two implementations to store modifications:
 - Memory-based storage: it does not persist changes, so all labels are lost 
 when the RM restarts
 - FileSystem-based storage: it persists to and recovers from a FileSystem 
 (such as HDFS), so all labels and labels-on-nodes are recovered upon RM restart
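The two storage strategies described above could be sketched like this. The interface and class names are hypothetical stand-ins, not the actual YARN-2494 API:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of a pluggable node-label store; hypothetical names.
interface NodeLabelStore {
  void setLabelsOnNode(String host, Set<String> labels);
  Set<String> getLabelsOnNode(String host);
}

// Memory-based storage: mutations live only in this map, so all labels are
// lost when the process (here, the RM) restarts. A FileSystem-based variant
// would additionally write each mutation to an edit log on HDFS and replay
// it on recovery.
class MemoryNodeLabelStore implements NodeLabelStore {
  private final Map<String, Set<String>> labelsByHost = new HashMap<>();

  public void setLabelsOnNode(String host, Set<String> labels) {
    labelsByHost.put(host, new HashSet<>(labels));
  }

  public Set<String> getLabelsOnNode(String host) {
    return labelsByHost.getOrDefault(host, Collections.emptySet());
  }
}
```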



--


[jira] [Updated] (YARN-2662) TestCgroupsLCEResourcesHandler leaks file descriptors.

2014-10-09 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated YARN-2662:
--
Hadoop Flags: Reviewed

+1 patch looks good.

 TestCgroupsLCEResourcesHandler leaks file descriptors.
 --

 Key: YARN-2662
 URL: https://issues.apache.org/jira/browse/YARN-2662
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: YARN-2662.1.patch


 {{TestCgroupsLCEResourcesHandler}} includes tests that write and read values 
 from the various cgroups files.  After the tests read from a file, they do 
 not close it.
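The usual fix for this kind of leak is try-with-resources, which closes the descriptor even when the read throws. A minimal sketch, assuming a value file like those the test writes; the class and helper names are illustrative, not the actual test code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of reading a cgroups-style single-value file without leaking the
// file descriptor; names are illustrative, not the real test's helpers.
class CgroupReadSketch {
  static String readFirstLine(Path file) throws IOException {
    // try-with-resources closes the reader (and the underlying descriptor)
    // even if readLine() throws, which is exactly what the leaking tests
    // were missing.
    try (BufferedReader reader = Files.newBufferedReader(file)) {
      return reader.readLine();
    }
  }
}
```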



--


[jira] [Commented] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166194#comment-14166194
 ] 

Wangda Tan commented on YARN-2671:
--

Thanks [~zjshen] for the review and commit!

 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2617-20141009.1.patch


 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.
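The failure mode can be illustrated with a minimal sketch; all types below are hypothetical stand-ins for the real YARN classes, and the fix shown is simply to fall back to the request's capability when the context-level resource is absent:

```java
// Minimal illustration: after the refactoring, the capability lives on the
// ResourceRequest, so a sanity check that still reads it from the submission
// context sees null. Hypothetical stand-in types, not the real YARN classes.
class ResourceRequestSketch {
  static class Resource {
    final int memoryMb;
    Resource(int memoryMb) { this.memoryMb = memoryMb; }
  }
  static class ResourceRequest { Resource capability; }
  static class SubmissionContext { Resource resource; ResourceRequest amRequest; }

  // Fixed validation: fall back to the AM request's capability when the
  // context-level resource is absent, instead of dereferencing null.
  static Resource resolveResource(SubmissionContext ctx) {
    if (ctx.resource != null) {
      return ctx.resource;
    }
    return ctx.amRequest == null ? null : ctx.amRequest.capability;
  }
}
```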



--


[jira] [Updated] (YARN-2494) [YARN-796] Node label manager API and storage implementations

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2494:
-
Attachment: YARN-2494.20141009-2.patch

The failures should already be resolved by YARN-2671; resubmitting the patch to 
kick Jenkins.

 [YARN-796] Node label manager API and storage implementations
 -

 Key: YARN-2494
 URL: https://issues.apache.org/jira/browse/YARN-2494
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2494.20141009-1.patch, YARN-2494.20141009-2.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch


 This JIRA includes the APIs and storage implementations of the node label 
 manager. NodeLabelManager is an abstract class used to manage labels of nodes 
 in the cluster. It has APIs to query/modify:
 - Nodes according to a given label
 - Labels according to a given hostname
 - Add/remove labels
 - Set labels of nodes in the cluster
 - Persist/recover changes of labels/labels-on-nodes to/from storage
 It also has two implementations to store modifications:
 - Memory-based storage: it does not persist changes, so all labels are lost 
 when the RM restarts
 - FileSystem-based storage: it persists to and recovers from a FileSystem 
 (such as HDFS), so all labels and labels-on-nodes are recovered upon RM restart



--


[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-10-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166207#comment-14166207
 ] 

Wangda Tan commented on YARN-2056:
--

Hi [~eepayne],
Thanks for your explanation, I missed the {{offer}} changes before. I think it 
works, but I suggest making the algorithm more straightforward. Looking 
forward to the next patch :)

Thanks,
Wangda

 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt


 We need to be able to disable preemption at individual queue level



--


[jira] [Created] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2014-10-09 Thread Chun Chen (JIRA)
Chun Chen created YARN-2674:
---

 Summary: Distributed shell AM may re-launch containers if RM work 
preserving restart happens
 Key: YARN-2674
 URL: https://issues.apache.org/jira/browse/YARN-2674
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chun Chen


Currently, if an RM work-preserving restart happens while distributed shell is 
running, the distributed shell AM may re-launch all the containers, including 
new/running/completed ones. We must make sure it won't re-launch the 
running/completed containers.



--


[jira] [Updated] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2014-10-09 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated YARN-2674:

Attachment: YARN-2674.1.patch

 Distributed shell AM may re-launch containers if RM work preserving restart 
 happens
 ---

 Key: YARN-2674
 URL: https://issues.apache.org/jira/browse/YARN-2674
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Chun Chen
 Attachments: YARN-2674.1.patch


 Currently, if an RM work-preserving restart happens while distributed shell is 
 running, the distributed shell AM may re-launch all the containers, including 
 new/running/completed ones. We must make sure it won't re-launch the 
 running/completed containers.



--


[jira] [Updated] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2014-10-09 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated YARN-2674:

Description: 
Currently, if an RM work-preserving restart happens while distributed shell is 
running, the distributed shell AM may re-launch all the containers, including 
new/running/completed ones. We must make sure it won't re-launch the 
running/completed containers.
We need to remove allocated containers from AMRMClientImpl#remoteRequestsTable 
once the AM receives them from the RM. 

  was:Currently, if RM work preserving restart happens while distributed shell 
is running, distribute shell AM may re-launch all the containers, including 
new/running/complete. We must make sure it won't re-launch the running/complete 
containers.


 Distributed shell AM may re-launch containers if RM work preserving restart 
 happens
 ---

 Key: YARN-2674
 URL: https://issues.apache.org/jira/browse/YARN-2674
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Chun Chen
 Attachments: YARN-2674.1.patch


 Currently, if an RM work-preserving restart happens while distributed shell is 
 running, the distributed shell AM may re-launch all the containers, including 
 new/running/completed ones. We must make sure it won't re-launch the 
 running/completed containers.
 We need to remove allocated containers from 
 AMRMClientImpl#remoteRequestsTable once the AM receives them from the RM. 
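The proposed bookkeeping can be sketched as follows. The class and its flat ask table are a deliberate simplification of the real nested AMRMClientImpl#remoteRequestsTable, not its actual structure:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: decrement/remove outstanding asks once containers are
// allocated, so that a resynced RM does not satisfy them a second time.
// Names and the flat priority->count table are illustrative simplifications.
class AskTableSketch {
  // Outstanding container asks keyed by priority.
  private final Map<Integer, Integer> outstandingAsks = new HashMap<>();

  void addAsk(int priority, int count) {
    outstandingAsks.merge(priority, count, Integer::sum);
  }

  // Called when the AM receives allocated containers from the RM; removing
  // the entry entirely once it reaches zero mirrors the proposed fix.
  void onContainersAllocated(int priority, int allocated) {
    outstandingAsks.computeIfPresent(priority,
        (p, n) -> n - allocated <= 0 ? null : n - allocated);
  }

  int pending(int priority) {
    return outstandingAsks.getOrDefault(priority, 0);
  }
}
```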



--


[jira] [Updated] (YARN-2501) [YARN-796] Changes in AMRMClient to support labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2501:
-
Attachment: YARN-2501-20141009.2.patch

I found that my last patch didn't update anything; uploaded a new patch.

 [YARN-796] Changes in AMRMClient to support labels
 --

 Key: YARN-2501
 URL: https://issues.apache.org/jira/browse/YARN-2501
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2501-20141009.1.patch, YARN-2501-20141009.2.patch, 
 YARN-2501.patch


 Changes in AMRMClient to support labels



--


[jira] [Updated] (YARN-2502) [YARN-796] Changes in distributed shell to support specify labels

2014-10-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2502:
-
Attachment: YARN-2502-20141009.2.patch

I found that my last patch didn't update anything; this is a real update.

 [YARN-796] Changes in distributed shell to support specify labels
 -

 Key: YARN-2502
 URL: https://issues.apache.org/jira/browse/YARN-2502
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2502-20141009.1.patch, YARN-2502-20141009.2.patch, 
 YARN-2502.patch






--


[jira] [Commented] (YARN-2494) [YARN-796] Node label manager API and storage implementations

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166358#comment-14166358
 ] 

Hadoop QA commented on YARN-2494:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674087/YARN-2494.20141009-2.patch
  against trunk revision cbd21fd.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5355//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5355//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5355//console

This message is automatically generated.

 [YARN-796] Node label manager API and storage implementations
 -

 Key: YARN-2494
 URL: https://issues.apache.org/jira/browse/YARN-2494
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2494.20141009-1.patch, YARN-2494.20141009-2.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
 YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch


 This JIRA includes the APIs and storage implementations of the node label 
 manager. NodeLabelManager is an abstract class used to manage labels of nodes 
 in the cluster. It has APIs to query/modify:
 - Nodes according to a given label
 - Labels according to a given hostname
 - Add/remove labels
 - Set labels of nodes in the cluster
 - Persist/recover changes of labels/labels-on-nodes to/from storage
 It also has two implementations to store modifications:
 - Memory-based storage: it does not persist changes, so all labels are lost 
 when the RM restarts
 - FileSystem-based storage: it persists to and recovers from a FileSystem 
 (such as HDFS), so all labels and labels-on-nodes are recovered upon RM restart



--


[jira] [Commented] (YARN-2671) ApplicationSubmissionContext change breaks the existing app submission

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166357#comment-14166357
 ] 

Hudson commented on YARN-2671:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6232 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6232/])
YARN-2671. Fix the Jira number in the change log. (zjshen: rev 
5b12df6587eb4f37d09c9ffc35a0ea59694df831)
* hadoop-yarn-project/CHANGES.txt


 ApplicationSubmissionContext change breaks the existing app submission
 --

 Key: YARN-2671
 URL: https://issues.apache.org/jira/browse/YARN-2671
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Zhijie Shen
Assignee: Wangda Tan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2617-20141009.1.patch


 After YARN-2493, app submission goes wrong with the following exception:
 {code}
 2014-10-09 15:50:35,774 WARN  [297524352@qtp-1314143300-2 - 
 /ws/v1/cluster/apps] webapp.GenericExceptionHandler 
 (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:194)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateResourceRequest(RMAppManager.java:390)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:346)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:273)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:570)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:896)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices$2.run(RMWebServices.java:1)
 {code}
 This is because the resource is put into the ResourceRequest of the 
 ApplicationSubmissionContext rather than directly into the 
 ApplicationSubmissionContext, so the sanity check cannot get the resource 
 object from the context.



--


[jira] [Updated] (YARN-2631) Modify DistributedShell to enable LogAggregationContext

2014-10-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-2631:

Attachment: YARN-2631.1.1.patch

 Modify DistributedShell to enable LogAggregationContext
 ---

 Key: YARN-2631
 URL: https://issues.apache.org/jira/browse/YARN-2631
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager, resourcemanager
Reporter: Xuan Gong
Assignee: Xuan Gong
 Attachments: YARN-2631.1.1.patch






--


[jira] [Commented] (YARN-2501) [YARN-796] Changes in AMRMClient to support labels

2014-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166383#comment-14166383
 ] 

Hadoop QA commented on YARN-2501:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674093/YARN-2501-20141009.2.patch
  against trunk revision 5b12df6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client:

  org.apache.hadoop.yarn.client.api.impl.TestAMRMClient

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5357//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5357//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5357//console

This message is automatically generated.

 [YARN-796] Changes in AMRMClient to support labels
 --

 Key: YARN-2501
 URL: https://issues.apache.org/jira/browse/YARN-2501
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-2501-20141009.1.patch, YARN-2501-20141009.2.patch, 
 YARN-2501.patch


 Changes in AMRMClient to support labels



--


[jira] [Commented] (YARN-2656) RM web services authentication filter should add support for proxy user

2014-10-09 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166414#comment-14166414
 ] 

Zhijie Shen commented on YARN-2656:
---

[~vvasudev], thanks for the patch. It looks good to me overall. Based on this 
patch I polished the code and made the following changes:

1. Kept the filter name RMAuthenticationFilter, since we don't change the 
class name.

2. Changed the proxyuser prefix to yarn.resourcemanager. The other 
DelegationTokenAuthenticationFilter use cases also take a component-specific 
prefix instead of the common hadoop one; the benefit is that different use 
cases can have their own individual proxyuser settings.

Uploaded a new patch accordingly.

One more thing: RMAuthenticationHandler seems to be unused, so we may want to 
discard it. Shall we do that separately?
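The prefix-based lookup that motivates change 2 can be sketched like this; the helper is illustrative, and only the yarn.resourcemanager proxyuser prefix itself comes from the comment above:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of why a per-component proxyuser prefix matters: keys
// are selected under "yarn.resourcemanager.proxyuser.*" rather than the
// common "hadoop.proxyuser.*", so each filter can carry its own settings.
// The helper and the property values below are illustrative assumptions.
class ProxyUserPrefixSketch {
  // Returns the entries whose keys start with prefix, with the prefix
  // stripped, leaving only the component-local setting names.
  static Map<String, String> filterWithPrefix(Map<String, String> conf,
                                              String prefix) {
    Map<String, String> out = new HashMap<>();
    for (Map.Entry<String, String> e : conf.entrySet()) {
      if (e.getKey().startsWith(prefix)) {
        out.put(e.getKey().substring(prefix.length()), e.getValue());
      }
    }
    return out;
  }
}
```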

 RM web services authentication filter should add support for proxy user
 ---

 Key: YARN-2656
 URL: https://issues.apache.org/jira/browse/YARN-2656
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2656.0.patch, apache-yarn-2656.1.patch, 
 apache-yarn-2656.2.patch


 The DelegationTokenAuthenticationFilter adds support for doAs functionality. 
 The RMAuthenticationFilter should expose this as well.



--


[jira] [Updated] (YARN-2656) RM web services authentication filter should add support for proxy user

2014-10-09 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated YARN-2656:
--
Attachment: YARN-2656.3.patch

 RM web services authentication filter should add support for proxy user
 ---

 Key: YARN-2656
 URL: https://issues.apache.org/jira/browse/YARN-2656
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: YARN-2656.3.patch, apache-yarn-2656.0.patch, 
 apache-yarn-2656.1.patch, apache-yarn-2656.2.patch


 The DelegationTokenAuthenticationFilter adds support for doAs functionality. 
 The RMAuthenticationFilter should expose this as well.



--


[jira] [Commented] (YARN-2662) TestCgroupsLCEResourcesHandler leaks file descriptors.

2014-10-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14166429#comment-14166429
 ] 

Hudson commented on YARN-2662:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6235 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6235/])
YARN-2662. TestCgroupsLCEResourcesHandler leaks file descriptors. Contributed 
by Chris Nauroth. (cnauroth: rev d3afd730acfa380ab5032be5ee296c5d73744518)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java
* hadoop-yarn-project/CHANGES.txt


 TestCgroupsLCEResourcesHandler leaks file descriptors.
 --

 Key: YARN-2662
 URL: https://issues.apache.org/jira/browse/YARN-2662
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 2.6.0

 Attachments: YARN-2662.1.patch


 {{TestCgroupsLCEResourcesHandler}} includes tests that write and read values 
 from the various cgroups files.  After the tests read from a file, they do 
 not close it.



--