[jira] [Commented] (YARN-3275) CapacityScheduler: Preemption happening on non-preemptable queues

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351519#comment-14351519
 ] 

Hudson commented on YARN-3275:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #125 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/125/])
YARN-3275. CapacityScheduler: Preemption happening on non-preemptable queues. 
Contributed by Eric Payne (jlowe: rev 27e8ea820fab8dce59f4db9814e73bd60c1d4ef1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt


 CapacityScheduler: Preemption happening on non-preemptable queues
 -

 Key: YARN-3275
 URL: https://issues.apache.org/jira/browse/YARN-3275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne
  Labels: capacity-scheduler
 Fix For: 2.7.0

 Attachments: YARN-3275.v1.txt, YARN-3275.v2.txt


 YARN-2056 introduced the ability to turn preemption on and off at the queue 
 level. In cases where a queue goes over its absolute max capacity (YARN-3243, 
 for example), containers can be preempted from that queue, even though the 
 queue is marked as non-preemptable.
 We are using this feature in large, busy clusters and seeing this behavior.
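 
 For reference, a minimal sketch of the YARN-2056 queue-level switch this bug 
 concerns (property name per the CapacityScheduler documentation; the queue 
 path "root.prod" is invented for illustration):
 {code}
 // Sketch only: marking one CapacityScheduler queue non-preemptable. In
 // practice this property is set in capacity-scheduler.xml.
 import org.apache.hadoop.conf.Configuration;
 
 public class DisablePreemptionSketch {
   public static void main(String[] args) {
     Configuration csConf = new Configuration();
     csConf.setBoolean("yarn.scheduler.capacity.root.prod.disable_preemption",
         true);
     // The bug: containers could still be preempted from such a queue once it
     // went over its absolute max capacity.
   }
 }
 {code}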



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2190) Add CPU and memory limit options to the default container executor for Windows containers

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351521#comment-14351521
 ] 

Hudson commented on YARN-2190:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #125 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/125/])
YARN-2190. Added CPU and memory limit options to the default container executor 
for Windows containers. Contributed by Chuan Liu (jianhe: rev 
21101c01f242439ec8ec40fb3a9ab1991ae0adc7)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt
* BUILDING.txt
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props


 Add CPU and memory limit options to the default container executor for 
 Windows containers
 -

 Key: YARN-2190
 URL: https://issues.apache.org/jira/browse/YARN-2190
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 2.7.0

 Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
 YARN-2190.10.patch, YARN-2190.11.patch, YARN-2190.12.patch, 
 YARN-2190.13.patch, YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, 
 YARN-2190.5.patch, YARN-2190.6.patch, YARN-2190.7.patch, YARN-2190.8.patch, 
 YARN-2190.9.patch


 The YARN default container executor on Windows does not currently set 
 resource limits on containers; the memory limit is enforced by a separate 
 monitoring thread. The container implementation on Windows already uses Job 
 Objects, and the Windows 8 (or later) API allows CPU and memory limits to be 
 set on job objects. We want to add new options to the executor that set these 
 limits on the job objects, thus providing resource enforcement at the OS level.
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx
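 
 For context, a hedged sketch of enabling the OS-level enforcement this change 
 adds (property names as introduced to yarn-default.xml by this patch; verify 
 defaults against your release):
 {code}
 // Sketch only: turning on Windows job-object resource enforcement in the
 // NodeManager via the switches YARN-2190 adds.
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 
 public class WindowsContainerLimitsSketch {
   public static void main(String[] args) {
     YarnConfiguration conf = new YarnConfiguration();
     // Enforce the container memory limit through the Windows job object.
     conf.setBoolean("yarn.nodemanager.windows-container.memory-limit.enabled",
         true);
     // Enforce the container CPU limit through the Windows job object.
     conf.setBoolean("yarn.nodemanager.windows-container.cpu-limit.enabled",
         true);
   }
 }
 {code}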



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3227) Timeline renew delegation token fails when RM user's TGT is expired

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351518#comment-14351518
 ] 

Hudson commented on YARN-3227:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #125 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/125/])
YARN-3227. Timeline renew delegation token fails when RM user's TGT is expired 
(xgong: rev d1abc5d4fc00bb1b226066684556ba16ace71744)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* hadoop-yarn-project/CHANGES.txt


 Timeline renew delegation token fails when RM user's TGT is expired
 ---

 Key: YARN-3227
 URL: https://issues.apache.org/jira/browse/YARN-3227
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Zhijie Shen
Priority: Critical
 Fix For: 2.7.0

 Attachments: YARN-3227.1.patch, YARN-3227.test.patch


 When the RM user's Kerberos TGT is expired, the RM's renew-delegation-token 
 operation fails as part of job submission. The expected behavior is that the 
 RM will re-login to get a new TGT.
 {quote}
 2015-02-06 18:54:05,617 [DelegationTokenRenewer #25954] WARN
 security.DelegationTokenRenewer: Unable to add the application to the
 delegation token renewer.
 java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN,
 Service: timelineserver.example.com:4080, Ident: (owner=user,
 renewer=rmuser, realUser=oozie, issueDate=1423248845528,
 maxDate=1423853645528, sequenceNumber=9716, masterKeyId=9)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:443)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:77)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:808)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:789)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: HTTP status [401], message [Unauthorized]
 at
 org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:286)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:211)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:374)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:360)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$4.run(TimelineClientImpl.java:429)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:161)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:444)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:378)
 at
 org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier$Renewer.renew(TimelineDelegationTokenIdentifier.java:81)
 at org.apache.hadoop.security.token.Token.renew(Token.java:377)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:532)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:529)
 {quote}
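 
 For reference, a minimal sketch of the re-login pattern the fix implies (a 
 hedged illustration, not the literal patch): check the login user's TGT and 
 re-login from the keytab before attempting the renewal.
 {code}
 // Sketch only: refresh a stale TGT before renewing, so an expired ticket
 // does not surface as an HTTP 401 from the timeline server.
 import java.io.IOException;
 import org.apache.hadoop.security.UserGroupInformation;
 
 public class RenewWithReloginSketch {
   static void ensureFreshTgt() throws IOException {
     UserGroupInformation ugi = UserGroupInformation.getLoginUser();
     // No-op unless the user logged in from a keytab and the TGT is stale.
     ugi.checkTGTAndReloginFromKeytab();
   }
 }
 {code}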



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1621) Add CLI to list rows of task attempt ID, container ID, host of container, state of container

2015-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351586#comment-14351586
 ] 

Hadoop QA commented on YARN-1621:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12703233/YARN-1621.5.patch
  against trunk revision 608ebd5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 5 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.client.api.impl.TestAMRMClient

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6888//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/6888//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6888//console

This message is automatically generated.

 Add CLI to list rows of task attempt ID, container ID, host of container, 
 state of container
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
Assignee: Bartosz Ługowski
 Attachments: YARN-1621.1.patch, YARN-1621.2.patch, YARN-1621.3.patch, 
 YARN-1621.4.patch, YARN-1621.5.patch


 As more applications move to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, host of container, and state of container. 
 Today, if a YARN application running in a container hangs, there is no way to 
 find out more because a user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
  
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState <state>]
 where containerState is an optional filter that lists only containers in the 
 given state; the state can be running/succeeded/killed/failed/all.
 A user can specify more than one container state at once, e.g. KILLED,FAILED.
 Output columns: task attempt ID, container ID, host of container, state of container.
 {code}
 The CLI should work with both running and completed applications. If a 
 container runs many task attempts, all attempts should be shown; that will 
 likely be the case for Tez container-reuse applications.
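 
 To illustrate, a hypothetical invocation and output under the proposed syntax 
 (all IDs and hosts are invented):
 {code:title=example}
 $ yarn application -list-containers -applicationId application_1425000000000_0001 -containerState RUNNING,FAILED
 attempt_1425000000000_0001_m_000000_0  container_1425000000000_0001_01_000002  host1.example.com:45454  RUNNING
 attempt_1425000000000_0001_m_000001_0  container_1425000000000_0001_01_000003  host2.example.com:45454  FAILED
 {code}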



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3243) CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits.

2015-03-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-3243:
-

Assignee: Jian He  (was: Wangda Tan)

 CapacityScheduler should pass headroom from parent to children to make sure 
 ParentQueue obey its capacity limits.
 -

 Key: YARN-3243
 URL: https://issues.apache.org/jira/browse/YARN-3243
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Jian He
 Attachments: YARN-3243.1.patch


 The CapacityScheduler currently has trouble making sure a ParentQueue always 
 obeys its capacity limits, for example:
 1) When allocating a container under a parent queue, it only checks 
 parentQueue.usage < parentQueue.max. If a leaf queue allocates a container of 
 size > (parentQueue.max - parentQueue.usage), the parent queue can exceed its 
 max resource limit, as in the following example:
 {code}
           A    (usage=54, max=55)
          / \
        A1   A2
 (usage=53,  (usage=1,
  max=53)     max=55)
 {code}
 Queue A2 is able to allocate a container since its usage < max, but if it 
 does, A's usage can exceed A.max.
 2) During the continuous reservation check, a parent queue only tells its 
 children "you need to unreserve *some* resource so that I stay below my 
 maximum resource"; it does not say how much needs to be unreserved. This can 
 also lead to a parent queue exceeding its configured maximum capacity.
 With YARN-3099/YARN-3124, we now have a {{ResourceUsage}} object in each 
 queue, so *here is my proposal* (see the sketch after this list):
 - ParentQueue will set its children's ResourceUsage.headroom, which means the 
 *maximum resource each child is allowed to allocate*.
 - ParentQueue will set each child's headroom to min(qA.headroom, qA.max - 
 qA.used), where qA is the parent. Since qA.headroom is in turn set by qA's 
 parent, this enforces the capacities of qA's ancestors as well.
 - {{needToUnReserve}} is no longer necessary; instead, a child can compute 
 exactly how much resource must be unreserved to stay within its parent's 
 limit.
 - Moreover, with this, YARN-3026 can draw a clear boundary between LeafQueue 
 and FiCaSchedulerApp, and headroom will consider user limits, etc.
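 
 A hedged Java sketch of the proposed headroom propagation (class and method 
 names are invented for illustration; {{Resources.min}} and 
 {{Resources.subtract}} are existing utilities):
 {code}
 // Sketch only: a child's headroom is capped both by the parent's own
 // headroom (set by the grandparent) and by the parent's remaining room
 // under its configured maximum.
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
 public class HeadroomSketch {
   // Hypothetical helper: headroom a parent queue qA hands to each child.
   static Resource childHeadroom(ResourceCalculator rc, Resource clusterResource,
       Resource qaHeadroom, Resource qaMax, Resource qaUsed) {
     // min(qA.headroom, qA.max - qA.used): ancestors' limits propagate too.
     return Resources.min(rc, clusterResource,
         qaHeadroom, Resources.subtract(qaMax, qaUsed));
   }
 }
 {code}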



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3275) CapacityScheduler: Preemption happening on non-preemptable queues

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351639#comment-14351639
 ] 

Hudson commented on YARN-3275:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2075 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2075/])
YARN-3275. CapacityScheduler: Preemption happening on non-preemptable queues. 
Contributed by Eric Payne (jlowe: rev 27e8ea820fab8dce59f4db9814e73bd60c1d4ef1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java


 CapacityScheduler: Preemption happening on non-preemptable queues
 -

 Key: YARN-3275
 URL: https://issues.apache.org/jira/browse/YARN-3275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne
  Labels: capacity-scheduler
 Fix For: 2.7.0

 Attachments: YARN-3275.v1.txt, YARN-3275.v2.txt


 YARN-2056 introduced the ability to turn preemption on and off at the queue 
 level. In cases where a queue goes over its absolute max capacity (YARN-3243, 
 for example), containers can be preempted from that queue, even though the 
 queue is marked as non-preemptable.
 We are using this feature in large, busy clusters and seeing this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3227) Timeline renew delegation token fails when RM user's TGT is expired

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351638#comment-14351638
 ] 

Hudson commented on YARN-3227:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2075 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2075/])
YARN-3227. Timeline renew delegation token fails when RM user's TGT is expired 
(xgong: rev d1abc5d4fc00bb1b226066684556ba16ace71744)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* hadoop-yarn-project/CHANGES.txt


 Timeline renew delegation token fails when RM user's TGT is expired
 ---

 Key: YARN-3227
 URL: https://issues.apache.org/jira/browse/YARN-3227
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Zhijie Shen
Priority: Critical
 Fix For: 2.7.0

 Attachments: YARN-3227.1.patch, YARN-3227.test.patch


 When the RM user's Kerberos TGT is expired, the RM's renew-delegation-token 
 operation fails as part of job submission. The expected behavior is that the 
 RM will re-login to get a new TGT.
 {quote}
 2015-02-06 18:54:05,617 [DelegationTokenRenewer #25954] WARN
 security.DelegationTokenRenewer: Unable to add the application to the
 delegation token renewer.
 java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN,
 Service: timelineserver.example.com:4080, Ident: (owner=user,
 renewer=rmuser, realUser=oozie, issueDate=1423248845528,
 maxDate=1423853645528, sequenceNumber=9716, masterKeyId=9)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:443)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:77)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:808)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:789)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: HTTP status [401], message [Unauthorized]
 at
 org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:286)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:211)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:374)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:360)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$4.run(TimelineClientImpl.java:429)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:161)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:444)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:378)
 at
 org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier$Renewer.renew(TimelineDelegationTokenIdentifier.java:81)
 at org.apache.hadoop.security.token.Token.renew(Token.java:377)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:532)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:529)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2190) Add CPU and memory limit options to the default container executor for Windows containers

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351641#comment-14351641
 ] 

Hudson commented on YARN-2190:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2075 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2075/])
YARN-2190. Added CPU and memory limit options to the default container executor 
for Windows containers. Contributed by Chuan Liu (jianhe: rev 
21101c01f242439ec8ec40fb3a9ab1991ae0adc7)
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* BUILDING.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Add CPU and memory limit options to the default container executor for 
 Windows containers
 -

 Key: YARN-2190
 URL: https://issues.apache.org/jira/browse/YARN-2190
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 2.7.0

 Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
 YARN-2190.10.patch, YARN-2190.11.patch, YARN-2190.12.patch, 
 YARN-2190.13.patch, YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, 
 YARN-2190.5.patch, YARN-2190.6.patch, YARN-2190.7.patch, YARN-2190.8.patch, 
 YARN-2190.9.patch


 The YARN default container executor on Windows does not currently set 
 resource limits on containers; the memory limit is enforced by a separate 
 monitoring thread. The container implementation on Windows already uses Job 
 Objects, and the Windows 8 (or later) API allows CPU and memory limits to be 
 set on job objects. We want to add new options to the executor that set these 
 limits on the job objects, thus providing resource enforcement at the OS level.
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2190) Add CPU and memory limit options to the default container executor for Windows containers

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351529#comment-14351529
 ] 

Hudson commented on YARN-2190:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #859 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/859/])
YARN-2190. Added CPU and memory limit options to the default container executor 
for Windows containers. Contributed by Chuan Liu (jianhe: rev 
21101c01f242439ec8ec40fb3a9ab1991ae0adc7)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* hadoop-yarn-project/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* BUILDING.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java


 Add CPU and memory limit options to the default container executor for 
 Windows containers
 -

 Key: YARN-2190
 URL: https://issues.apache.org/jira/browse/YARN-2190
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 2.7.0

 Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
 YARN-2190.10.patch, YARN-2190.11.patch, YARN-2190.12.patch, 
 YARN-2190.13.patch, YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, 
 YARN-2190.5.patch, YARN-2190.6.patch, YARN-2190.7.patch, YARN-2190.8.patch, 
 YARN-2190.9.patch


 The YARN default container executor on Windows does not currently set 
 resource limits on containers; the memory limit is enforced by a separate 
 monitoring thread. The container implementation on Windows already uses Job 
 Objects, and the Windows 8 (or later) API allows CPU and memory limits to be 
 set on job objects. We want to add new options to the executor that set these 
 limits on the job objects, thus providing resource enforcement at the OS level.
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2190) Add CPU and memory limit options to the default container executor for Windows containers

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351610#comment-14351610
 ] 

Hudson commented on YARN-2190:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #116 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/116/])
YARN-2190. Added CPU and memory limit options to the default container executor 
for Windows containers. Contributed by Chuan Liu (jianhe: rev 
21101c01f242439ec8ec40fb3a9ab1991ae0adc7)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* BUILDING.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/task.c


 Add CPU and memory limit options to the default container executor for 
 Windows containers
 -

 Key: YARN-2190
 URL: https://issues.apache.org/jira/browse/YARN-2190
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 2.7.0

 Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
 YARN-2190.10.patch, YARN-2190.11.patch, YARN-2190.12.patch, 
 YARN-2190.13.patch, YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, 
 YARN-2190.5.patch, YARN-2190.6.patch, YARN-2190.7.patch, YARN-2190.8.patch, 
 YARN-2190.9.patch


 The YARN default container executor on Windows does not currently set 
 resource limits on containers; the memory limit is enforced by a separate 
 monitoring thread. The container implementation on Windows already uses Job 
 Objects, and the Windows 8 (or later) API allows CPU and memory limits to be 
 set on job objects. We want to add new options to the executor that set these 
 limits on the job objects, thus providing resource enforcement at the OS level.
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3275) CapacityScheduler: Preemption happening on non-preemptable queues

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351608#comment-14351608
 ] 

Hudson commented on YARN-3275:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #116 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/116/])
YARN-3275. CapacityScheduler: Preemption happening on non-preemptable queues. 
Contributed by Eric Payne (jlowe: rev 27e8ea820fab8dce59f4db9814e73bd60c1d4ef1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java


 CapacityScheduler: Preemption happening on non-preemptable queues
 -

 Key: YARN-3275
 URL: https://issues.apache.org/jira/browse/YARN-3275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne
  Labels: capacity-scheduler
 Fix For: 2.7.0

 Attachments: YARN-3275.v1.txt, YARN-3275.v2.txt


 YARN-2056 introduced the ability to turn preemption on and off at the queue 
 level. In cases where a queue goes over its absolute max capacity (YARN-3243, 
 for example), containers can be preempted from that queue, even though the 
 queue is marked as non-preemptable.
 We are using this feature in large, busy clusters and seeing this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3275) CapacityScheduler: Preemption happening on non-preemptable queues

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351597#comment-14351597
 ] 

Hudson commented on YARN-3275:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2057 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2057/])
YARN-3275. CapacityScheduler: Preemption happening on non-preemptable queues. 
Contributed by Eric Payne (jlowe: rev 27e8ea820fab8dce59f4db9814e73bd60c1d4ef1)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java


 CapacityScheduler: Preemption happening on non-preemptable queues
 -

 Key: YARN-3275
 URL: https://issues.apache.org/jira/browse/YARN-3275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne
  Labels: capacity-scheduler
 Fix For: 2.7.0

 Attachments: YARN-3275.v1.txt, YARN-3275.v2.txt


 YARN-2056 introduced the ability to turn preemption on and off at the queue 
 level. In cases where a queue goes over its absolute max capacity (YARN-3243, 
 for example), containers can be preempted from that queue, even though the 
 queue is marked as non-preemptable.
 We are using this feature in large, busy clusters and seeing this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2190) Add CPU and memory limit options to the default container executor for Windows containers

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351599#comment-14351599
 ] 

Hudson commented on YARN-2190:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2057 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2057/])
YARN-2190. Added CPU and memory limit options to the default container executor 
for Windows containers. Contributed by Chuan Liu (jianhe: rev 
21101c01f242439ec8ec40fb3a9ab1991ae0adc7)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* BUILDING.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java


 Add CPU and memory limit options to the default container executor for 
 Windows containers
 -

 Key: YARN-2190
 URL: https://issues.apache.org/jira/browse/YARN-2190
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 2.7.0

 Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
 YARN-2190.10.patch, YARN-2190.11.patch, YARN-2190.12.patch, 
 YARN-2190.13.patch, YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, 
 YARN-2190.5.patch, YARN-2190.6.patch, YARN-2190.7.patch, YARN-2190.8.patch, 
 YARN-2190.9.patch


 The YARN default container executor on Windows does not currently set 
 resource limits on containers; the memory limit is enforced by a separate 
 monitoring thread. The container implementation on Windows already uses Job 
 Objects, and the Windows 8 (or later) API allows CPU and memory limits to be 
 set on job objects. We want to add new options to the executor that set these 
 limits on the job objects, thus providing resource enforcement at the OS level.
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3227) Timeline renew delegation token fails when RM user's TGT is expired

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351596#comment-14351596
 ] 

Hudson commented on YARN-3227:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2057 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2057/])
YARN-3227. Timeline renew delegation token fails when RM user's TGT is expired 
(xgong: rev d1abc5d4fc00bb1b226066684556ba16ace71744)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* hadoop-yarn-project/CHANGES.txt


 Timeline renew delegation token fails when RM user's TGT is expired
 ---

 Key: YARN-3227
 URL: https://issues.apache.org/jira/browse/YARN-3227
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Zhijie Shen
Priority: Critical
 Fix For: 2.7.0

 Attachments: YARN-3227.1.patch, YARN-3227.test.patch


 When the RM user's Kerberos TGT is expired, the RM's renew-delegation-token 
 operation fails as part of job submission. The expected behavior is that the 
 RM will re-login to get a new TGT.
 {quote}
 2015-02-06 18:54:05,617 [DelegationTokenRenewer #25954] WARN
 security.DelegationTokenRenewer: Unable to add the application to the
 delegation token renewer.
 java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN,
 Service: timelineserver.example.com:4080, Ident: (owner=user,
 renewer=rmuser, realUser=oozie, issueDate=1423248845528,
 maxDate=1423853645528, sequenceNumber=9716, masterKeyId=9)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:443)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:77)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:808)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:789)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: HTTP status [401], message [Unauthorized]
 at
 org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:286)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:211)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:374)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:360)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$4.run(TimelineClientImpl.java:429)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:161)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:444)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:378)
 at
 org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier$Renewer.renew(TimelineDelegationTokenIdentifier.java:81)
 at org.apache.hadoop.security.token.Token.renew(Token.java:377)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:532)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:529)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3227) Timeline renew delegation token fails when RM user's TGT is expired

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351620#comment-14351620
 ] 

Hudson commented on YARN-3227:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #125 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/125/])
YARN-3227. Timeline renew delegation token fails when RM user's TGT is expired 
(xgong: rev d1abc5d4fc00bb1b226066684556ba16ace71744)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
* hadoop-yarn-project/CHANGES.txt


 Timeline renew delegation token fails when RM user's TGT is expired
 ---

 Key: YARN-3227
 URL: https://issues.apache.org/jira/browse/YARN-3227
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Zhijie Shen
Priority: Critical
 Fix For: 2.7.0

 Attachments: YARN-3227.1.patch, YARN-3227.test.patch


 When the RM user's Kerberos TGT is expired, the RM's renew-delegation-token 
 operation fails as part of job submission. The expected behavior is that the 
 RM will re-login to get a new TGT.
 {quote}
 2015-02-06 18:54:05,617 [DelegationTokenRenewer #25954] WARN
 security.DelegationTokenRenewer: Unable to add the application to the
 delegation token renewer.
 java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN,
 Service: timelineserver.example.com:4080, Ident: (owner=user,
 renewer=rmuser, realUser=oozie, issueDate=1423248845528,
 maxDate=1423853645528, sequenceNumber=9716, masterKeyId=9)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:443)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:77)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:808)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:789)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: HTTP status [401], message [Unauthorized]
 at
 org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:286)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:211)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:374)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:360)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$4.run(TimelineClientImpl.java:429)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:161)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:444)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:378)
 at
 org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier$Renewer.renew(TimelineDelegationTokenIdentifier.java:81)
 at org.apache.hadoop.security.token.Token.renew(Token.java:377)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:532)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:529)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3275) CapacityScheduler: Preemption happening on non-preemptable queues

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351621#comment-14351621
 ] 

Hudson commented on YARN-3275:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #125 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/125/])
YARN-3275. CapacityScheduler: Preemption happening on non-preemptable queues. 
Contributed by Eric Payne (jlowe: rev 27e8ea820fab8dce59f4db9814e73bd60c1d4ef1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java


 CapacityScheduler: Preemption happening on non-preemptable queues
 -

 Key: YARN-3275
 URL: https://issues.apache.org/jira/browse/YARN-3275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne
  Labels: capacity-scheduler
 Fix For: 2.7.0

 Attachments: YARN-3275.v1.txt, YARN-3275.v2.txt


 YARN-2056 introduced the ability to turn preemption on and off at the queue 
 level. In cases where a queue goes over its absolute max capacity (YARN-3243, 
 for example), containers can be preempted from that queue, even though the 
 queue is marked as non-preemptable.
 We are using this feature in large, busy clusters and seeing this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2190) Add CPU and memory limit options to the default container executor for Windows containers

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351623#comment-14351623
 ] 

Hudson commented on YARN-2190:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #125 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/125/])
YARN-2190. Added CPU and memory limit options to the default container executor 
for Windows containers. Contributed by Chuan Liu (jianhe: rev 
21101c01f242439ec8ec40fb3a9ab1991ae0adc7)
* BUILDING.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/winutils.vcxproj
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


 Add CPU and memory limit options to the default container executor for 
 Windows containers
 -

 Key: YARN-2190
 URL: https://issues.apache.org/jira/browse/YARN-2190
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: nodemanager
Reporter: Chuan Liu
Assignee: Chuan Liu
 Fix For: 2.7.0

 Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
 YARN-2190.10.patch, YARN-2190.11.patch, YARN-2190.12.patch, 
 YARN-2190.13.patch, YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, 
 YARN-2190.5.patch, YARN-2190.6.patch, YARN-2190.7.patch, YARN-2190.8.patch, 
 YARN-2190.9.patch


 The YARN default container executor on Windows does not currently set 
 resource limits on containers; the memory limit is enforced by a separate 
 monitoring thread. The container implementation on Windows already uses Job 
 Objects, and the Windows 8 (or later) API allows CPU and memory limits to be 
 set on job objects. We want to add new options to the executor that set these 
 limits on the job objects, thus providing resource enforcement at the OS level.
 http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3227) Timeline renew delegation token fails when RM user's TGT is expired

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351526#comment-14351526
 ] 

Hudson commented on YARN-3227:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #859 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/859/])
YARN-3227. Timeline renew delegation token fails when RM user's TGT is expired 
(xgong: rev d1abc5d4fc00bb1b226066684556ba16ace71744)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java


 Timeline renew delegation token fails when RM user's TGT is expired
 ---

 Key: YARN-3227
 URL: https://issues.apache.org/jira/browse/YARN-3227
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Eagles
Assignee: Zhijie Shen
Priority: Critical
 Fix For: 2.7.0

 Attachments: YARN-3227.1.patch, YARN-3227.test.patch


 When the RM user's Kerberos TGT is expired, the RM's renew-delegation-token 
 operation fails as part of job submission. The expected behavior is that the 
 RM will re-login to get a new TGT.
 {quote}
 2015-02-06 18:54:05,617 [DelegationTokenRenewer #25954] WARN
 security.DelegationTokenRenewer: Unable to add the application to the
 delegation token renewer.
 java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN,
 Service: timelineserver.example.com:4080, Ident: (owner=user,
 renewer=rmuser, realUser=oozie, issueDate=1423248845528,
 maxDate=1423853645528, sequenceNumber=9716, masterKeyId=9)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:443)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:77)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:808)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:789)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.io.IOException: HTTP status [401], message [Unauthorized]
 at
 org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:286)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:211)
 at
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:374)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:360)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$4.run(TimelineClientImpl.java:429)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:161)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:444)
 at
 org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:378)
 at
 org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier$Renewer.renew(TimelineDelegationTokenIdentifier.java:81)
 at org.apache.hadoop.security.token.Token.renew(Token.java:377)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:532)
 at
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:529)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3275) CapacityScheduler: Preemption happening on non-preemptable queues

2015-03-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351527#comment-14351527
 ] 

Hudson commented on YARN-3275:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #859 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/859/])
YARN-3275. CapacityScheduler: Preemption happening on non-preemptable queues. 
Contributed by Eric Payne (jlowe: rev 27e8ea820fab8dce59f4db9814e73bd60c1d4ef1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* hadoop-yarn-project/CHANGES.txt


 CapacityScheduler: Preemption happening on non-preemptable queues
 -

 Key: YARN-3275
 URL: https://issues.apache.org/jira/browse/YARN-3275
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Eric Payne
Assignee: Eric Payne
  Labels: capacity-scheduler
 Fix For: 2.7.0

 Attachments: YARN-3275.v1.txt, YARN-3275.v2.txt


 YARN-2056 introduced the ability to turn preemption on and off at the queue 
 level. In cases where a queue goes over its absolute max capacity (YARN-3243, 
 for example), containers can be preempted from that queue, even though the 
 queue is marked as non-preemptable.
 We are using this feature in large, busy clusters and seeing this behavior.
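 For reference, the per-queue switch from YARN-2056 that this fix makes effective 
 is a capacity-scheduler.xml property; a minimal example, with the queue path 
 {{root.prod}} purely illustrative:
 {code:title=capacity-scheduler.xml (queue name illustrative)}
 <property>
   <!-- Containers in root.prod should never be selected as preemption victims. -->
   <name>yarn.scheduler.capacity.root.prod.disable_preemption</name>
   <value>true</value>
 </property>
 {code}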



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1621) Add CLI to list rows of task attempt ID, container ID, host of container, state of container

2015-03-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bartosz Ługowski updated YARN-1621:
---
Attachment: YARN-1621.5.patch

 Add CLI to list rows of task attempt ID, container ID, host of container, 
 state of container
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
Assignee: Bartosz Ługowski
 Attachments: YARN-1621.1.patch, YARN-1621.2.patch, YARN-1621.3.patch, 
 YARN-1621.4.patch, YARN-1621.5.patch


 As more applications are moved to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, host of container, and state of container. Today, 
 if a YARN application running in a container hangs, there is no way to find out 
 more because the user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
  
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState <state>]
 where -containerState is an optional filter that lists only containers in the
 given state(s); the state can be RUNNING/SUCCEEDED/KILLED/FAILED/ALL, and more
 than one state can be given at once, e.g. KILLED,FAILED.
 Output columns: task attempt ID | container ID | host of container | state of container
 {code}
 The CLI should work for both running and completed applications. If a container 
 runs many task attempts, all attempts should be shown; that will likely be the 
 case for Tez container-reuse applications.
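 A hypothetical invocation showing the proposed columns (application ID, hosts, 
 and output are made up for illustration):
 {code:title=Illustrative usage (all values hypothetical)}
 $ yarn application -list-containers -applicationId application_1425500000000_0042 \
     -containerState RUNNING,FAILED
 Task Attempt ID                        Container ID                           Host               State
 attempt_1425500000000_0042_m_000001_0  container_1425500000000_0042_01_000002 node1.example.com  RUNNING
 attempt_1425500000000_0042_m_000002_0  container_1425500000000_0042_01_000003 node2.example.com  FAILED
 {code}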



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1621) Add CLI to list rows of task attempt ID, container ID, host of container, state of container

2015-03-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bartosz Ługowski updated YARN-1621:
---
Attachment: (was: YARN-1621.5.patch)

 Add CLI to list rows of task attempt ID, container ID, host of container, 
 state of container
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
Assignee: Bartosz Ługowski
 Attachments: YARN-1621.1.patch, YARN-1621.2.patch, YARN-1621.3.patch, 
 YARN-1621.4.patch


 As more applications are moved to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, host of container, and state of container. Today, 
 if a YARN application running in a container hangs, there is no way to find out 
 more because the user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
  
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState <state>]
 where -containerState is an optional filter that lists only containers in the
 given state(s); the state can be RUNNING/SUCCEEDED/KILLED/FAILED/ALL, and more
 than one state can be given at once, e.g. KILLED,FAILED.
 Output columns: task attempt ID | container ID | host of container | state of container
 {code}
 The CLI should work for both running and completed applications. If a container 
 runs many task attempts, all attempts should be shown; that will likely be the 
 case for Tez container-reuse applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3243) CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits.

2015-03-07 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-3243:
--
Assignee: Wangda Tan  (was: Jian He)

 CapacityScheduler should pass headroom from parent to children to make sure 
 ParentQueue obey its capacity limits.
 -

 Key: YARN-3243
 URL: https://issues.apache.org/jira/browse/YARN-3243
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: YARN-3243.1.patch


 CapacityScheduler currently has some issues ensuring that a ParentQueue always 
 obeys its capacity limits, for example:
 1) When allocating a container under a parent queue, it only checks 
 parentQueue.usage < parentQueue.max. If a leaf queue allocates a container of 
 size > (parentQueue.max - parentQueue.usage), the parent queue can exceed its 
 max resource limit, as in the following example:
 {code}
           A (usage=54, max=55)
          / \
         A1  A2
 A1: usage=53, max=53
 A2: usage=1,  max=55
 {code}
 Queue A2 is able to allocate a container since its usage < max, but if we do 
 that, A's usage can exceed A.max.
 2) During the continuous reservation check, a parent queue only tells its 
 children to unreserve *some* resource so that the parent drops below its 
 maximum resource, but it does not say how much needs to be unreserved. This 
 can also lead to a parent queue exceeding its configured maximum capacity.
 With YARN-3099/YARN-3124, we now have a {{ResourceUsage}} object in each queue; 
 *here is my proposal*:
 - ParentQueue will set its children's ResourceUsage.headroom, which means the 
 *maximum resource its children can allocate*.
 - ParentQueue will set each child's headroom to min(qA.headroom, qA.max - 
 qA.used), where qA is the parent. This ensures that qA's ancestors' capacities 
 are enforced as well, since qA.headroom is in turn set by qA's parent; see the 
 sketch after this list.
 - {{needToUnReserve}} is then unnecessary; instead, a child can compute how 
 much resource must be unreserved to stay within its parent's resource limit.
 - Moreover, with this, YARN-3026 will make a clear boundary between LeafQueue 
 and FiCaSchedulerApp; headroom will consider user-limit, etc.
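 A minimal sketch of that min() computation using the existing {{Resources}} 
 helpers ({{Resources.min}} and {{Resources.subtract}} exist in 
 {{org.apache.hadoop.yarn.util.resource.Resources}}; the wrapper class and 
 method are illustrative):
 {code:title=Sketch: child headroom computation (wrapper names illustrative)}
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;

 public class HeadroomSketch {
   /** Headroom a parent qA hands down to each of its children. */
   static Resource childHeadroom(ResourceCalculator rc, Resource clusterResource,
       Resource qaHeadroom, Resource qaMax, Resource qaUsed) {
     // A child may use at most what qA's own headroom (set by qA's parent)
     // allows, and no more than qA's remaining room under its max.
     return Resources.min(rc, clusterResource,
         qaHeadroom, Resources.subtract(qaMax, qaUsed));
   }
 }
 {code}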



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3296) yarn.nodemanager.container-monitor.process-tree.class is configurable but ResourceCalculatorProcessTree class is marked Private

2015-03-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351817#comment-14351817
 ] 

Hadoop QA commented on YARN-3296:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12703261/YARN-3296.2.patch
  against trunk revision 608ebd5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6889//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6889//console

This message is automatically generated.

 yarn.nodemanager.container-monitor.process-tree.class is configurable but 
 ResourceCalculatorProcessTree class is marked Private
 ---

 Key: YARN-3296
 URL: https://issues.apache.org/jira/browse/YARN-3296
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-3296.1.patch, YARN-3296.2.patch


 Given that someone can implement their custom plugin for resource monitoring 
 and configure the NM to use it, this class should be marked public.
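 For context, a rough sketch of how such a plugin plugs in: subclass 
 {{ResourceCalculatorProcessTree}} and point 
 {{yarn.nodemanager.container-monitor.process-tree.class}} at it in yarn-site.xml. 
 The class name below is made up, and the overridden method set is a plausible 
 subset rather than an exact contract:
 {code:title=Sketch: custom process-tree plugin (MyProcessTree is hypothetical)}
 import org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree;

 public class MyProcessTree extends ResourceCalculatorProcessTree {

   public MyProcessTree(String pid) {
     super(pid);                 // root process of the container's tree
   }

   @Override
   public void updateProcessTree() {
     // Refresh per-process stats from a custom source (e.g. cgroups).
   }

   @Override
   public String getProcessTreeDump() {
     return "";                  // human-readable dump, for diagnostics
   }

   @Override
   public long getCumulativeCpuTime() {
     return 0;                   // total CPU time of the tree, in ms
   }

   @Override
   public boolean checkPidPgrpidForMatch() {
     return true;                // whether pid is still the process-group leader
   }
 }
 {code}
 The NM instantiates whatever class that property names, which is exactly why 
 this issue argues the base class should be audience-Public.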



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-2133) Make entity Id specification in TestTimelineWebServices amenable for future test cases

2015-03-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved YARN-2133.
--
Resolution: Later

 Make entity Id specification in TestTimelineWebServices amenable for future 
 test cases
 --

 Key: YARN-2133
 URL: https://issues.apache.org/jira/browse/YARN-2133
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor

 Currently each test case in TestTimelineWebServices uses different entity Ids 
 / types.
 When a new test case is added, the developer has to go over the existing cases 
 and find an unused entity Id.
 Unique entity Ids could instead be generated by introducing an AtomicInteger 
 field in TestTimelineWebServices that is incremented at the beginning of each 
 test; a minimal sketch follows.
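 A minimal sketch of that proposal, assuming JUnit 4 conventions (field and 
 helper names are illustrative):
 {code:title=Sketch: unique entity Ids via AtomicInteger (names illustrative)}
 import java.util.concurrent.atomic.AtomicInteger;
 import org.junit.Before;

 public class TestTimelineWebServices {
   // Shared counter; every test method gets a fresh, never-reused value.
   private static final AtomicInteger ENTITY_SEQ = new AtomicInteger();

   private String entityId;

   @Before
   public void assignEntityId() {
     entityId = "entity_" + ENTITY_SEQ.incrementAndGet();
   }

   // Each test then uses entityId instead of a hand-picked constant.
 }
 {code}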



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3296) yarn.nodemanager.container-monitor.process-tree.class is configurable but ResourceCalculatorProcessTree class is marked Private

2015-03-07 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-3296:
--
Attachment: YARN-3296.2.patch

Made some functions non-abstract.

There is some inconsistency in return values, though: the older functions were 
expected to return 0, but cpuPercent returns -1 when the functionality is not 
available.
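A hedged illustration of the two conventions (method names follow the discussion 
above; exact signatures may differ by version):

{code:title=Differing "unavailable" sentinels (illustrative)}
// Older accessors were expected to return 0 when a value could not be measured:
public long getCumulativeRssmem() {
  return 0;   // "unavailable"
}

// The newer CPU accessor returns -1 instead:
public float getCpuUsagePercent() {
  return -1;  // "unavailable"
}
{code}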

 yarn.nodemanager.container-monitor.process-tree.class is configurable but 
 ResourceCalculatorProcessTree class is marked Private
 ---

 Key: YARN-3296
 URL: https://issues.apache.org/jira/browse/YARN-3296
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-3296.1.patch, YARN-3296.2.patch


 Given that someone can implement their custom plugin for resource monitoring 
 and configure the NM to use it, this class should be marked public.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3303) Expose UserInfo in RMWebService

2015-03-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned YARN-3303:
--

Assignee: Varun Saxena  (was: Brahma Reddy Battula)

 Expose UserInfo in RMWebService
 ---

 Key: YARN-3303
 URL: https://issues.apache.org/jira/browse/YARN-3303
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jian He
Assignee: Varun Saxena

 We already have the UserInfo class.   It's useful to expose that on the 
 RMWebService too. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3303) Expose UserInfo in RMWebService

2015-03-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned YARN-3303:
--

Assignee: Brahma Reddy Battula  (was: Varun Saxena)

 Expose UserInfo in RMWebService
 ---

 Key: YARN-3303
 URL: https://issues.apache.org/jira/browse/YARN-3303
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jian He
Assignee: Brahma Reddy Battula

 We already have the UserInfo class.   It's useful to expose that on the 
 RMWebService too. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3303) Expose UserInfo in RMWebService

2015-03-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-3303:
--

Assignee: Varun Saxena  (was: Brahma Reddy Battula)

 Expose UserInfo in RMWebService
 ---

 Key: YARN-3303
 URL: https://issues.apache.org/jira/browse/YARN-3303
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jian He
Assignee: Varun Saxena

 We already have the UserInfo class.   It's useful to expose that on the 
 RMWebService too. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3303) Expose UserInfo in RMWebService

2015-03-07 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned YARN-3303:
--

Assignee: Brahma Reddy Battula  (was: Varun Saxena)

 Expose UserInfo in RMWebService
 ---

 Key: YARN-3303
 URL: https://issues.apache.org/jira/browse/YARN-3303
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jian He
Assignee: Brahma Reddy Battula

 We already have the UserInfo class.   It's useful to expose that on the 
 RMWebService too. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3296) yarn.nodemanager.container-monitor.process-tree.class is configurable but ResourceCalculatorProcessTree class is marked Private

2015-03-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351856#comment-14351856
 ] 

Junping Du commented on YARN-3296:
--

Thanks [~hitesh] for updating the patch!
bq. There is some inconsistency in return values, though: the older functions 
were expected to return 0, but cpuPercent returns -1 when the functionality is 
not available.
I think the inconsistent default values here will confuse users. However, that 
shouldn't be in scope for this JIRA; I suggest we discuss in a separate JIRA 
whether we want to change the values (which would also be a headache for 
existing users).
The v2 patch looks good to me. +1. I will commit it later if there are no 
further comments from others.

 yarn.nodemanager.container-monitor.process-tree.class is configurable but 
 ResourceCalculatorProcessTree class is marked Private
 ---

 Key: YARN-3296
 URL: https://issues.apache.org/jira/browse/YARN-3296
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: YARN-3296.1.patch, YARN-3296.2.patch


 Given that someone can implement their custom plugin for resource monitoring 
 and configure the NM to use it, this class should be marked public.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)