[jira] [Comment Edited] (YARN-6728) Job will run slow when the performance of defaultFs degrades and the log-aggregation is enable.

2017-06-23 Thread zhengchenyu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16059056#comment-16059056
 ] 

zhengchenyu edited comment on YARN-6728 at 6/23/17 7:00 AM:


Our colleague [~maobaolong] suggested moving verifyAndCreateRemoteLogDir and the 
remote log dir mkdir into a daemon thread. This way, containers are no longer 
blocked by the defaultFs before they run. In our test, we found that the speed at 
which containers start running improved significantly.


was (Author: zhengchenyu):
Our colleague [~maobaolong] suggested moving verifyAndCreateRemoteLogDir and the 
remote log dir mkdir into a daemon thread. This way the container will not be 
blocked by the defaultFs.
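
For illustration, a minimal standalone sketch of the proposed approach (class and 
method names here are hypothetical, not the actual NodeManager code): the blocking 
defaultFs calls are handed off to a single daemon thread so the caller returns 
immediately.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RemoteLogDirInitializer {
  // Single daemon thread so defaultFs latency never blocks the caller.
  private final ExecutorService remoteFsExecutor =
      Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "remote-log-dir-init");
        t.setDaemon(true);
        return t;
      });

  /** Called from initApp(); returns immediately. */
  public void initAppLogDirAsync(String appId) {
    remoteFsExecutor.submit(() -> {
      verifyAndCreateRemoteLogDir();   // slow call against the defaultFs
      createAppRemoteLogDir(appId);    // mkdir of the app's remote log dir
    });
  }

  private void verifyAndCreateRemoteLogDir() { /* visits the defaultFs */ }

  private void createAppRemoteLogDir(String appId) { /* visits the defaultFs */ }
}
{code}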

> Job will run slow when the performance of defaultFs degrades and the 
> log-aggregation is enable. 
> 
>
> Key: YARN-6728
> URL: https://issues.apache.org/jira/browse/YARN-6728
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 2.7.1
> Environment: CentOS 7.1 hadoop-2.7.1
>Reporter: zhengchenyu
> Fix For: 2.9.0, 2.7.4
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, I found that many maps keep the "NEW" state for several 
> minutes. Here is the container log: 
> {code}
> [2017-06-13T18:21:23.068+08:00] [INFO] 
> containermanager.application.ApplicationImpl.transition(ApplicationImpl.java 
> 304) [AsyncDispatcher event handler] : Adding 
> container_1495632926847_2459604_01_11 to application 
> application_1495632926847_2459604
> [2017-06-13T18:23:08.715+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1137) 
> [AsyncDispatcher event handler] : Container 
> container_1495632926847_2459604_01_11 transitioned from NEW to LOCALIZING
> {code}
> Then I searched the log from 18:21:23.068 to 18:23:08.715 and found that some 
> dispatches of the AsyncDispatcher run slowly because they access the defaultFs. 
> Our cluster has grown to 4k nodes, so the pressure on the defaultFs has 
> increased. (Note: log-aggregation is enabled.)
> A container running in the nodemanager will invoke initApp(), which calls 
> verifyAndCreateRemoteLogDir and creates the remote log dir; these operations 
> access the defaultFs. So the container is stuck here, and the application runs 
> slowly.






[jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-06-23 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060485#comment-16060485
 ] 

Bibin A Chundatt commented on YARN-6708:


cc: [~jlowe].

> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6708.001.patch, YARN-6708.002.patch, 
> YARN-6708.003.patch
>
>
> Configure the umask as *027* for the nodemanager service user
> and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 
> 4 *private* dir localizations, the next directory will be *0/14*.
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*.
> The nodemanager user is not able to check whether the localization path exists.
> {{LocalResourcesTrackerImpl}}
> {code}
> case REQUEST:
>   if (rsrc != null && (!isResourcePresent(rsrc))) {
> LOG.info("Resource " + rsrc.getLocalPath()
> + " is missing, localizing it again");
> removeResource(req);
> rsrc = null;
>   }
>   if (null == rsrc) {
> rsrc = new LocalizedResource(req, dispatcher);
> localrsrc.put(req, rsrc);
>   }
>   break;
> {code}
> *isResourcePresent* will always return false, and the same resource will be 
> localized again under {{0}} with the next unique number.
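
As an illustration of the umask interaction (a standalone sketch, not the actual 
patch; all names below are hypothetical): a localization sub-directory can be 
created with an explicit 755 mode so the resulting permissions do not depend on 
the service user's umask.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class CacheDirCreator {
  /**
   * Creates a cache sub-directory with an explicit 755 mode so the result
   * does not depend on the process umask (e.g. 027).
   */
  public static Path createCacheSubDir(String base, String name)
      throws IOException {
    Set<PosixFilePermission> mode755 =
        PosixFilePermissions.fromString("rwxr-xr-x");
    Path dir = Paths.get(base, name);
    Files.createDirectories(dir);
    // createDirectories is subject to the umask, so reset the mode explicitly.
    Files.setPosixFilePermissions(dir, mode755);
    return dir;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(createCacheSubDir("/tmp/filecache-demo", "0"));
  }
}
{code}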






[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060493#comment-16060493
 ] 

Hudson commented on YARN-5892:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11914 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11914/])
YARN-5892. Support user-specific minimum user limit percentage in (sunilg: rev 
ca13b224b2feb9c44de861da9cbba8dd2a12cb35)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UserInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java


> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}






[jira] [Updated] (YARN-6714) RM crashed with IllegalStateException while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2017-06-23 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6714:
---
Attachment: YARN-6714.003.patch

Updated the patch, adding comments for the sanity check of the attemptId. Thanks 
[~sunilg] for your suggestion.

> RM crashed with IllegalStateException while handling APP_ATTEMPT_REMOVED 
> event when async-scheduling enabled in CapacityScheduler
> -
>
> Key: YARN-6714
> URL: https://issues.apache.org/jira/browse/YARN-6714
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6714.001.patch, YARN-6714.002.patch, 
> YARN-6714.003.patch
>
>
> Currently, in the async-scheduling mode of CapacityScheduler, after an AM 
> fails over and all reserved containers are unreserved, there is still a chance 
> that an outdated reserve proposal of the failed app attempt is fetched and 
> committed. This happened to an app in our cluster: when the app stopped, it 
> unreserved all reserved containers and compared their appAttemptId with the 
> current appAttemptId; on a mismatch it threw an IllegalStateException and 
> crashed the RM.
> Error log:
> {noformat}
> 2017-06-08 11:02:24,339 FATAL [ResourceManager Event Processor] 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.IllegalStateException: Trying to unreserve  for application 
> appattempt_1495188831758_0121_02 when currently reserved  for application 
> application_1495188831758_0121 on node host: node1:45454 #containers=2 
> available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.unreserveResource(FiCaSchedulerNode.java:123)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.unreserve(FiCaSchedulerApp.java:845)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1787)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1957)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:586)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:966)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1740)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:152)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:822)
> at java.lang.Thread.run(Thread.java:834)
> {noformat}
> When async-scheduling is enabled, CapacityScheduler#doneApplicationAttempt and 
> CapacityScheduler#tryCommit both need to acquire the write lock before 
> executing, so we can check the app attempt state in the commit process to 
> avoid committing outdated proposals.
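
A simplified standalone sketch of that guard (illustrative only; the real 
scheduler works with ApplicationAttemptId and resource commit requests): both 
paths take the write lock, and the commit path drops proposals whose attempt id 
no longer matches the current attempt.

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ProposalCommitter {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private String currentAttemptId;

  /** Registers the attempt that proposals may be committed against. */
  public void registerAttempt(String attemptId) {
    lock.writeLock().lock();
    try {
      currentAttemptId = attemptId;
    } finally {
      lock.writeLock().unlock();
    }
  }

  /** Mirrors doneApplicationAttempt: clears state under the write lock. */
  public void doneApplicationAttempt(String attemptId) {
    lock.writeLock().lock();
    try {
      // unreserve containers, remove attempt state ...
      currentAttemptId = null;
    } finally {
      lock.writeLock().unlock();
    }
  }

  /** Mirrors tryCommit: rejects proposals from an outdated attempt. */
  public boolean tryCommit(String proposalAttemptId, Runnable commitAction) {
    lock.writeLock().lock();
    try {
      if (proposalAttemptId == null
          || !proposalAttemptId.equals(currentAttemptId)) {
        return false;  // sanity check: drop the outdated reserve proposal
      }
      commitAction.run();
      return true;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}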






[jira] [Created] (YARN-6737) Rename getApplicationAttempt to getCurrentAttempt in AbstractYarnScheduler/CapacityScheduler

2017-06-23 Thread Tao Yang (JIRA)
Tao Yang created YARN-6737:
--

 Summary: Rename getApplicationAttempt to getCurrentAttempt in 
AbstractYarnScheduler/CapacityScheduler
 Key: YARN-6737
 URL: https://issues.apache.org/jira/browse/YARN-6737
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha3, 2.9.0
Reporter: Tao Yang
Priority: Minor


As discussed in YARN-6714 
(https://issues.apache.org/jira/browse/YARN-6714?focusedCommentId=16052158&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16052158)
AbstractYarnScheduler#getApplicationAttempt is inconsistent with its name: it 
ignores the application_attempt_id and always returns the latest attempt. We 
should: 1) rename it to getCurrentAttempt, 2) change the parameter from 
attemptId to applicationId, and 3) scan all usages to see whether a similar 
issue could occur elsewhere.
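
A standalone sketch of the proposed shape of the accessor (simplified stand-in 
types, not the final API):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Simplified stand-in types; only the method shape matters here. */
public class SchedulerSketch {
  static class AppAttempt { }

  static class SchedulerApplication {
    private volatile AppAttempt currentAttempt;
    void setCurrentAttempt(AppAttempt attempt) { currentAttempt = attempt; }
    AppAttempt getCurrentAppAttempt() { return currentAttempt; }
  }

  private final Map<String, SchedulerApplication> applications =
      new ConcurrentHashMap<>();

  /** Keyed by the application id, not an attempt id, as the rename suggests. */
  public AppAttempt getCurrentAttempt(String applicationId) {
    SchedulerApplication app = applications.get(applicationId);
    return app == null ? null : app.getCurrentAppAttempt();
  }
}
{code}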






[jira] [Updated] (YARN-6728) Job will run slow when the performance of defaultFs degrades and the log-aggregation is enable.

2017-06-23 Thread zhengchenyu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated YARN-6728:
--
Attachment: YARN-6728.patch.00_branch-2.7

> Job will run slow when the performance of defaultFs degrades and the 
> log-aggregation is enable. 
> 
>
> Key: YARN-6728
> URL: https://issues.apache.org/jira/browse/YARN-6728
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 2.7.1
> Environment: CentOS 7.1 hadoop-2.7.1
>Reporter: zhengchenyu
> Fix For: 2.9.0, 2.7.4
>
> Attachments: YARN-6728.patch.00_branch-2.7
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, I found that many maps keep the "NEW" state for several 
> minutes. Here is the container log: 
> {code}
> [2017-06-13T18:21:23.068+08:00] [INFO] 
> containermanager.application.ApplicationImpl.transition(ApplicationImpl.java 
> 304) [AsyncDispatcher event handler] : Adding 
> container_1495632926847_2459604_01_11 to application 
> application_1495632926847_2459604
> [2017-06-13T18:23:08.715+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1137) 
> [AsyncDispatcher event handler] : Container 
> container_1495632926847_2459604_01_11 transitioned from NEW to LOCALIZING
> {code}
> Then I searched the log from 18:21:23.068 to 18:23:08.715 and found that some 
> dispatches of the AsyncDispatcher run slowly because they access the defaultFs. 
> Our cluster has grown to 4k nodes, so the pressure on the defaultFs has 
> increased. (Note: log-aggregation is enabled.)
> A container running in the nodemanager will invoke initApp(), which calls 
> verifyAndCreateRemoteLogDir and creates the remote log dir; these operations 
> access the defaultFs. So the container is stuck here, and the application runs 
> slowly.






[jira] [Updated] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-06-23 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5006:
---
Attachment: YARN-5006-branch-2.005.patch

Attaching branch-2 patch

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5006.001.patch, YARN-5006.002.patch, 
> YARN-5006.003.patch, YARN-5006.004.patch, YARN-5006.005.patch, 
> YARN-5006-branch-2.005.patch
>
>
> A client submits a job that adds 1 file to the DistributedCache. When the job 
> is submitted, the ResourceManager stores ApplicationStateData into zk. The 
> ApplicationStateData exceeds the znode size limit, and the RM exits with code 
> 1. The related code in RMStateStore.java:
> {code}
>   private static class StoreAppTransition
>   implements SingleArcTransition<RMStateStore, RMStateStoreEvent> {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806)
> at 
> org.apache.hadoop.yarn.server.re

[jira] [Comment Edited] (YARN-6727) Improve getQueueUserAcls API to query for specific queue and user

2017-06-23 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16058945#comment-16058945
 ] 

Naganarasimha G R edited comment on YARN-6727 at 6/23/17 12:22 PM:
---

Thanks [~bibinchundatt] for raising this issue. I agree that it can be 
simplified on the server side if we modify it to support querying for the given 
user and queue. If either is not provided, we can map it to a default (user -> 
current UGI user, queue -> all queues).


was (Author: naganarasimha):
Thanks [~bibinchundatt] for raising this issue. I agree that it can be 
simplified on the server side.

> Improve getQueueUserAcls API to query for  specific queue and user
> --
>
> Key: YARN-6727
> URL: https://issues.apache.org/jira/browse/YARN-6727
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> Currently {{ApplicationClientProtocol#getQueueUserAcls}} returns data for all 
> the queues available in the scheduler for the user.
> A user may only want to know whether they have rights on a particular queue. 
> For systems with 5K queues, returning the full queue list is not efficient.
> Suggested change: support the additional parameters *userName and queueName* 
> as optional. An admin user should be able to query another user's ACL for a 
> particular queueName.
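
A rough sketch of the defaulting semantics being proposed (a plain illustrative 
class, not the actual protocol records):

{code}
/**
 * Illustrative only: an extended query carrying optional user and queue
 * filters. A null user falls back to the caller's UGI user; a null queue
 * means "all queues", matching today's behaviour.
 */
public class QueueUserAclsQuery {
  private final String userName;   // null => current UGI user
  private final String queueName;  // null => all queues

  public QueueUserAclsQuery(String userName, String queueName) {
    this.userName = userName;
    this.queueName = queueName;
  }

  public String effectiveUser(String currentUgiUser) {
    return userName != null ? userName : currentUgiUser;
  }

  public boolean matchesQueue(String queue) {
    return queueName == null || queueName.equals(queue);
  }
}
{code}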






[jira] [Created] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Zoltan Haindrich (JIRA)
Zoltan Haindrich created YARN-6738:
--

 Summary: LevelDBCacheTimelineStore should reuse ObjectMapper 
instances
 Key: YARN-6738
 URL: https://issues.apache.org/jira/browse/YARN-6738
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: timelineserver
Reporter: Zoltan Haindrich


Using the Tez UI sometimes times out... the cause was that the query was quite 
large, and the leveldb handler seems to recreate the {{ObjectMapper}} for every 
read. This is unfortunate, since the ObjectMapper has to rescan the class 
annotations, which can take some time.

Keeping the ObjectMapper around reduces the ATS call time from 17 seconds to 3 
seconds for me... which was enough to get my tez-ui working again :)
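
A minimal sketch of the reuse pattern (assuming a Jackson ObjectMapper, which is 
thread-safe once configured; the class below is hypothetical, not the 
pluginstorage code):

{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;

public final class JsonCodec {
  // One shared, pre-configured mapper: class annotations are scanned once
  // instead of on every leveldb read.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  private JsonCodec() { }

  public static byte[] write(Object value) throws IOException {
    return MAPPER.writeValueAsBytes(value);
  }

  public static <T> T read(byte[] data, Class<T> type) throws IOException {
    return MAPPER.readValue(data, type);
  }
}
{code}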







[jira] [Updated] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-6738:
---
Attachment: Screen Shot 2017-06-23 at 2.43.06 PM.png

> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Zoltan Haindrich
> Attachments: Screen Shot 2017-06-23 at 2.43.06 PM.png
>
>
> Using the Tez UI sometimes times out... the cause was that the query was 
> quite large, and the leveldb handler seems to recreate the {{ObjectMapper}} 
> for every read. This is unfortunate, since the ObjectMapper has to rescan the 
> class annotations, which can take some time.
> Keeping the ObjectMapper around reduces the ATS call time from 17 seconds to 
> 3 seconds for me... which was enough to get my tez-ui working again :)






[jira] [Updated] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-6738:
---
Attachment: YARN-6738.1.patch

> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Zoltan Haindrich
> Attachments: Screen Shot 2017-06-23 at 2.43.06 PM.png, 
> YARN-6738.1.patch
>
>
> Using the Tez UI sometimes times out... the cause was that the query was 
> quite large, and the leveldb handler seems to recreate the {{ObjectMapper}} 
> for every read. This is unfortunate, since the ObjectMapper has to rescan the 
> class annotations, which can take some time.
> Keeping the ObjectMapper around reduces the ATS call time from 17 seconds to 
> 3 seconds for me... which was enough to get my tez-ui working again :)






[jira] [Updated] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-6738:
---
Attachment: (was: YARN-6738.1.patch)

> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Zoltan Haindrich
> Attachments: Screen Shot 2017-06-23 at 2.43.06 PM.png
>
>
> Using the Tez UI sometimes times out... the cause was that the query was 
> quite large, and the leveldb handler seems to recreate the {{ObjectMapper}} 
> for every read. This is unfortunate, since the ObjectMapper has to rescan the 
> class annotations, which can take some time.
> Keeping the ObjectMapper around reduces the ATS call time from 17 seconds to 
> 3 seconds for me... which was enough to get my tez-ui working again :)






[jira] [Updated] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-6738:
---
Attachment: YARN-6738.1.patch

> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Zoltan Haindrich
> Attachments: Screen Shot 2017-06-23 at 2.43.06 PM.png, 
> YARN-6738.1.patch
>
>
> Using the Tez UI sometimes times out... the cause was that the query was 
> quite large, and the leveldb handler seems to recreate the {{ObjectMapper}} 
> for every read. This is unfortunate, since the ObjectMapper has to rescan the 
> class annotations, which can take some time.
> Keeping the ObjectMapper around reduces the ATS call time from 17 seconds to 
> 3 seconds for me... which was enough to get my tez-ui working again :)






[jira] [Commented] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060956#comment-16060956
 ] 

Hadoop QA commented on YARN-6738:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  8s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage:
 The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874262/YARN-6738.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fc6b045d31fc 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e5db9af |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16228/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16228/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16228/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project:

[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060958#comment-16060958
 ] 

Hadoop QA commented on YARN-5006:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2 failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2 failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 386 unchanged - 2 fixed = 387 total (was 388) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 5 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.

[jira] [Updated] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-06-23 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-2919:

Attachment: YARN-2919.004.patch

{{TestDelegationTokenRenewer.testCancelWithMultipleAppSubmissions}} is not 
related to this patch; the same failure was seen in other jiras too. I have 
corrected the checkstyle issues in the current patch, and the other test case 
failures are not related to the patch. The findbugs issue is also unrelated to 
the patch (I will raise new issues for these shortly).

> Potential race between renew and cancel in DelegationTokenRenwer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 
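
For illustration only (a standalone sketch of the desired behaviour, not the 
DelegationTokenRenewer code): track each in-flight renewal as a Future so a 
cancel can interrupt it.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RenewerSketch {
  private final ExecutorService pool = Executors.newCachedThreadPool();
  // One in-flight renewal per token, so cancel can find and interrupt it.
  private final Map<String, Future<?>> inFlight = new ConcurrentHashMap<>();

  public void renew(String tokenId) {
    inFlight.put(tokenId, pool.submit(() -> doRenew(tokenId)));
  }

  /** Cancels the token and interrupts any renewal that is still running. */
  public void cancel(String tokenId) {
    Future<?> renewal = inFlight.remove(tokenId);
    if (renewal != null) {
      renewal.cancel(true);  // interrupt the in-flight renewal
    }
    doCancel(tokenId);
  }

  private void doRenew(String tokenId) { /* remote renew call */ }

  private void doCancel(String tokenId) { /* remote cancel call */ }
}
{code}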






[jira] [Updated] (YARN-6716) Native services support for specifying component start order

2017-06-23 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-6716:
-
Attachment: YARN-6716-yarn-native-services.004.patch

Okay, I added some code to sort components by their dependency depth, so that 
components that have dependencies will be assigned priorities after other 
components.
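
For illustration, a standalone sketch of ordering components by dependency depth 
(not the Slider/native-services code; it assumes the dependency graph is 
acyclic):

{code}
import java.util.*;

public class ComponentOrdering {

  // Depth 0 for components with no dependencies; dependents get larger depths.
  static int depth(String name, Map<String, List<String>> deps,
                   Map<String, Integer> memo) {
    Integer cached = memo.get(name);
    if (cached != null) {
      return cached;
    }
    int d = 0;
    for (String dep : deps.getOrDefault(name, Collections.emptyList())) {
      d = Math.max(d, depth(dep, deps, memo) + 1);
    }
    memo.put(name, d);
    return d;
  }

  /** Components with dependencies sort after the components they depend on. */
  public static List<String> sortByDependencyDepth(Map<String, List<String>> deps) {
    Map<String, Integer> memo = new HashMap<>();
    List<String> names = new ArrayList<>(deps.keySet());
    names.sort(Comparator.comparingInt(n -> depth(n, deps, memo)));
    return names;
  }

  public static void main(String[] args) {
    Map<String, List<String>> deps = new HashMap<>();
    deps.put("A", Collections.emptyList());
    deps.put("B", Collections.singletonList("A"));  // B starts after A
    System.out.println(sortByDependencyDepth(deps)); // prints [A, B]
  }
}
{code}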

> Native services support for specifying component start order
> 
>
> Key: YARN-6716
> URL: https://issues.apache.org/jira/browse/YARN-6716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-6716-yarn-native-services.001.patch, 
> YARN-6716-yarn-native-services.002.patch, 
> YARN-6716-yarn-native-services.003.patch, 
> YARN-6716-yarn-native-services.004.patch
>
>
> Some native services apps have components that should be started after other 
> components. The readiness_check and dependencies features of the native 
> services API are currently unimplemented, and we could use these to implement 
> a basic start order feature. When component B has a dependency on component 
> A, the AM could delay making a container request for component B until 
> component A's readiness check has passed (for all instances of component A).






[jira] [Commented] (YARN-6728) Job will run slow when the performance of defaultFs degrades and the log-aggregation is enable.

2017-06-23 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061162#comment-16061162
 ] 

Haibo Chen commented on YARN-6728:
--

Seems a duplicate of YARN-6396

> Job will run slow when the performance of defaultFs degrades and the 
> log-aggregation is enable. 
> 
>
> Key: YARN-6728
> URL: https://issues.apache.org/jira/browse/YARN-6728
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 2.7.1
> Environment: CentOS 7.1 hadoop-2.7.1
>Reporter: zhengchenyu
> Fix For: 2.9.0, 2.7.4
>
> Attachments: YARN-6728.patch.00_branch-2.7
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> In our cluster, I found that many maps keep the "NEW" state for several 
> minutes. Here is the container log: 
> {code}
> [2017-06-13T18:21:23.068+08:00] [INFO] 
> containermanager.application.ApplicationImpl.transition(ApplicationImpl.java 
> 304) [AsyncDispatcher event handler] : Adding 
> container_1495632926847_2459604_01_11 to application 
> application_1495632926847_2459604
> [2017-06-13T18:23:08.715+08:00] [INFO] 
> containermanager.container.ContainerImpl.handle(ContainerImpl.java 1137) 
> [AsyncDispatcher event handler] : Container 
> container_1495632926847_2459604_01_11 transitioned from NEW to LOCALIZING
> {code}
> Then I searched the log from 18:21:23.068 to 18:23:08.715 and found that some 
> dispatches of the AsyncDispatcher run slowly because they access the defaultFs. 
> Our cluster has grown to 4k nodes, so the pressure on the defaultFs has 
> increased. (Note: log-aggregation is enabled.)
> A container running in the nodemanager will invoke initApp(), which calls 
> verifyAndCreateRemoteLogDir and creates the remote log dir; these operations 
> access the defaultFs. So the container is stuck here, and the application runs 
> slowly.






[jira] [Commented] (YARN-6127) Add support for work preserving NM restart when AMRMProxy is enabled

2017-06-23 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061173#comment-16061173
 ] 

Botong Huang commented on YARN-6127:


Thanks [~asuresh] and [~subru] for the review! 

> Add support for work preserving NM restart when AMRMProxy is enabled
> 
>
> Key: YARN-6127
> URL: https://issues.apache.org/jira/browse/YARN-6127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6127-branch-2.v1.patch, YARN-6127.v1.patch, 
> YARN-6127.v2.patch, YARN-6127.v3.patch, YARN-6127.v4.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. In a Federated YARN environment, there's additional state in the 
> {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we need 
> to enhance {{AMRMProxy}} to support work-preserving restart.






[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-06-23 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061175#comment-16061175
 ] 

Haibo Chen commented on YARN-6733:
--

I have a few questions on the Tez use case. 
1) Is the sub_application table going to have a similar schema to the ApplicationTable? 
2) Are nodes in a Tez DAG all YARN applications? If so, can't Tez write to 
ApplicationTable with the doAs user?

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> After a discussion with Tez folks, we have been thinking about introducing a 
> table to store sub-application information.
> For example, a Tez session runs for a certain period as User X and runs a few 
> AMs. These AMs accept DAGs from other users, and Tez executes these DAGs as a 
> doAs user. ATSv2 should store this information in a new table, perhaps called 
> the "sub_application" table. 
> This jira tracks the code changes needed for the table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Commented] (YARN-6716) Native services support for specifying component start order

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061181#comment-16061181
 ] 

Hadoop QA commented on YARN-6716:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 8s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: The patch generated 5 
new + 368 unchanged - 188 fixed = 373 total (was 556) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
48s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6716 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874275/YARN-6716-yarn-native-services.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 62d0cfe5fe86 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 34aeea7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16230/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16230/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications |
| Console output | 
https://builds.ap

[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-06-23 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: YARN-6668.000.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch
>
>
> The Container Monitor relies on the proc file system to get container 
> resource utilization, which is not as efficient as reading cgroup accounting. 
> When cgroups are enabled, the NM should read cgroup stats instead. 
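
As a rough illustration (a standalone sketch under cgroup v1, not the NM 
implementation; the mount point and hierarchy path below are assumptions):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupUtilizationReader {
  private static final String CGROUP_ROOT = "/sys/fs/cgroup";

  // Reads a single-value cgroup accounting file and parses it as a long.
  static long readCounter(String subsystem, String hierarchy, String file)
      throws IOException {
    String text = new String(Files.readAllBytes(
        Paths.get(CGROUP_ROOT, subsystem, hierarchy, file)),
        StandardCharsets.UTF_8).trim();
    return Long.parseLong(text);
  }

  public static void main(String[] args) throws IOException {
    String hierarchy = args.length > 0 ? args[0] : "hadoop-yarn/container_e01_1";
    // Cumulative CPU time in nanoseconds and current memory usage in bytes.
    long cpuNanos = readCounter("cpuacct", hierarchy, "cpuacct.usage");
    long memBytes = readCounter("memory", hierarchy, "memory.usage_in_bytes");
    System.out.println("cpu(ns)=" + cpuNanos + " mem(bytes)=" + memBytes);
  }
}
{code}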






[jira] [Updated] (YARN-6673) Add cpu cgroup configurations for opportunistic containers

2017-06-23 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6673:
-
Attachment: YARN-6673.000.patch

> Add cpu cgroup configurations for opportunistic containers
> --
>
> Key: YARN-6673
> URL: https://issues.apache.org/jira/browse/YARN-6673
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6673.000.patch
>
>
> In addition to setting cpu.cfs_period_us on a per-container basis, we could 
> also set cpu.shares to 2 for opportunistic containers so that they run on a 
> best-effort basis.
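
A minimal sketch of that setting (cgroup v1; the cgroup mount point and 
container path are assumptions, not the NM code):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class OpportunisticCpuShares {
  /** Gives the container's cpu cgroup the minimum share, i.e. best effort. */
  public static void setBestEffortShares(String containerCgroup)
      throws IOException {
    // 2 is the lowest value the kernel accepts for cpu.shares.
    Files.write(
        Paths.get("/sys/fs/cgroup/cpu", containerCgroup, "cpu.shares"),
        "2".getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws IOException {
    setBestEffortShares(args.length > 0 ? args[0]
        : "hadoop-yarn/container_e01_1");
  }
}
{code}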






[jira] [Updated] (YARN-6674) Add memory cgroup settings for opportunistic containers

2017-06-23 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6674:
-
Attachment: YARN-6674.000.patch

> Add memory cgroup settings for opportunistic containers
> ---
>
> Key: YARN-6674
> URL: https://issues.apache.org/jira/browse/YARN-6674
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6674.000.patch
>
>







[jira] [Commented] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-06-23 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061222#comment-16061222
 ] 

Ray Chiang commented on YARN-6150:
--

Thanks for digging into the issue [~ajisakaa].  I have a slight preference for 
the first solution, since that would make the test a bit more robust.

> TestContainerManagerSecurity tests for Yarn Server are flakey
> -
>
> Key: YARN-6150
> URL: https://issues.apache.org/jira/browse/YARN-6150
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Sturman
>Assignee: Daniel Sturman
> Attachments: YARN-6150.001.patch, YARN-6150.002.patch, 
> YARN-6150.003.patch, YARN-6150.004.patch, YARN-6150.005.patch, 
> YARN-6150.006.patch
>
>
> Repeated runs of 
> {{org.apache.hadoop.yarn.server.TestContainerManagedSecurity}} against the 
> same codebase can either pass or fail.  Also, the two runs (one in secure 
> mode, one without security) aren't well labeled in JUnit.






[jira] [Commented] (YARN-6674) Add memory cgroup settings for opportunistic containers

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061259#comment-16061259
 ] 

Hadoop QA commented on YARN-6674:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
38s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} YARN-1011 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-1011 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 0 unchanged - 9 fixed = 0 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6674 |
| GITHUB PR | https://github.com/apache/hadoop/pull/240 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 29b3aa02f170 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-1011 / 153498b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16232/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16232/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16232/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add memory cgroup settings for opportunistic containers
> ---
>
> Key: YARN-6674
> URL: https://issues.apache.org/jira/browse/YARN-6674
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions:

[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061265#comment-16061265
 ] 

Hadoop QA commented on YARN-2919:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-2919 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874268/YARN-2919.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 12ce2c648314 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / abdea26 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16229/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16229/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16229/testReport

[jira] [Commented] (YARN-6673) Add cpu cgroup configurations for opportunistic containers

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061284#comment-16061284
 ] 

Hadoop QA commented on YARN-6673:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 4s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} YARN-1011 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-1011 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6673 |
| GITHUB PR | https://github.com/apache/hadoop/pull/232 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 40fbe21f34dd 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-1011 / 153498b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16233/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16233/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16233/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add cpu cgroup configurations for opportunistic containers
> --
>
> Key: YARN-6673
> URL: https://issues.apache.org/jira/browse/YARN-6673
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6673.000.patch
>
>
> In addition to setting cpu.cfs_period_us on a per-container bas

[jira] [Updated] (YARN-3895) Support ACLs in ATSv2

2017-06-23 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-3895:

Summary: Support ACLs in ATSv2  (was: Support ACLs in TimelineReader)

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3895) Support ACLs in ATSv2

2017-06-23 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-3895:

Description: This JIRA is to keep track of authorization support design 
discussions for both readers and collectors. 

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-06-23 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061301#comment-16061301
 ] 

Vrushali C commented on YARN-6733:
--

Hi [~haibo.chen],
Good questions. Let me try to answer them and give more context.

bq. 1) Is the sub_application going to have similar schema as ApplicationTable? 

Not quite. It will have column families and columns similar to the application 
and entity tables, but the row key is what we were discussing. We were thinking of 
two possibilities:
1) cluster ! subapp_user ! subapp ! entity type ! id prefix ! entity id
OR
2) cluster ! entity type ! subapp_user ! subappid ! prefix ! entity id

The first kind of row key helps serve different types of entities for a 
specific user, e.g. "give me all DAGs for this user".
The second kind of row key helps the landing pages of UIs like Tez, which 
want all DAGs across all sub-app users.
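To make the two candidate layouts concrete, here is a rough sketch that just joins the key components with a "!" separator; it is illustrative only and ignores the byte-level encoding, id-prefix handling and converters that the real ATSv2 row key classes use.

{code}
// Illustrative sketch of the two candidate sub_application row key layouts;
// real ATSv2 row keys are byte-encoded, not plain strings.
public class SubApplicationRowKeySketch {

  // Option 1: cluster ! subapp_user ! subapp ! entity type ! id prefix ! entity id
  static String userFirst(String cluster, String subAppUser, String subAppId,
      String entityType, long idPrefix, String entityId) {
    return String.join("!", cluster, subAppUser, subAppId,
        entityType, Long.toString(idPrefix), entityId);
  }

  // Option 2: cluster ! entity type ! subapp_user ! subappid ! prefix ! entity id
  static String typeFirst(String cluster, String entityType, String subAppUser,
      String subAppId, long idPrefix, String entityId) {
    return String.join("!", cluster, entityType, subAppUser,
        subAppId, Long.toString(idPrefix), entityId);
  }

  public static void main(String[] args) {
    // Option 1 keeps all entities of one sub-app user contiguous;
    // option 2 keeps all entities of one type (e.g. all DAGs) contiguous.
    System.out.println(userFirst("cluster1", "userA", "subapp1", "TEZ_DAG_ID", 1L, "dag_1"));
    System.out.println(typeFirst("cluster1", "TEZ_DAG_ID", "userA", "subapp1", 1L, "dag_1"));
  }
}
{code}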

I have been thinking this over, and I think the basic landing page should not be 
answered by this table. Just as flow activity helps answer landing pages for 
flows, there could be another table for landing pages. The reason is that I 
anticipate a large number of queries run by users in these Tez sessions; the 
table will then be pretty big, and there would be no easy way to serve the 
latest-run DAGs from it. The landing page for all sub-app users would come from somewhere else.

bq. 2) Are nodes in a Tez DAG all YARN applications? If so, can't Tez write to 
ApplicationTable with the doAs user?
I think the Tez entities will be DAGs, vertices, etc. We plan to call the "user" 
in such cases a subapp_user, not a doAs user. 
Each Tez DAG would belong to an app master (hence some app id), but that app 
belongs to the Tez session (which is a flow for ATS). As we know, the app 
master/app id is not known to Tez sub-app users, so the sub-app info belongs in 
another table. The application-related information still exists for that Tez 
session and goes to the Application table, with the user being the one who is 
running the Tez session/app masters etc. The doAs information is no longer 
available to YARN once the app master starts running. 

In other news, I am working on the patch and hope to upload one shortly. 



> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> After a discussion with Tez folks, we have been thinking about introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez executes these DAGs 
> with a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061305#comment-16061305
 ] 

Hadoop QA commented on YARN-6668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
16s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
55s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} YARN-1011 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-1011 has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-1011 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} YARN-1011 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
22s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
50s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6668 |
| GITHUB PR | https://github.com/apache/hadoop/pull/241 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 66c5a191eda3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-1011 / 153498b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16231/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16231/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16231/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Buil

[jira] [Commented] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-06-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061307#comment-16061307
 ] 

Sunil G commented on YARN-6678:
---

We might need to check the error shown in 
{{TestCapacitySchedulerAsyncScheduling}} alone; the other test failures are not 
related to this patch. I'll also check this test case locally.

> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> -
>
> Key: YARN-6678
> URL: https://issues.apache.org/jira/browse/YARN-6678
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6678.001.patch, YARN-6678.002.patch, 
> YARN-6678.003.patch
>
>
> Error log:
> {noformat}
> java.lang.IllegalStateException: Trying to reserve container 
> container_e10_1495599791406_7129_01_001453 for application 
> appattempt_1495599791406_7129_01 when currently reserved container 
> container_e10_1495599791406_7123_01_001513 on node host: node0123:45454 
> #containers=40 available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.reserveResource(FiCaSchedulerNode.java:81)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1079)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> To reproduce this problem:
> 1. nm1 re-reserved app-1/container-X1 and generated reserve proposal-1
> 2. nm2 had enough resources for app-1, un-reserved app-1/container-X1 and 
> allocated app-1/container-X2
> 3. nm1 reserved app-2/container-Y
> 4. proposal-1 was accepted but threw an IllegalStateException when applied
> Currently the check for a reserve proposal in FiCaSchedulerApp#accept is as 
> follows:
> {code}
>   // Container reserved first time will be NEW, after the container
>   // accepted & confirmed, it will become RESERVED state
>   if (schedulerContainer.getRmContainer().getState()
>   == RMContainerState.RESERVED) {
> // Set reReservation == true
> reReservation = true;
>   } else {
> // When reserve a resource (state == NEW is for new container,
> // state == RUNNING is for increase container).
> // Just check if the node is not already reserved by someone
> if (schedulerContainer.getSchedulerNode().getReservedContainer()
> != null) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Try to reserve a container, but the node is "
> + "already reserved by another container="
> + schedulerContainer.getSchedulerNode()
> .getReservedContainer().getContainerId());
>   }
>   return false;
> }
>   }
> {code}
> The reserved container on the proposal's node is checked only for a 
> first-reserve container.
> For a re-reservation we should also confirm that the container reserved on this 
> node is the same container being re-reserved.
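A minimal sketch of the extra guard suggested in the last paragraph; container ids are plain strings here and the surrounding accept() logic is elided, so treat it as an illustration rather than the actual fix.

{code}
// Illustrative only: strings stand in for the real ContainerId/RMContainer types.
public class ReReservationCheckSketch {

  /**
   * Suggested guard: a re-reservation proposal is only valid if the container
   * currently reserved on the node is the very container being re-reserved.
   */
  static boolean acceptReReservation(String reservedOnNode, String proposalContainer) {
    return reservedOnNode != null && reservedOnNode.equals(proposalContainer);
  }

  public static void main(String[] args) {
    // The race above: the node now holds container-Y while the stale
    // proposal-1 still refers to container-X1, so it should be rejected.
    System.out.println(acceptReReservation("container_Y", "container_X1"));  // false
    System.out.println(acceptReReservation("container_X1", "container_X1")); // true
  }
}
{code}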



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6737) Rename getApplicationAttempt to getCurrentAttempt in AbstractYarnScheduler/CapacityScheduler

2017-06-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061308#comment-16061308
 ] 

Sunil G commented on YARN-6737:
---

Yes, [~Tao Yang], this will be really helpful. Thanks for raising it.

> Rename getApplicationAttempt to getCurrentAttempt in 
> AbstractYarnScheduler/CapacityScheduler
> 
>
> Key: YARN-6737
> URL: https://issues.apache.org/jira/browse/YARN-6737
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Priority: Minor
>
> As discussed in YARN-6714 
> (https://issues.apache.org/jira/browse/YARN-6714?focusedCommentId=16052158&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16052158)
> AbstractYarnScheduler#getApplicationAttempt is inconsistent with its name: it 
> discards the application_attempt_id and always returns the latest attempt. We 
> should: 1) rename it to getCurrentAttempt, 2) change the parameter from attemptId 
> to applicationId, and 3) scan all usages to see whether any similar issue 
> could happen.
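A minimal sketch of the renamed accessor the description proposes, with simplified types (strings instead of ApplicationId/SchedulerApplicationAttempt); it illustrates the suggestion rather than the signature that will actually land in AbstractYarnScheduler.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: the scheduler keys the current attempt by application id,
// so the accessor name and parameter make that behaviour explicit.
public class CurrentAttemptSketch {
  private final Map<String, String> currentAttemptByAppId = new ConcurrentHashMap<>();

  /** Proposed shape: look up by application id and return the current attempt. */
  String getCurrentAttempt(String applicationId) {
    return currentAttemptByAppId.get(applicationId);
  }

  public static void main(String[] args) {
    CurrentAttemptSketch s = new CurrentAttemptSketch();
    s.currentAttemptByAppId.put("application_1", "appattempt_1_000002");
    // The caller passes an application id, making it clear that the latest
    // attempt is returned regardless of any older attempt id the caller holds.
    System.out.println(s.getCurrentAttempt("application_1"));
  }
}
{code}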



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6727) Improve getQueueUserAcls API to query for specific queue and user

2017-06-23 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6727:
---
Attachment: YARN-6727.WIP.patch

Attaching a WIP patch; test cases for it are yet to be added.

> Improve getQueueUserAcls API to query for  specific queue and user
> --
>
> Key: YARN-6727
> URL: https://issues.apache.org/jira/browse/YARN-6727
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6727.WIP.patch
>
>
> Currently {{ApplicationClientProtocol#getQueueUserAcls}} returns data for all 
> the queues available in the scheduler for a user.
> A user may want to know only whether they have rights on a particular queue. For 
> systems with 5K queues, returning the full queue list is not efficient.
> Suggested change: support the additional parameters *userName and queueName* as 
> optional. An admin user should be able to query another user's ACLs for a particular 
> queueName.
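To make the suggestion concrete, the hypothetical sketch below filters the ACL list by an optional queueName; the class and field names are assumptions for illustration, not the actual ApplicationClientProtocol change.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical, simplified model of the suggested optional queueName filter.
public class QueueUserAclsFilterSketch {
  static class QueueUserAcl {
    final String queueName;
    final List<String> acls;
    QueueUserAcl(String queueName, List<String> acls) {
      this.queueName = queueName;
      this.acls = acls;
    }
  }

  /**
   * Proposed behaviour: when a queueName is supplied, return only that queue's
   * ACL entry instead of the full (possibly 5K+) queue list.
   */
  static List<QueueUserAcl> filterByQueue(List<QueueUserAcl> all, String queueName) {
    if (queueName == null) {
      return all; // current behaviour: return everything
    }
    return all.stream()
        .filter(a -> a.queueName.equals(queueName))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<QueueUserAcl> all = Arrays.asList(
        new QueueUserAcl("root.a", Arrays.asList("SUBMIT_APPLICATIONS")),
        new QueueUserAcl("root.b", Arrays.asList("ADMINISTER_QUEUE")));
    System.out.println(filterByQueue(all, "root.a").size()); // 1 instead of all queues
  }
}
{code}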



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest

2017-06-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061315#comment-16061315
 ] 

Sunil G commented on YARN-6280:
---

Thanks [~cltlfcjin].
[~rohithsharma], could you also please help check the latest patch, as YARN-6634 
introduced some recent changes in RMWebServices?

> Add a query parameter in ResourceManager Cluster Applications REST API to 
> control whether or not returns ResourceRequest
> 
>
> Key: YARN-6280
> URL: https://issues.apache.org/jira/browse/YARN-6280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, restapi
>Affects Versions: 2.7.3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: YARN-6280.001.patch, YARN-6280.002.patch, 
> YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, 
> YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch, 
> YARN-6280.009.patch, YARN-6280.010.patch, YARN-6280.011.patch
>
>
> Beginning with v2.7, the ResourceManager Cluster Applications REST API returns the 
> ResourceRequest list. It is a very large construct in AppInfo.
> As a test, we used the URI below to query only 2 results:
> http://<address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2
> The results are very different:
> ||Hadoop version||Total Characters||Total Words||Total Lines||Size||
> |2.4.1|1192|42|42|1.2 KB|
> |2.7.1|1222179|48740|48735|1.21 MB|
> Most REST API consumers are unaware of this after upgrading, and their 
> old queries may cause more GC in the ResourceManager and slow it down. Even if 
> they do know, they have no way to reduce the impact on the ResourceManager 
> other than lowering their query frequency.
> The patch adds a query parameter "showResourceRequests" to let requesters 
> who don't need this information reduce the overhead. For compatibility of the 
> interface, the default value is true if they don't set the 
> parameter, so the behaviour is the same as now.
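A hedged client-side illustration of how the proposed parameter would be used; the RM address is a placeholder and the parameter name simply follows the description above.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative only: "rm-host:8088" is a placeholder ResourceManager address.
public class AppsWithoutResourceRequests {
  public static void main(String[] args) throws Exception {
    // Same query as in the description, but opting out of ResourceRequest data.
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/apps"
        + "?states=running,accepted&limit=2&showResourceRequests=false");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      in.lines().forEach(System.out::println);
    }
  }
}
{code}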



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060350#comment-16060350
 ] 

Subru Krishnan edited comment on YARN-3659 at 6/23/17 6:05 PM:
---

Thanks [~giovanni.fumarola] for working on this. I looked at your patch, please 
find my comments below:
* A general note on Javadocs - missing code/link annotations throughout.
* Can you add some documentation for ROUTER_CLIENTRM_SUBMIT_RETRY. You will 
also need to exclude it in {{TestYarnConfigurationFields}}.
* Is it possible to move the check of subcluster list being empty to 
{{AbstractRouterPolicy}} to prevent duplication?
* A note on *isRunning* field in {{MockResourceManagerFacade}} will be useful.
* Rename NO_SUBCLUSTER_MESSAGE to NO_ACTIVE_SUBCLUSTER_AVAILABLE and update the 
value accordingly.
* Why is the visibility of {{RouterUtil}} public? I feel you can also rename it 
to RouterServerUtil.
* Resolution of UGI can be moved to {{AbstractClientRequestInterceptor}} as 
again all child classes are duplicating the same code.
* Would it be better to use *LRUCacheHashMap* for _clientRMProxies_?
* Rename *getApplicationClientProtocolFromSubClusterId* --> 
*getClientRMProxyForSubCluster*.
* Distinguish the log message for different exceptions for clarity.
* Use the {{UniformRandomRouterPolicy}} to determine a random subcluster as 
that's the purpose of the policy.
* {code} Client: behavior as YARN. should be Client: identical behavior as 
{@code ClientRMService}. {code}
* Documentation for exception handling in *submitApplication* is unclear, 
please rephrase it.
* Rename _aclTemp_ to _clientRMProxy_.
* {code}"Unable to create a new application to SubCluster " should be " No 
response when attempting to submit the application " + applicationId + "to 
SubCluster " + subClusterId.getId() {code}.
* The log statement for app submission can be moved to post successful 
submission as it's redundant now.
* We should forward any exceptions we get on *forceKillApplication* and 
*getApplicationReport* to client.
* Have a note upfront saying we have only implemented the core functionalities 
and rest will be done in a follow up JIRA (create and link).
* Use {{FederationStateStoreTestUtil}} for helper methods like 
*registerSubCluster/deRegisterSubCluster/registerSubClusters*.
* In {{TestFederationClientInterceptorRetry}}, assert that 
*getApplicationHomeSubCluster* is the good subCluster.



was (Author: subru):
Thanks [~giovanni.fumarola] for working on this. I looked at your patch, please 
find my comments below:
* A general note on Javadocs - missing code/link annotations throughout.
* Can you add some documentation for ROUTER_CLIENTRM_SUBMIT_RETRY. You will 
also need to exclude it in {{TestYarnConfigurationFields}}.
* Is it possible to move the check of subcluster list being empty to 
{{AbstractRouterPolicy}} to prevent duplication?
* A note on *isRunning* field in {{MockResourceManagerFacade}} will be useful.
* Rename NO_SUBCLUSTER_MESSAGE to NO_ACTIVE_SUBCLUSTER_AVAILABLE and update the 
value accordingly.
* Why is the visibility of {{RouterUtil}} public? I feel you can also rename it 
to RouterServerUtil.
* Resolution of UGI can be moved to {{AbstractClientRequestInterceptor}} as 
again all child classes are duplicating the same code.
* Would it be better to use *LRUCacheHashMap* for _clientRMProxies_?
* Rename *getApplicationClientProtocolFromSubClusterId* --> 
*getClientRMProxyForSubCluster*.
* Distinguish the log message for different exceptions for clarity.
* Use the {{UniformRandomRouterPolicy}} to determine a random subcluster as 
that's the purpose of the policy.
* {code} Client: behavior as YARN. should be Client: identical behavior as 
{@code ClientRMService}. {code}
* Documentation for exception handling in *submitApplication* is unclear, 
please rephrase it.
* Rename _aclTemp_ to _clientRMProxy_.
* {code}"Unable to create a new application to SubCluster " should be " No 
response when attempting to submit the application " + applicationId + "to 
SubCluster " + subClusterId.getId() {code}.
* The log statement for app submission can be moved to post successful 
submission as it's redundant now.
* We should forward any exceptions we get on *forceKillApplication* and 
*getApplicationReport* to client.
* Have a note upfront saying we have only implemented the core functionalities 
and rest will be done in a follow up JIRA (create and link).


> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>  

[jira] [Updated] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-06-23 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6467:
---
Attachment: YARN-6467-branch-2.8.009.patch

> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
> Attachments: YARN-6467.001.patch, YARN-6467.001.patch, 
> YARN-6467.002.patch, YARN-6467.003.patch, YARN-6467.004.patch, 
> YARN-6467.005.patch, YARN-6467.006.patch, YARN-6467-branch-2.007.patch, 
> YARN-6467-branch-2.008.patch, YARN-6467-branch-2.8.009.patch
>
>
> As a followup to YARN-6195, we need to update existing metrics to only 
> default Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-06-23 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061346#comment-16061346
 ] 

Manikandan R commented on YARN-6467:


[~Naganarasimha] As discussed, attaching the patch for branch-2.8 as well.

> CSQueueMetrics needs to update the current metrics for default partition only
> -
>
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
> Attachments: YARN-6467.001.patch, YARN-6467.001.patch, 
> YARN-6467.002.patch, YARN-6467.003.patch, YARN-6467.004.patch, 
> YARN-6467.005.patch, YARN-6467.006.patch, YARN-6467-branch-2.007.patch, 
> YARN-6467-branch-2.008.patch, YARN-6467-branch-2.8.009.patch
>
>
> As a followup to YARN-6195, we need to update existing metrics to only 
> default Partition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061357#comment-16061357
 ] 

Hadoop QA commented on YARN-6467:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m  
0s{color} | {color:red} root in branch-2.8 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 
failed with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 failed 
with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} branch-2.8 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 
failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
7s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 failed 
with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2.8 failed 
with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
7s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
6s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
8s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
7s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed 
with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  8s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  6m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d946387 |
| JIRA Issue | YARN-6467 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874297/YARN-6467-branch-2.8.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 02f3ddb63a9f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/p

[jira] [Updated] (YARN-1011) [Umbrella] Schedule containers based on utilization of currently allocated containers

2017-06-23 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-1011:
-
Attachment: yarn-1011-design-v3.pdf

Updated design doc based on recent discussion with [~kkaranasos] and 
[~arun.sur...@gmail.com]

> [Umbrella] Schedule containers based on utilization of currently allocated 
> containers
> -
>
> Key: YARN-1011
> URL: https://issues.apache.org/jira/browse/YARN-1011
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
> Attachments: patch-for-yarn-1011.patch, yarn-1011-design-v0.pdf, 
> yarn-1011-design-v1.pdf, yarn-1011-design-v2.pdf, yarn-1011-design-v3.pdf
>
>
> Currently the RM allocates containers and assumes the allocated resources are 
> utilized.
> The RM can, and should, get to a point where it measures the utilization of allocated 
> containers and, if appropriate, allocates more (speculative?) containers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6739) Crash NM at start time if oversubscription is on but LinuxContainerExcutor or cgroup is off

2017-06-23 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6739:


 Summary: Crash NM at start time if oversubscription is on but 
LinuxContainerExcutor or cgroup is off
 Key: YARN-6739
 URL: https://issues.apache.org/jira/browse/YARN-6739
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0-alpha3
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6739) Crash NM at start time if oversubscription is on but LinuxContainerExcutor or cgroup is off

2017-06-23 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6739:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-1011

> Crash NM at start time if oversubscription is on but LinuxContainerExcutor or 
> cgroup is off
> ---
>
> Key: YARN-6739
> URL: https://issues.apache.org/jira/browse/YARN-6739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5876) TestResourceTrackerService#testGracefulDecommissionWithApp fails intermittently on trunk

2017-06-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061387#comment-16061387
 ] 

Yufei Gu commented on YARN-5876:


Thanks [~rkanter] for working on this. Is it possible to make moving nodes from 
{{nodes}} to {{inactiveNodes}} an atomic operation?

> TestResourceTrackerService#testGracefulDecommissionWithApp fails 
> intermittently on trunk
> 
>
> Key: YARN-5876
> URL: https://issues.apache.org/jira/browse/YARN-5876
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Robert Kanter
> Attachments: YARN-5876.001.patch
>
>
> {noformat}
> java.lang.AssertionError: node shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:750)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testGracefulDecommissionWithApp(TestResourceTrackerService.java:318)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13884/testReport/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated YARN-6738:
---
Attachment: YARN-6738.2.patch

#2) fixes the findbugs issue

I can't really add a test for this, because the patch only enables the reuse of 
ObjectMapper instances.

> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Zoltan Haindrich
> Attachments: Screen Shot 2017-06-23 at 2.43.06 PM.png, 
> YARN-6738.1.patch, YARN-6738.2.patch
>
>
> Using the Tez UI sometimes times out, and the cause was that the query was 
> quite large and the leveldb handler recreates the 
> {{ObjectMapper}} for every read. This is unfortunate, since the ObjectMapper 
> has to rescan the class annotations, which may take some time.
> Keeping the ObjectMapper around reduces the ATS call time from 17 seconds to 3 
> seconds for me, which was enough to get my tez-ui working again :)
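The gist of the change, as a minimal standalone sketch: hold one configured mapper instead of constructing a new one per read. Shown here with the com.fasterxml Jackson 2 package; the timeline code may use a different Jackson variant, so this is illustrative rather than the LevelDBCacheTimelineStore code itself.

{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Illustrative only: a single, reused ObjectMapper avoids re-scanning class
// annotations on every read, which is what made each ATS read expensive.
public class SharedMapperSketch {
  // ObjectMapper is thread-safe once configured, so one instance can be shared.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static <T> T read(byte[] json, Class<T> type) throws IOException {
    return MAPPER.readValue(json, type); // instead of new ObjectMapper() per call
  }

  public static void main(String[] args) throws IOException {
    byte[] json = "{\"id\":\"dag_1\"}".getBytes(StandardCharsets.UTF_8);
    System.out.println(read(json, Map.class));
  }
}
{code}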



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061404#comment-16061404
 ] 

Hadoop QA commented on YARN-5006:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2 failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-2 failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 386 unchanged - 2 fixed = 387 total (was 388) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 5 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.

[jira] [Commented] (YARN-6738) LevelDBCacheTimelineStore should reuse ObjectMapper instances

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061410#comment-16061410
 ] 

Hadoop QA commented on YARN-6738:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874302/YARN-6738.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 27e6dd9e91e1 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / abdea26 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16237/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16237/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LevelDBCacheTimelineStore should reuse ObjectMapper instances
> -
>
> Key: YARN-6738
> URL: https://issues.apache.org/jira/browse/YARN-6738
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Zoltan Haindrich
> Attachments: Screen Shot 2017-06-23 at 2.43.06 PM.png, 
> YARN-6738.1.patch, YARN-6738.2.patch
>
>
> Using TezUI sometimes times out...and the cause of it was that the query was 
> quite large; and the leveldb handler seems li

[jira] [Commented] (YARN-5876) TestResourceTrackerService#testGracefulDecommissionWithApp fails intermittently on trunk

2017-06-23 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061444#comment-16061444
 ] 

Robert Kanter commented on YARN-5876:
-

I don't think so.  We're moving data from one map to another.  Even if you used 
a lock for this, you'd then have to use that lock on all uses of both maps, 
which I don't think we want to do.
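
To make the race concrete, here is a minimal, self-contained sketch (not code 
from the patch; the class, map, and method names are made up) of why moving an 
entry from one map to another leaves a window in which a reader finds it in 
neither map unless every reader and writer shares one lock:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MapMoveRaceSketch {
  private final ConcurrentMap<String, Object> activeNodes = new ConcurrentHashMap<>();
  private final ConcurrentMap<String, Object> inactiveNodes = new ConcurrentHashMap<>();

  // Moving an entry is two separate atomic operations, not one atomic move.
  public void decommission(String nodeId) {
    Object node = activeNodes.remove(nodeId);   // step 1
    // A concurrent reader that checks both maps at this point sees the node
    // in neither map, which is what an immediate assertion can trip over.
    if (node != null) {
      inactiveNodes.put(nodeId, node);          // step 2
    }
  }

  // Guarding this with a lock would only help if the same lock also guarded
  // every other access to both maps, hence waiting/retrying in the test instead.
  public Object lookup(String nodeId) {
    Object node = activeNodes.get(nodeId);
    return node != null ? node : inactiveNodes.get(nodeId);
  }
}
{code}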

> TestResourceTrackerService#testGracefulDecommissionWithApp fails 
> intermittently on trunk
> 
>
> Key: YARN-5876
> URL: https://issues.apache.org/jira/browse/YARN-5876
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Robert Kanter
> Attachments: YARN-5876.001.patch
>
>
> {noformat}
> java.lang.AssertionError: node shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:750)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testGracefulDecommissionWithApp(TestResourceTrackerService.java:318)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13884/testReport/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5876) TestResourceTrackerService#testGracefulDecommissionWithApp fails intermittently on trunk

2017-06-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061456#comment-16061456
 ] 

Yufei Gu commented on YARN-5876:


Makes sense to me. +1. 

> TestResourceTrackerService#testGracefulDecommissionWithApp fails 
> intermittently on trunk
> 
>
> Key: YARN-5876
> URL: https://issues.apache.org/jira/browse/YARN-5876
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Robert Kanter
> Attachments: YARN-5876.001.patch
>
>
> {noformat}
> java.lang.AssertionError: node shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:750)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testGracefulDecommissionWithApp(TestResourceTrackerService.java:318)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13884/testReport/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5876) TestResourceTrackerService#testGracefulDecommissionWithApp fails intermittently on trunk

2017-06-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061473#comment-16061473
 ] 

Yufei Gu commented on YARN-5876:


Committed to trunk and branch-2. Thanks [~rkanter] for the patch. 

> TestResourceTrackerService#testGracefulDecommissionWithApp fails 
> intermittently on trunk
> 
>
> Key: YARN-5876
> URL: https://issues.apache.org/jira/browse/YARN-5876
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Robert Kanter
> Attachments: YARN-5876.001.patch
>
>
> {noformat}
> java.lang.AssertionError: node shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:750)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testGracefulDecommissionWithApp(TestResourceTrackerService.java:318)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13884/testReport/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5876) TestResourceTrackerService#testGracefulDecommissionWithApp fails intermittently on trunk

2017-06-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061484#comment-16061484
 ] 

Hudson commented on YARN-5876:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11917 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11917/])
YARN-5876. TestResourceTrackerService#testGracefulDecommissionWithApp (yufei: 
rev 0b77262890d76b0a3a35fa64befc8a406bc70b27)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java


> TestResourceTrackerService#testGracefulDecommissionWithApp fails 
> intermittently on trunk
> 
>
> Key: YARN-5876
> URL: https://issues.apache.org/jira/browse/YARN-5876
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Robert Kanter
> Fix For: 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-5876.001.patch
>
>
> {noformat}
> java.lang.AssertionError: node shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:750)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testGracefulDecommissionWithApp(TestResourceTrackerService.java:318)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13884/testReport/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest

2017-06-23 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061497#comment-16061497
 ] 

Rohith Sharma K S commented on YARN-6280:
-

+1 LGTM

> Add a query parameter in ResourceManager Cluster Applications REST API to 
> control whether or not returns ResourceRequest
> 
>
> Key: YARN-6280
> URL: https://issues.apache.org/jira/browse/YARN-6280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, restapi
>Affects Versions: 2.7.3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: YARN-6280.001.patch, YARN-6280.002.patch, 
> YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, 
> YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch, 
> YARN-6280.009.patch, YARN-6280.010.patch, YARN-6280.011.patch
>
>
> Beginning with v2.7, the ResourceManager Cluster Applications REST API returns 
> the ResourceRequest list. It is a very large structure in AppInfo.
> As a test, we used the URI below to query only 2 results:
> http://<RM address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2
> The results are very different:
> ||Hadoop version||Total Characters||Total Words||Total Lines||Size||
> |2.4.1|1192|42|42|1.2 KB|
> |2.7.1|1222179|48740|48735|1.21 MB|
> Most REST API callers don't know about this after upgrading, and their old 
> queries may cause the ResourceManager to spend more time in GC and respond more 
> slowly. Even if they do know, they have no way to reduce the impact on the 
> ResourceManager other than lowering their query frequency.
> The patch adds a query parameter "showResourceRequests" so that requesters who 
> don't need this information can reduce the overhead. For interface 
> compatibility, the default value is true when the parameter is not set, so the 
> behaviour stays the same as now.
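
As a usage sketch (the parameter name is taken from this description only and 
may differ in the committed patch; the RM host and port below are placeholders), 
a caller that does not need the ResourceRequest data could query the API like 
this:

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class ClusterAppsQuerySketch {
  public static void main(String[] args) throws IOException {
    // Ask the RM for running/accepted apps without the large ResourceRequest list.
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/apps"
        + "?states=running,accepted&limit=2&showResourceRequests=false");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}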



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6740) Federation Router (hiding multiple RMs for ApplicationClientProtocol) phase 2

2017-06-23 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-6740:
--

 Summary: Federation Router (hiding multiple RMs for 
ApplicationClientProtocol) phase 2
 Key: YARN-6740
 URL: https://issues.apache.org/jira/browse/YARN-6740
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Giovanni Matteo Fumarola


This JIRA tracks the implementation of the layer for routing 
ApplicationClientProtocol requests to the appropriate RM(s) in a federated YARN 
cluster.
Under YARN-3659 we only implemented getNewApplication, submitApplication, 
forceKillApplication and getApplicationReport, enough to run applications 
end-to-end.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-23 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5067:
---
Attachment: YARN-5067.003.patch

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently the resources of application masters in SLS are hardcoded to mem=1024, 
> vcores=1. We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061656#comment-16061656
 ] 

Yufei Gu commented on YARN-5067:


Patch v3 uploaded. It is rebased and makes sure that SLSConfiguration doesn't 
extend Configuration. I've tested the patch for both CS and FS with a simple 
SLS JSON input and the configuration in sls-runner.xml. It worked well.

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently the resources of application masters in SLS are hardcoded to mem=1024, 
> vcores=1. We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6671) Add container type awareness in ResourceHandlers.

2017-06-23 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-6671.
--
Resolution: Not A Problem

> Add container type awareness in ResourceHandlers.
> -
>
> Key: YARN-6671
> URL: https://issues.apache.org/jira/browse/YARN-6671
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>
> When using LCE, different cgroup settings for opportunistic and guaranteed 
> containers can be used to ensure isolation between them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6671) Add container type awareness in ResourceHandlers.

2017-06-23 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061671#comment-16061671
 ] 

Haibo Chen commented on YARN-6671:
--

The Container object is passed to ResourceHandlers, and the container type can 
be determined by checking the container's executionType. In other words, 
container type awareness is already available in ResourceHandler. Closing this 
as already fixed.
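
To illustrate what that enables, here is a minimal sketch (not code from YARN; 
how a handler obtains the ExecutionType from the Container is omitted, and the 
share values are only examples) of branching cgroup settings on the execution 
type:

{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;

public class CpuSharesSketch {
  // Pick different cgroup cpu.shares for guaranteed vs. opportunistic containers.
  public int cpuSharesFor(ExecutionType executionType) {
    if (executionType == ExecutionType.OPPORTUNISTIC) {
      return 2;     // minimal weight so opportunistic work yields the CPU
    }
    return 1024;    // default cgroup v1 cpu.shares for guaranteed containers
  }
}
{code}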

> Add container type awareness in ResourceHandlers.
> -
>
> Key: YARN-6671
> URL: https://issues.apache.org/jira/browse/YARN-6671
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
>
> When using LCE, different cgroup settings for opportunistic and guaranteed 
> containers can be used to ensure isolation between them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6668) Use cgroup to get container resource utilization

2017-06-23 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6668:
-
Attachment: YARN-6668.001.patch

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch
>
>
> The Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.
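
For context, a standalone sketch (not from the attached patches) of reading 
cgroup v1 accounting files directly; the mount points are the common defaults 
and the container cgroup path is an assumption, not how the NM resolves it:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupUsageSketch {
  // Cumulative CPU time in nanoseconds for the given container cgroup.
  static long cpuUsageNanos(String containerCgroup) throws IOException {
    return readLong("/sys/fs/cgroup/cpuacct/" + containerCgroup + "/cpuacct.usage");
  }

  // Current physical memory usage in bytes for the given container cgroup.
  static long memoryUsageBytes(String containerCgroup) throws IOException {
    return readLong("/sys/fs/cgroup/memory/" + containerCgroup + "/memory.usage_in_bytes");
  }

  private static long readLong(String path) throws IOException {
    String value = new String(Files.readAllBytes(Paths.get(path)),
        StandardCharsets.UTF_8).trim();
    return Long.parseLong(value);
  }
}
{code}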



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061687#comment-16061687
 ] 

Hadoop QA commented on YARN-5067:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 1 
new + 56 unchanged - 0 fixed = 57 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874329/YARN-5067.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c0914cbef9e1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16238/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16238/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16238/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently the resources of application masters in SLS are hardcoded to mem=1024, 
> vcores=1. We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-

[jira] [Updated] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-3659:
---
Attachment: YARN-3659-YARN-2915.2.patch

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.1.patch, 
> YARN-3659-YARN-2915.2.patch, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061692#comment-16061692
 ] 

Giovanni Matteo Fumarola commented on YARN-3659:


Thanks [~subru] for the feedback. Attached V2, which fixes them. Answers in 
order.
* Doing.
* Done.
* Right now *AbstractRouterPolicy* does not have the list of active subclusters. 
The validation is done in *FederationPolicyUtils*. Added a respective test.
* Done.
* Done.
* Done.
* Done.
* I use a ConcurrentHashMap for multithreading reasons.
* Done.
* Done.
* After an offline discussion, we decided to ignore this comment. The current 
policy does not allow the logic itself to be used without initialization.
* Done.
* Done.
* Done.
* Done.
* Done.
* Done.

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.1.patch, 
> YARN-3659-YARN-2915.2.patch, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061692#comment-16061692
 ] 

Giovanni Matteo Fumarola edited comment on YARN-3659 at 6/24/17 1:14 AM:
-

Thanks [~subru] for the feedback. Attached V2, which fixes them. Answers in 
order.
* Done.
* Done.
* Right now *AbstractRouterPolicy* does not have the list of active subclusters. 
The validation is done in *FederationPolicyUtils*. Added a respective test.
* Done.
* Done.
* Done.
* Done.
* I use a ConcurrentHashMap for multithreading reasons.
* Done.
* Done.
* After an offline discussion, we decided to ignore this comment. The current 
policy does not allow the logic itself to be used without initialization.
* Done.
* Done.
* Done.
* Done.
* Done.
* Done.


was (Author: giovanni.fumarola):
Thanks [~subru] for the feedback. Attached V2, which fixes them. Answers in 
order.
* Doing.
* Done.
* Right now *AbstractRouterPolicy* does not have the list of active subclusters. 
The validation is done in *FederationPolicyUtils*. Added a respective test.
* Done.
* Done.
* Done.
* Done.
* I use a ConcurrentHashMap for multithreading reasons.
* Done.
* Done.
* After an offline discussion, we decided to ignore this comment. The current 
policy does not allow the logic itself to be used without initialization.
* Done.
* Done.
* Done.
* Done.
* Done.
* Done.

> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.1.patch, 
> YARN-3659-YARN-2915.2.patch, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061720#comment-16061720
 ] 

Hadoop QA commented on YARN-6668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
37s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} YARN-1011 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-1011 has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-1011 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} YARN-1011 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
35s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6668 |
| GITHUB PR | https://github.com/apache/hadoop/pull/241 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9a4b495b0bb7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-1011 / 153498b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16239/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16239/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16239/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/

[jira] [Commented] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061722#comment-16061722
 ] 

Subru Krishnan commented on YARN-3659:
--

Thanks [~giovanni.fumarola] for addressing my feedback. The latest patch LGTM; I 
have a couple of nits:
* *setupUser* can be done before initializing the next interceptor in 
{{AbstractClientRequestInterceptor}}.
* A few comments on the logging in 
{{FederationClientInterceptor::submitApplication}}:
** The log statement for a submitted app should come after the null check for 
_response_.
** The log statements for an app submission exception and a null _response_ can 
be at warn level, since we don't exit but continue with another subcluster.


> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.1.patch, 
> YARN-3659-YARN-2915.2.patch, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061727#comment-16061727
 ] 

Hadoop QA commented on YARN-3659:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
34s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
22s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 10 new + 225 unchanged - 1 fixed = 235 total (was 226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router
 |
|  |  Read of unwritten field user in 
org.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor.init(String)
  At DefaultClientRequestInterceptor.java:in 
org.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor.init(String)
  At DefaultClientRequestInterceptor.java:[line 109] |
|  |  Field only ever set to null:null: 
org.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptor.user
  In DefaultClientRequestInterceptor.java |
|  |  Dead store to subClustersActive in 
org.apache.hadoop.yarn.serv

[jira] [Comment Edited] (YARN-3659) Federation Router (hiding multiple RMs for ApplicationClientProtocol)

2017-06-23 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061722#comment-16061722
 ] 

Subru Krishnan edited comment on YARN-3659 at 6/24/17 2:02 AM:
---

Thanks [~giovanni.fumarola] for addressing my feedback. The latest patch LGTM 
(pending Yetus); I have a couple of nits:
* *setupUser* can be done before initializing the next interceptor in 
{{AbstractClientRequestInterceptor}}.
* A few comments on {{FederationClientInterceptor::submitApplication}}:
** Avoid the continue statements by switching the checks, i.e. return _response_ 
if it is not null; otherwise log a warning and let the loop move on to the next 
subcluster (see the sketch below).
** We should throw an exception instead of returning null, to be consistent with 
*ClientRMService*.
** Fix the log statements based on the above changes.
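
A self-contained sketch of the suggested control flow (placeholders only, 
abstracting away the YARN request/response types; this is not the actual patch):

{code}
import java.util.List;

public class SubmitLoopSketch {
  /** Placeholder for one subcluster's RM client; returns null when submission fails. */
  interface SubClusterClient {
    String submit(String application);
  }

  public String submitToFirstWorkingSubCluster(List<SubClusterClient> subClusters,
      String application) throws Exception {
    for (SubClusterClient rm : subClusters) {
      String response = rm.submit(application);
      if (response != null) {
        return response;                  // success: return immediately, no 'continue'
      }
      System.err.println("WARN: submission failed, trying the next subcluster");
    }
    // Consistent with ClientRMService: fail loudly instead of returning null.
    throw new Exception("Application submission failed in all subclusters");
  }
}
{code}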



was (Author: subru):
Thanks [~giovanni.fumarola] for addressing my feedback. The latest patch LGTM; I 
have a couple of nits:
* *setupUser* can be done before initializing the next interceptor in 
{{AbstractClientRequestInterceptor}}.
* A few comments on the logging in 
{{FederationClientInterceptor::submitApplication}}:
** The log statement for a submitted app should come after the null check for 
_response_.
** The log statements for an app submission exception and a null _response_ can 
be at warn level, since we don't exit but continue with another subcluster.


> Federation Router (hiding multiple RMs for ApplicationClientProtocol)
> -
>
> Key: YARN-3659
> URL: https://issues.apache.org/jira/browse/YARN-3659
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-3659.pdf, YARN-3659-YARN-2915.1.patch, 
> YARN-3659-YARN-2915.2.patch, YARN-3659-YARN-2915.draft.patch
>
>
> This JIRA tracks the design/implementation of the layer for routing 
> ApplicationClientProtocol requests to the appropriate
> RM(s) in a federated YARN cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-06-23 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061757#comment-16061757
 ] 

Bibin A Chundatt commented on YARN-5006:


The branch-2 failure is due to HADOOP-14146. I have triggered the build again.

> ResourceManager quit due to ApplicationStateData exceed the limit  size of 
> znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.2
>Reporter: dongtingting
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5006.001.patch, YARN-5006.002.patch, 
> YARN-5006.003.patch, YARN-5006.004.patch, YARN-5006.005.patch, 
> YARN-5006-branch-2.005.patch
>
>
> A client submits a job, and this job adds 1 file to the DistributedCache. When 
> the job is submitted, the ResourceManager stores ApplicationStateData into ZK. 
> The ApplicationStateData exceeds the znode size limit, and the RM exits with 
> code 1.
> The related code in RMStateStore.java:
> {code}
>   private static class StoreAppTransition
>       implements SingleArcTransition<RMStateStore, RMStateStoreEvent> {
> @Override
> public void transition(RMStateStore store, RMStateStoreEvent event) {
>   if (!(event instanceof RMStateStoreAppEvent)) {
> // should never happen
> LOG.error("Illegal event type: " + event.getClass());
> return;
>   }
>   ApplicationState appState = ((RMStateStoreAppEvent) 
> event).getAppState();
>   ApplicationId appId = appState.getAppId();
>   ApplicationStateData appStateData = ApplicationStateData
>   .newInstance(appState);
>   LOG.info("Storing info for app: " + appId);
>   try {  
> store.storeApplicationStateInternal(appId, appStateData);  //store 
> the appStateData
> store.notifyApplication(new RMAppEvent(appId,
>RMAppEventType.APP_NEW_SAVED));
>   } catch (Exception e) {
> LOG.error("Error storing app: " + appId, e);
> store.notifyStoreOperationFailed(e);   //handle fail event, system 
> exit 
>   }
> };
>   }
> {code}
> The Exception log:
> {code}
>  ...
> 2016-04-20 11:26:35,732 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore 
> AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore 
> AsyncDispatcher event handler: Error storing app: 
> application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode 
> = ConnectionLoss
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:123)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.jav

[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-06-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061777#comment-16061777
 ] 

ASF GitHub Bot commented on YARN-6668:
--

Github user xshaun commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/227#discussion_r123869480
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
 ---
@@ -60,6 +60,7 @@
 import org.apache.hadoop.yarn.server.api.records.NodeHealthStatus;
 import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManager;
 import 
org.apache.hadoop.yarn.server.nodemanager.collectormanager.NMCollectorService;
+import org.apache.hadoop.yarn.server.api.records.OverAllocationInfo;
--- End diff --

Would it be better to insert this line between the 60th and 61st lines, like this:
```
import org.apache.hadoop.yarn.server.api.records.NodeHealthStatus;
import org.apache.hadoop.yarn.server.api.records.OverAllocationInfo;
import 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManager;
```


> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch
>
>
> The Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6727) Improve getQueueUserAcls API to query for specific queue and user

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061789#comment-16061789
 ] 

Hadoop QA commented on YARN-6727:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 39 unchanged - 1 fixed = 39 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6727 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874295/YARN-6727.WIP.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 86f89542d43f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0111711 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16242/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://buil

[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk

2017-06-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061805#comment-16061805
 ] 

Hadoop QA commented on YARN-5006:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 56s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 32s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  6s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 18s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 21s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 20s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 386 unchanged - 2 fixed = 387 total (was 388) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 5 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 13s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 19s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 27s{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 24s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 57s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourc