[jira] [Updated] (YARN-6771) Use classloader inside configuration class to make new classes

2017-07-06 Thread Jongyoul Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jongyoul Lee updated YARN-6771:
---
Description: While running {{RpcClientFactoryPBImpl.getClient}}, 
{{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But when a custom 
classloader is in use, we have to use {{conf.getClassByName}}, because the 
custom classloader is already stored in the {Configuration} class.  (was: While 
running {{RpcClientFactoryPBImpl.getClient}}, {{RpcClientFactoryPBImpl}} uses 
{{local.getClassByName}}. But when a custom classloader is in use, we have to 
use {{conf.getClassByName}}, because the custom classloader is already stored 
in the {Configuration} class.)

> Use classloader inside configuration class to make new classes 
> ---
>
> Key: YARN-6771
> URL: https://issues.apache.org/jira/browse/YARN-6771
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jongyoul Lee
>
> While running {{RpcClientFactoryPBImpl.getClient}}, 
> {{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But when a 
> custom classloader is in use, we have to use {{conf.getClassByName}}, because 
> the custom classloader is already stored in the {Configuration} class.
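
A minimal sketch (not the actual RpcClientFactoryPBImpl code) of why the lookup 
should go through the caller's {{Configuration}}; the class and method names 
below are invented for illustration:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ClassLoaderLookupSketch {
  public static Class<?> load(Configuration conf, String name)
      throws ClassNotFoundException {
    // Buggy pattern: a fresh Configuration knows nothing about any custom
    // classloader the caller installed, so plugin-only classes are not found.
    Configuration localConf = new Configuration();
    // return localConf.getClassByName(name); // ClassNotFoundException

    // Fixed pattern: the caller's conf carries the classloader set via
    // conf.setClassLoader(customLoader), so the lookup resolves there.
    return conf.getClassByName(name);
  }
}
{code}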



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6771) Use classloader inside configuration class to make new classes

2017-07-06 Thread Jongyoul Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jongyoul Lee updated YARN-6771:
---
Description: While running {{RpcClientFactoryPBImpl.getClient}}, 
{{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But when a custom 
classloader is in use, we have to use {{conf.getClassByName}}, because the 
custom classloader is already stored in the {{Configuration}} class.  (was: 
While running {{RpcClientFactoryPBImpl.getClient}}, {{RpcClientFactoryPBImpl}} 
uses {{localConf.getClassByName}}. But when a custom classloader is in use, we 
have to use {{conf.getClassByName}}, because the custom classloader is already 
stored in the {Configuration} class.)

> Use classloader inside configuration class to make new classes 
> ---
>
> Key: YARN-6771
> URL: https://issues.apache.org/jira/browse/YARN-6771
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Jongyoul Lee
>
> While running {{RpcClientFactoryPBImpl.getClient}}, 
> {{RpcClientFactoryPBImpl}} uses {{localConf.getClassByName}}. But when a 
> custom classloader is in use, we have to use {{conf.getClassByName}}, because 
> the custom classloader is already stored in the {{Configuration}} class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6771) Use classloader inside configuration class to make new classes

2017-07-06 Thread Jongyoul Lee (JIRA)
Jongyoul Lee created YARN-6771:
--

 Summary: Use classloader inside configuration class to make new 
classes 
 Key: YARN-6771
 URL: https://issues.apache.org/jira/browse/YARN-6771
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.8.1
Reporter: Jongyoul Lee


While running {{RpcClientFactoryPBImpl.getClient}}, {{RpcClientFactoryPBImpl}} 
uses {{local.getClassByName}}. But when a custom classloader is in use, we have 
to use {{conf.getClassByName}}, because the custom classloader is already 
stored in the {Configuration} class.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6428) Queue AM limit is not honored in CS always

2017-07-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077617#comment-16077617
 ] 

Sunil G commented on YARN-6428:
---

Thanks [~naganarasimha...@apache.org] and [~bibinchundatt]

I think the latest suggestion from Bibin seems easier and more straightforward. 
+1 for this approach. I am not seeing much of a problem; a Jenkins run is needed.

> Queue AM limit is not honored  in CS always
> ---
>
> Key: YARN-6428
> URL: https://issues.apache.org/jira/browse/YARN-6428
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6428.0001.patch, YARN-6428.0002.patch
>
>
> Steps to reproduce
> 
> Set up a cluster with 40 GB and 40 vcores, with 4 NodeManagers of 10 GB each.
> Configure the default queue with 100% capacity and a max AM limit of 10%.
> Set the scheduler minimum memory and vcores to 512 and 1.
> *Expected* 
> AM limit of 4096 MB and 4 vcores
> *Actual*
> AM limit of 4096+512 MB and 4+1 vcores
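
A rough sketch of the arithmetic behind the expected and actual values above (a 
minimal illustration; the variable names are invented, not taken from the 
CapacityScheduler source):

{code:java}
public class AmLimitSketch {
  public static void main(String[] args) {
    long clusterMemoryMb = 40 * 1024; // 40 GB cluster (4 NMs x 10 GB)
    double maxAmPercent = 0.10;       // queue max AM resource percent
    long minAllocationMb = 512;       // scheduler minimum allocation

    // Expected AM limit: 10% of the cluster memory.
    long expectedMb = (long) (clusterMemoryMb * maxAmPercent); // 4096

    // Observed limit is one minimum allocation higher, which this
    // JIRA reports as a bug: 4096 + 512 MB (and 4 + 1 vcores).
    long observedMb = expectedMb + minAllocationMb; // 4608
    System.out.println(expectedMb + " MB expected vs " + observedMb + " MB observed");
  }
}
{code}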



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6770) [Docs] A small mistake in the example of TimelineClient

2017-07-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077587#comment-16077587
 ] 

Naganarasimha G R commented on YARN-6770:
-

Simple fix, will merge shortly!

> [Docs] A small mistake in the example of TimelineClient
> ---
>
> Key: YARN-6770
> URL: https://issues.apache.org/jira/browse/YARN-6770
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6770.patch
>
>
> I'm trying to use the timeline client, so I copied the 
> [example|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Publishing_of_application_specific_data]
>  into my application.
> But there is a small mistake in it:
> {quote}
> myDomain.*_setID_*("MyDomain");
> .
> myEntity.*_setEntityID_*("MyApp1")
> {quote}
> The correct version should be 
> {quote}
> myDomain.*_setId_*("MyDomain");
> .
> myEntity._*setEntityId*_("MyApp1");
> {quote}
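
A minimal sketch using the corrected setters (the surrounding class and the way 
the objects would be published are simplified; only {{setId}}/{{setEntityId}} 
come from the report above):

{code:java}
import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;

public class TimelineIdSketch {
  public static void main(String[] args) {
    TimelineDomain myDomain = new TimelineDomain();
    myDomain.setId("MyDomain");      // setId, not setID

    TimelineEntity myEntity = new TimelineEntity();
    myEntity.setEntityId("MyApp1");  // setEntityId, not setEntityID
  }
}
{code}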



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6720) Support updating FPGA related constraint node label after FPGA device re-configuration

2017-07-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077585#comment-16077585
 ] 

Naganarasimha G R commented on YARN-6720:
-

Thanks [~tangzhankun]
bq. a constraint label update after a container finishes, to indicate that the 
docker image has been localized, is helpful for improving scheduling.
This was one of the improvements I had in mind: automatically add labels to the 
nodes for the localized container images. We will develop it once YARN-3409 is 
in. This is similar to the Docker Swarm functionality.

bq. For instance, the GPU handler for every vendor might need to set a 
constraint "GPU_DOCKER_IMAGE_LOCALIZED:True/False" on a node, and the FPGA 
handler for every vendor might need to set "FPGA_IP_NAME:ipname". If so, is it 
a burden for end users to search for and use these scheduling preferences?
IIUC you are setting labels for "GPU_DOCKER_IMAGE_LOCALIZED:True/False" and/or 
"FPGA_IP_NAME:ipname", so there are not many constraints (newly named 
attributes), right? Can you elaborate more so I can understand the use case?

> Support updating FPGA related constraint node label after FPGA device 
> re-configuration
> --
>
> Key: YARN-6720
> URL: https://issues.apache.org/jira/browse/YARN-6720
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
> Attachments: 
> Storing-and-Updating-extra-FPGA-resource-attributes-in-hdfs_v1.pdf
>
>
> In order to provide globally optimal scheduling for mutable FPGA resources, 
> it seems easier and more direct to utilize constraint node labels (YARN-3409) 
> instead of extending the global scheduler (YARN-3926) to match both resource 
> count and attributes.
> The rough idea is that the AM sets a constraint node label expression to 
> request containers on the nodes whose FPGA devices have the matching IP, and 
> the NM resource handler then updates the node constraint label if there is an 
> FPGA device re-configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6770) [Docs] A small mistake in the example of TimelineClient

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077550#comment-16077550
 ] 

Hadoop QA commented on YARN-6770:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 6s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6770 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876020/YARN-6770.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f36952d7dc3c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7576a68 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16321/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Docs] A small mistake in the example of TimelineClient
> ---
>
> Key: YARN-6770
> URL: https://issues.apache.org/jira/browse/YARN-6770
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6770.patch
>
>
> I'm trying to use the timeline client, so I copied the 
> [example|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Publishing_of_application_specific_data]
>  into my application.
> But there is a small mistake in it:
> {quote}
> myDomain.*_setID_*("MyDomain");
> .
> myEntity.*_setEntityID_*("MyApp1")
> {quote}
> The correct version should be 
> {quote}
> myDomain.*_setId_*("MyDomain");
> .
> myEntity._*setEntityId*_("MyApp1");
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6770) [Docs] A small mistake in the example of TimelineClient

2017-07-06 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6770:

Labels: newbie  (was: )

> [Docs] A small mistake in the example of TimelineClient
> ---
>
> Key: YARN-6770
> URL: https://issues.apache.org/jira/browse/YARN-6770
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-6770.patch
>
>
> I'm trying to use the timeline client, so I copied the 
> [example|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Publishing_of_application_specific_data]
>  into my application.
> But there is a small mistake in it:
> {quote}
> myDomain.*_setID_*("MyDomain");
> .
> myEntity.*_setEntityID_*("MyApp1")
> {quote}
> The correct version should be 
> {quote}
> myDomain.*_setId_*("MyDomain");
> .
> myEntity._*setEntityId*_("MyApp1");
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6770) [Docs] A small mistake in the example of TimelineClient

2017-07-06 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-6770:
---

Assignee: Jinjiang Ling

> [Docs] A small mistake in the example of TimelineClient
> ---
>
> Key: YARN-6770
> URL: https://issues.apache.org/jira/browse/YARN-6770
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Jinjiang Ling
>Assignee: Jinjiang Ling
>Priority: Trivial
> Attachments: YARN-6770.patch
>
>
> I'm trying to use the timeline client, so I copied the 
> [example|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Publishing_of_application_specific_data]
>  into my application.
> But there is a small mistake in it:
> {quote}
> myDomain.*_setID_*("MyDomain");
> .
> myEntity.*_setEntityID_*("MyApp1")
> {quote}
> The correct version should be 
> {quote}
> myDomain.*_setId_*("MyDomain");
> .
> myEntity._*setEntityId*_("MyApp1");
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-06 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6769:
-
Comment: was deleted

(was: 
{code:java}
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
index f8cdb45929..e930b80e45 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
@@ -79,6 +79,19 @@ public String getName() {
 
 @Override
 public int compare(Schedulable s1, Schedulable s2) {
+  Resource demand1 = s1.getDemand();
+  Resource demand2 = s2.getDemand();
+  // Put the schedulable which does not require resource to
+  // the end. So the other schedulable can get resource as soon as
+  // possible though it use resource greater then it minShare or demand.
+  if (demand1.equals(Resources.none()) &&
+  !demand2.equals(Resources.none())) {
+return 1;
+  } else if (demand2.equals(Resources.none()) &&
+  !demand1.equals(Resources.none())) {
+return -1;
+  }
+  
   double minShareRatio1, minShareRatio2;
   double useToWeightRatio1, useToWeightRatio2;
   double weight1, weight2;
@@ -86,9 +99,9 @@ public int compare(Schedulable s1, Schedulable s2) {
   Resource resourceUsage1 = s1.getResourceUsage();
   Resource resourceUsage2 = s2.getResourceUsage();
   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
-  s1.getMinShare(), s1.getDemand());
+  s1.getMinShare(), demand1);
   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
-  s2.getMinShare(), s2.getDemand());
+  s2.getMinShare(), demand2);
   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
   resourceUsage1, minShare1);
   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,

{code}
)

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is not 
> perfect.
> It has a problem, as below:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> although it does not want any. This wastes scheduling time when assigning 
> containers to queues or applications.
> I have fixed it, and I will upload the patch to this JIRA.
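
A minimal, self-contained sketch of the ordering rule the patch for this issue 
introduces, with demands simplified to plain longs instead of the 
Resource/Resources API (an illustration only, not the actual FairSharePolicy 
code):

{code:java}
import java.util.Comparator;

public class DemandFirstSketch {
  // Zero-demand schedulables sort last, so queues that actually want
  // resources are offered containers first; all other cases fall through
  // to the normal fair-share comparison (represented here by 0).
  static final Comparator<Long> DEMAND_FIRST = (demand1, demand2) -> {
    if (demand1 == 0 && demand2 != 0) {
      return 1;   // s1 has no demand: put it at the end
    } else if (demand2 == 0 && demand1 != 0) {
      return -1;  // s2 has no demand: put it at the end
    }
    return 0;     // both or neither have demand: defer to fair share
  };
}
{code}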



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-06 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6769:
-
Comment: was deleted

(was: diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
index f8cdb45929..e930b80e45 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
@@ -79,6 +79,19 @@ public String getName() {
 
 @Override
 public int compare(Schedulable s1, Schedulable s2) {
+  Resource demand1 = s1.getDemand();
+  Resource demand2 = s2.getDemand();
+  // Put the schedulable which does not require resource to
+  // the end. So the other schedulable can get resource as soon as
+  // possible though it use resource greater then it minShare or demand.
+  if (demand1.equals(Resources.none()) &&
+  !demand2.equals(Resources.none())) {
+return 1;
+  } else if (demand2.equals(Resources.none()) &&
+  !demand1.equals(Resources.none())) {
+return -1;
+  }
+  
   double minShareRatio1, minShareRatio2;
   double useToWeightRatio1, useToWeightRatio2;
   double weight1, weight2;
@@ -86,9 +99,9 @@ public int compare(Schedulable s1, Schedulable s2) {
   Resource resourceUsage1 = s1.getResourceUsage();
   Resource resourceUsage2 = s2.getResourceUsage();
   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
-  s1.getMinShare(), s1.getDemand());
+  s1.getMinShare(), demand1);
   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
-  s2.getMinShare(), s2.getDemand());
+  s2.getMinShare(), demand2);
   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
   resourceUsage1, minShare1);
   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
)

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is not 
> perfect.
> It has a problem, as below:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> although it does not want any. This wastes scheduling time when assigning 
> containers to queues or applications.
> I have fixed it, and I will upload the patch to this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6720) Support updating FPGA related constraint node label after FPGA device re-configuration

2017-07-06 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077524#comment-16077524
 ] 

Zhankun Tang commented on YARN-6720:


[~wangda], I think this depends on YARN-3409's constraint label APIs.
Agreed that for GPU, a constraint label update after a container finishes, to 
indicate that the docker image has been localized, is helpful for improving 
scheduling. Our idea of updating the FPGA IP constraint label is the same.

One thing I am uncertain about is how we can make these constraint labels easy 
to use. Do we need to define plenty of constant key strings? For instance, the 
GPU handler for every vendor might need to set a constraint 
"GPU_DOCKER_IMAGE_LOCALIZED:True/False" on a node, and the FPGA handler for 
every vendor might need to set "FPGA_IP_NAME:ipname". If so, is it a burden for 
end users to search for and use these scheduling preferences?
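
A purely illustrative sketch of the label vocabulary being discussed; the 
YARN-3409 attribute syntax was not final at this point, so both the key names 
and the expression form below are assumptions, not a real API:

{noformat}
# Attributes a device handler might publish on a node after
# localization / re-configuration:
GPU_DOCKER_IMAGE_LOCALIZED=true
FPGA_IP_NAME=matrix_mul_v1

# A placement expression an AM might then use when requesting containers:
FPGA_IP_NAME == matrix_mul_v1
{noformat}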

> Support updating FPGA related constraint node label after FPGA device 
> re-configuration
> --
>
> Key: YARN-6720
> URL: https://issues.apache.org/jira/browse/YARN-6720
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
> Attachments: 
> Storing-and-Updating-extra-FPGA-resource-attributes-in-hdfs_v1.pdf
>
>
> In order to provide globally optimal scheduling for mutable FPGA resources, 
> it seems easier and more direct to utilize constraint node labels (YARN-3409) 
> instead of extending the global scheduler (YARN-3926) to match both resource 
> count and attributes.
> The rough idea is that the AM sets a constraint node label expression to 
> request containers on the nodes whose FPGA devices have the matching IP, and 
> the NM resource handler then updates the node constraint label if there is an 
> FPGA device re-configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-06 Thread daemon (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077525#comment-16077525
 ] 

daemon commented on YARN-6769:
--

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
index f8cdb45929..e930b80e45 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
@@ -79,6 +79,19 @@ public String getName() {
 
 @Override
 public int compare(Schedulable s1, Schedulable s2) {
+  Resource demand1 = s1.getDemand();
+  Resource demand2 = s2.getDemand();
+  // Put the schedulable which does not require resource to
+  // the end. So the other schedulable can get resource as soon as
+  // possible though it use resource greater then it minShare or demand.
+  if (demand1.equals(Resources.none()) &&
+  !demand2.equals(Resources.none())) {
+return 1;
+  } else if (demand2.equals(Resources.none()) &&
+  !demand1.equals(Resources.none())) {
+return -1;
+  }
+  
   double minShareRatio1, minShareRatio2;
   double useToWeightRatio1, useToWeightRatio2;
   double weight1, weight2;
@@ -86,9 +99,9 @@ public int compare(Schedulable s1, Schedulable s2) {
   Resource resourceUsage1 = s1.getResourceUsage();
   Resource resourceUsage2 = s2.getResourceUsage();
   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
-  s1.getMinShare(), s1.getDemand());
+  s1.getMinShare(), demand1);
   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
-  s2.getMinShare(), s2.getDemand());
+  s2.getMinShare(), demand2);
   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
   resourceUsage1, minShare1);
   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,


> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is not 
> perfect.
> It has a problem, as below:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> although it does not want any. This wastes scheduling time when assigning 
> containers to queues or applications.
> I have fixed it, and I will upload the patch to this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-06 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6769:
-
Comment: was deleted

(was: diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
index f8cdb45929..e930b80e45 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
@@ -79,6 +79,19 @@ public String getName() {
 
 @Override
 public int compare(Schedulable s1, Schedulable s2) {
+  Resource demand1 = s1.getDemand();
+  Resource demand2 = s2.getDemand();
+  // Put the schedulable which does not require resource to
+  // the end. So the other schedulable can get resource as soon as
+  // possible though it use resource greater then it minShare or demand.
+  if (demand1.equals(Resources.none()) &&
+  !demand2.equals(Resources.none())) {
+return 1;
+  } else if (demand2.equals(Resources.none()) &&
+  !demand1.equals(Resources.none())) {
+return -1;
+  }
+  
   double minShareRatio1, minShareRatio2;
   double useToWeightRatio1, useToWeightRatio2;
   double weight1, weight2;
@@ -86,9 +99,9 @@ public int compare(Schedulable s1, Schedulable s2) {
   Resource resourceUsage1 = s1.getResourceUsage();
   Resource resourceUsage2 = s2.getResourceUsage();
   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
-  s1.getMinShare(), s1.getDemand());
+  s1.getMinShare(), demand1);
   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
-  s2.getMinShare(), s2.getDemand());
+  s2.getMinShare(), demand2);
   boolean s1Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
   resourceUsage1, minShare1);
   boolean s2Needy = Resources.lessThan(RESOURCE_CALCULATOR, null,
)

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is not 
> perfect.
> It has a problem, as below:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> although it does not want any. This wastes scheduling time when assigning 
> containers to queues or applications.
> I have fixed it, and I will upload the patch to this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6770) [Docs] A small mistake in the example of TimelineClient

2017-07-06 Thread Jinjiang Ling (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinjiang Ling updated YARN-6770:

Attachment: YARN-6770.patch

> [Docs] A small mistake in the example of TimelineClient
> ---
>
> Key: YARN-6770
> URL: https://issues.apache.org/jira/browse/YARN-6770
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: docs
>Reporter: Jinjiang Ling
>Priority: Trivial
> Attachments: YARN-6770.patch
>
>
> I'm trying to use the timeline client, so I copied the 
> [example|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Publishing_of_application_specific_data]
>  into my application.
> But there is a small mistake in it:
> {quote}
> myDomain.*_setID_*("MyDomain");
> .
> myEntity.*_setEntityID_*("MyApp1")
> {quote}
> The correct version should be 
> {quote}
> myDomain.*_setId_*("MyDomain");
> .
> myEntity._*setEntityId*_("MyApp1");
> {quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6770) [Docs] A small mistake in the example of TimelineClient

2017-07-06 Thread Jinjiang Ling (JIRA)
Jinjiang Ling created YARN-6770:
---

 Summary: [Docs] A small mistake in the example of TimelineClient
 Key: YARN-6770
 URL: https://issues.apache.org/jira/browse/YARN-6770
 Project: Hadoop YARN
  Issue Type: Bug
  Components: docs
Reporter: Jinjiang Ling
Priority: Trivial


I'm trying to use the timeline client, so I copied the 
[example|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Publishing_of_application_specific_data]
 into my application.
But there is a small mistake in it:
{quote}
myDomain.*_setID_*("MyDomain");
.
myEntity.*_setEntityID_*("MyApp1")
{quote}
The correct version should be 
{quote}
myDomain.*_setId_*("MyDomain");
.
myEntity._*setEntityId*_("MyApp1");
{quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-06 Thread daemon (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daemon updated YARN-6769:
-
Description: 
When using the FairScheduler as the RM scheduler, we sort all queues and 
applications before assigning containers. 
We use FairSharePolicy#compare as the comparator, but the comparator is not 
perfect.
It has a problem, as below:
1. When a queue uses resources over its minShare (minResources), it is put 
behind a queue whose demand is zero,
so the zero-demand queue gets a greater opportunity to receive resources 
although it does not want any. This wastes scheduling time when assigning 
containers to queues or applications.

I have fixed it, and I will upload the patch to this JIRA.

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Priority: Minor
> Fix For: 2.9.0
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is not 
> perfect.
> It has a problem, as below:
> 1. When a queue uses resources over its minShare (minResources), it is put 
> behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> although it does not want any. This wastes scheduling time when assigning 
> containers to queues or applications.
> I have fixed it, and I will upload the patch to this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-06 Thread daemon (JIRA)
daemon created YARN-6769:


 Summary: Put the no demand queue after the most in 
FairSharePolicy#compare
 Key: YARN-6769
 URL: https://issues.apache.org/jira/browse/YARN-6769
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 2.7.2
Reporter: daemon
Priority: Minor
 Fix For: 2.9.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6410) FSContext.scheduler should be final

2017-07-06 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077459#comment-16077459
 ] 

Yeliang Cang commented on YARN-6410:


Ok, thank you!

> FSContext.scheduler should be final
> ---
>
> Key: YARN-6410
> URL: https://issues.apache.org/jira/browse/YARN-6410
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6410-001.patch
>
>
> {code}
>   private FairScheduler scheduler;
> {code}
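
A minimal sketch of what the requested change could look like; the constructor 
shape and getter are assumptions for illustration, not the actual FSContext 
source:

{code:java}
public class FSContext {
  // final: assigned exactly once at construction, never reassigned
  private final FairScheduler scheduler;

  FSContext(FairScheduler scheduler) {
    this.scheduler = scheduler;
  }

  public FairScheduler getScheduler() {
    return scheduler;
  }
}
{code}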



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077432#comment-16077432
 ] 

Hadoop QA commented on YARN-6704:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 45s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} YARN-2915 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 44s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in YARN-2915 has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 40s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 5 new + 18 unchanged - 0 fixed = 23 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 2 new + 162 unchanged - 0 fixed = 164 total (was 162) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 49s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 4s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.recover(Map)
  At FederationInterceptor.java:is not thrown in 
org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor.recover(Map)
  At FederationInterceptor.java:[line 302] |
| Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestAMRMProxyService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6704 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875980/YARN-6704-YARN-2915.v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 91e64df40d62 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077408#comment-16077408
 ] 

Hadoop QA commented on YARN-6757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 55s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875999/YARN-6757.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux a0a30a72ee99 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7576a68 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16318/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16318/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16318/console |
| 

[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077394#comment-16077394
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
30s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 20 new + 661 unchanged - 1 fixed = 681 total (was 662) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_131 with JDK v1.8.0_131 generated 5 new + 877 unchanged - 0 fixed = 882 total (was 877) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 22s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 28s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s{color} | {color:green} hadoop-yarn-site in the patch passed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense 

[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077356#comment-16077356
 ] 

Hadoop QA commented on YARN-4266:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 9 new + 226 unchanged - 0 fixed = 235 total (was 226) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 30s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-4266 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875993/YARN-4266.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 29030cad3302 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7576a68 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16317/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16317/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Assigned] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-07-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-6757:
--

Assignee: Yufei Gu  (was: Miklos Szegedi)

> Refactor the usage of 
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
> --
>
> Key: YARN-6757
> URL: https://issues.apache.org/jira/browse/YARN-6757
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-6757.000.patch, YARN-6757.001.patch
>
>
> We should add the ability to specify a custom cgroup path. This is how the 
> documentation of {{linux-container-executor.cgroups.mount-path}} would 
> look:
> {noformat}
> Requested cgroup mount path. YARN has built-in functionality to discover
> the system cgroup mount paths, so use this setting only if the discovery
> does not work.
> This path must exist before the NodeManager is launched.
> The location can vary depending on the Linux distribution in use.
> Common locations include /sys/fs/cgroup and /cgroup.
> If cgroups are not mounted, set
> yarn.nodemanager.linux-container-executor.cgroups.mount
> to true. In that case this setting specifies where the LCE should attempt
> to mount cgroups if they are not found.
> If cgroups are accessible through lxcfs or some other file system,
> then set this path and
> yarn.nodemanager.linux-container-executor.cgroups.mount to false.
> YARN tries to use this path first, before any cgroup mount point
> discovery.
> If it cannot find this directory, it falls back to searching for cgroup
> mount points in the system.
> Only used when the LCE resources handler is set to the
> CgroupsLCEResourcesHandler.
> {noformat}
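
A minimal sketch of how these two properties interact, shown via the Hadoop 
Configuration API (the mount path value here is only an example; in a real 
deployment these keys live in yarn-site.xml):
{code}
import org.apache.hadoop.conf.Configuration;

public class CGroupsMountPathSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Point the NM at an existing cgroup mount instead of relying on
    // discovery. The path must exist before the NodeManager is launched.
    conf.set("yarn.nodemanager.linux-container-executor.cgroups.mount-path",
        "/sys/fs/cgroup");
    // Cgroups are already mounted at this path, so do not ask the LCE to
    // mount them.
    conf.setBoolean(
        "yarn.nodemanager.linux-container-executor.cgroups.mount", false);
    System.out.println(conf.get(
        "yarn.nodemanager.linux-container-executor.cgroups.mount-path"));
  }
}
{code}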



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6355) Preprocessor framework for AM and Client interactions with the RM

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077337#comment-16077337
 ] 

Hadoop QA commented on YARN-6355:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 20 new + 236 unchanged - 15 fixed = 256 total (was 251) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 34s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
31s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Should 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService$AMSProcessorChain
 be a _static_ inner class?  At ApplicationMasterService.java:inner class?  At 
ApplicationMasterService.java:[lines 84-113] |
|  |  Dead store to appAttempt in 
org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService$OCAInterceptor.afterAllocate(ApplicationAttemptId,
 AllocateRequest, AllocateResponse, Object)  At 
OpportunisticContainerAllocatorAMService.java:org.apache.hadoop.yarn.server.resourcemanager.OpportunisticContainerAllocatorAMService$OCAInterceptor.afterAllocate(ApplicationAttemptId,
 AllocateRequest, AllocateResponse, Object)  At 
OpportunisticContainerAllocatorAMService.java:[line 167] |
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker 

[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-07-06 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077330#comment-16077330
 ] 

Yufei Gu commented on YARN-6757:


Uploaded a new patch. 
1. Fixed. 
2. {{CGroupsHandler.getValidCGroups()}} should be OK, since an EnumSet keeps 
the enum constants rather than their name strings; a HashSet of name strings 
is easier for callers to use (see the sketch after this list).
3. 4. 5. 6. Fixed.
7. Yes, it's weird, but we cannot do anything about it.
8. Fixed partially. I need the offline discussion first; then I can post a new patch.
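
For item 2, a minimal sketch of the distinction (the controller names below 
are illustrative, not the exact YARN enum):
{code}
import java.util.EnumSet;
import java.util.HashSet;
import java.util.Set;

public class ValidCGroupsSketch {
  // Illustrative controller enum; the real set of controllers may differ.
  enum CGroupController { CPU, MEMORY, BLKIO }

  // An EnumSet keeps the enum constants themselves, not their name strings.
  static EnumSet<CGroupController> getValidCGroups() {
    return EnumSet.allOf(CGroupController.class);
  }

  // Callers that want name strings can derive a HashSet of names from it.
  static Set<String> getValidCGroupNames() {
    Set<String> names = new HashSet<>();
    for (CGroupController c : getValidCGroups()) {
      names.add(c.name().toLowerCase()); // e.g. "cpu", "memory", "blkio"
    }
    return names;
  }

  public static void main(String[] args) {
    System.out.println(getValidCGroupNames());
  }
}
{code}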


> Refactor the usage of 
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
> --
>
> Key: YARN-6757
> URL: https://issues.apache.org/jira/browse/YARN-6757
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6757.000.patch, YARN-6757.001.patch
>
>
> We should add the ability to specify a custom cgroup path. This is how the 
> documentation of {{linux-container-executor.cgroups.mount-path}} would 
> look:
> {noformat}
> Requested cgroup mount path. YARN has built-in functionality to discover
> the system cgroup mount paths, so use this setting only if the discovery
> does not work.
> This path must exist before the NodeManager is launched.
> The location can vary depending on the Linux distribution in use.
> Common locations include /sys/fs/cgroup and /cgroup.
> If cgroups are not mounted, set
> yarn.nodemanager.linux-container-executor.cgroups.mount
> to true. In that case this setting specifies where the LCE should attempt
> to mount cgroups if they are not found.
> If cgroups are accessible through lxcfs or some other file system,
> then set this path and
> yarn.nodemanager.linux-container-executor.cgroups.mount to false.
> YARN tries to use this path first, before any cgroup mount point
> discovery.
> If it cannot find this directory, it falls back to searching for cgroup
> mount points in the system.
> Only used when the LCE resources handler is set to the
> CgroupsLCEResourcesHandler.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-07-06 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6757:
---
Attachment: YARN-6757.001.patch

> Refactor the usage of 
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
> --
>
> Key: YARN-6757
> URL: https://issues.apache.org/jira/browse/YARN-6757
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6757.000.patch, YARN-6757.001.patch
>
>
> We should add the ability to specify a custom cgroup path. This is how the 
> documentation of {{linux-container-executor.cgroups.mount-path}} would 
> look:
> {noformat}
> Requested cgroup mount path. YARN has built-in functionality to discover
> the system cgroup mount paths, so use this setting only if the discovery
> does not work.
> This path must exist before the NodeManager is launched.
> The location can vary depending on the Linux distribution in use.
> Common locations include /sys/fs/cgroup and /cgroup.
> If cgroups are not mounted, set
> yarn.nodemanager.linux-container-executor.cgroups.mount
> to true. In that case this setting specifies where the LCE should attempt
> to mount cgroups if they are not found.
> If cgroups are accessible through lxcfs or some other file system,
> then set this path and
> yarn.nodemanager.linux-container-executor.cgroups.mount to false.
> YARN tries to use this path first, before any cgroup mount point
> discovery.
> If it cannot find this directory, it falls back to searching for cgroup
> mount points in the system.
> Only used when the LCE resources handler is set to the
> CgroupsLCEResourcesHandler.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077313#comment-16077313
 ] 

Hadoop QA commented on YARN-6768:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 12 new + 32 unchanged 
- 7 fixed = 44 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875984/YARN-6768.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dae3a2857415 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7576a68 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16314/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16314/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16314/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: 

[jira] [Updated] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-07-06 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-4266:
--
Attachment: YARN-4266.002.patch

[~shaneku...@gmail.com], here is the patch I've been working on. It's based on 
[~tangzhankun]'s initial patch. It allows a user to specify that their UID:GID 
pair be used instead of their username when entering a container. 
Additionally, it uses --group-add to add the rest of their groups. Review and 
comments would be much appreciated!

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266.002.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through: log aggregation, artifact 
> deletion, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-06 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.branch-2.015.patch

[~sunilg], [~leftnoteasy], [~jlowe]:
Since branch-2 and 2.8 are somewhat different from trunk, it was necessary to 
make some design decisions that I would like you to be aware of when reviewing 
this backport:
- As noted 
[here|https://issues.apache.org/jira/browse/YARN-2113?focusedCommentId=16023111&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16023111],
 I did not backport YARN-5889 because it depends on locking changes from 
YARN-3140 and other locking JIRAs.
- In trunk, a change was made in YARN-5889 that changed the way 
{{computeUserLimit}} calculates user limit. In branch-2 and branch-2.8, 
{{userLimitResource = (all used resources in queue) / (num active users in 
queue)}}. In trunk after YARN-5889, {{userLimitResource = (all used resources 
by active users in queue) / (num active users)}}.
-- Since branch-2 and 2.8 use {{all used resources by active users in queue}} 
instead of {{all used resources in queue}}, it is not necessary to modify 
{{LeafQueue}} to keep track of when resources are activated and deactivated 
as was done in {{UsersManager}} in trunk.
-- However, I did add the activeUsersSet to LeafQueue, and to all the places 
where it is modified, so it can be used to sum active users times weights.
-- Therefore, it wasn't necessary to create a separate UsersManager class as 
was done in YARN-5889. Instead, I added a small amount of code in 
ActiveUsersManager to keep track of active users and to indicate when users are 
either activated or deactivated.
- {{LeafQueue#sumActiveUsersTimesWeights}} should not do anything that 
synchronizes or locks. This is to avoid deadlocks because it is called by 
getHeadRoom (indirectly), which is called by {{FiCaSchedulerApp}}.
{code}
  float sumActiveUsersTimesWeights() {
    float count = 0.0f;
    for (String userName : activeUsersSet) {
      User user = users.get(userName);
      count += (user != null) ? user.getWeight() : 1.0f;
    }
    return count;
  }
{code}
-- This opens up a race condition for when a user is added or removed from 
{{activeUsersSet}} while {{sumActiveUsersTimesWeights}} is iterating over the 
set.
--- I'm not an expert in Java synchronization. Does this expose {{LeafQueue}} to 
concurrent modification exceptions?
--- There is no {{ConcurrentHashSet}}, so should I make {{activeUsersSet}} a 
{{ConcurrentHashMap}}? (See the sketch below.)
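
For reference, one option is a concurrent {{Set}} view backed by a 
{{ConcurrentHashMap}}; its iterators are weakly consistent, so they never 
throw {{ConcurrentModificationException}}. A minimal sketch (not part of the 
patch; {{Collections.newSetFromMap}} is available since Java 6, so it works on 
branch-2):
{code}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ActiveUsersSetSketch {
  // A Set view backed by a ConcurrentHashMap. Iteration may or may not see
  // concurrent additions/removals, but it will not fail.
  private final Set<String> activeUsersSet =
      Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

  // Simplified stand-in for the per-user weight lookup.
  float sumActiveUsersTimesWeights(Map<String, Float> weights) {
    float count = 0.0f;
    for (String userName : activeUsersSet) { // safe without locking
      Float w = weights.get(userName);
      count += (w != null) ? w : 1.0f;
    }
    return count;
  }
}
{code}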


> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha4
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-06 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reopened YARN-5892:
--

Re-opening in order to backport to branch-2 and 2.8.

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha4
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6355) Preprocessor framework for AM and Client interactions with the RM

2017-07-06 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6355:
--
Attachment: YARN-6355.004.patch

Further cleanup and test case fixes.

> Preprocessor framework for AM and Client interactions with the RM
> -
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: amrmproxy, resourcemanager
> Attachments: YARN-6355.001.patch, YARN-6355.002.patch, 
> YARN-6355.003.patch, YARN-6355.004.patch, YARN-6355-one-pager.pdf
>
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) and by Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService 
> centrally as well.
> This would be similar in spirit to a Java Servlet filter chain, where the 
> order of the interceptors can be declared externally (see the sketch below).
> One possible use case: the {{OpportunisticContainerAllocatorAMService}} is 
> implemented as a wrapper over the {{ApplicationMasterService}}; it would 
> probably be better to implement it as an interceptor.
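
A minimal sketch of the chain-of-interceptors idea (names and signatures below 
are illustrative assumptions, not the actual YARN API):
{code}
// Hypothetical processor contract; the real interface would use
// AllocateRequest/AllocateResponse instead of String.
interface AMSProcessor {
  String allocate(String request);
}

// An interceptor wraps the next processor and can enforce a policy around it.
abstract class AMSInterceptor implements AMSProcessor {
  protected final AMSProcessor next;
  protected AMSInterceptor(AMSProcessor next) { this.next = next; }
}

class OpportunisticInterceptor extends AMSInterceptor {
  OpportunisticInterceptor(AMSProcessor next) { super(next); }
  public String allocate(String request) {
    // apply a policy, then delegate down the chain
    return next.allocate(request + "+opportunistic");
  }
}

// Terminal element: the "real" ApplicationMasterService logic.
class DefaultAMSProcessor implements AMSProcessor {
  public String allocate(String request) { return "allocated:" + request; }
}

public class ChainSketch {
  public static void main(String[] args) {
    // The externally declared order determines how the chain is wired.
    AMSProcessor chain =
        new OpportunisticInterceptor(new DefaultAMSProcessor());
    System.out.println(chain.allocate("container-ask"));
  }
}
{code}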



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-06 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6768:
--
Attachment: YARN-6768.2.patch

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6704) Add Federation Interceptor restart when work preserving NM is enabled

2017-07-06 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6704:
---
Attachment: YARN-6704-YARN-2915.v1.patch

> Add Federation Interceptor restart when work preserving NM is enabled
> -
>
> Key: YARN-6704
> URL: https://issues.apache.org/jira/browse/YARN-6704
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
> Attachments: YARN-6704-YARN-2915.v1.patch
>
>
> YARN-1336 added the ability to restart the NM without losing any running 
> containers. {{AMRMProxy}} restart was added in YARN-6127. In a federated YARN 
> environment, there is additional state in the {{FederationInterceptor}} to 
> allow for spanning across multiple sub-clusters, so we need to enhance 
> {{FederationInterceptor}} to support work-preserving restart.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-07-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077190#comment-16077190
 ] 

Jian He commented on YARN-2919:
---

bq. assume RenewalTimerTask. run is invoked by the timer task and it has passed 
through the if (!dttr.isTimerCancelled()) but before 
renewToken(DelegationTokenToRenew) or in between renewToken & 
setTimerForTokenRenewal is invoked, removeApplicationFromRenewal gets 
triggered. In which case unnecessary renew or timer task scheduling happens

I think such a situation always exists, regardless of whether the flag is 
checked inside the method (as done by this patch) or outside it (the cancelled 
flag that exists today). E.g., with the current patch, if the renew method has 
passed the {{if (!cancelling.get())}} check and the cancel executes before the 
renew is invoked, then renewToken will still be executed. Unless we completely 
synchronize cancel and renew on the client side, which would bring more 
overhead, it is possible for a renew to happen immediately after a cancel.

On the other hand, it is possible, but rare, for the mentioned scenario to 
occur, because {{dttr.cancelTimer();}} cancels the timer first, and the 
follow-up cancelToken(dttr) does not cancel the token immediately but rather 
enqueues the cancel task, which leaves more room for the other renewal thread 
executing in parallel to run to completion.

Anyway, we could synchronize both operations completely to avoid this rare 
scenario at the cost of more overhead, or keep good fallback code to handle 
the failure, as is done today. (BTW, do you know if this issue occurs in 
reality, or is it an observation from reading the code?) IMHO, the existing 
approach may be fine.
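
To illustrate the point, a schematic sketch (not the actual 
DelegationTokenRenewer code): any flag check followed by the renew call leaves 
a window in which cancel can interleave.
{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Schematic only: names mirror the discussion above, not the real YARN classes.
class RenewCancelRaceSketch {
  private final AtomicBoolean cancelling = new AtomicBoolean(false);

  void renewIfNotCancelled() {
    if (!cancelling.get()) {
      // <-- window: cancel() may run here, after the check has passed,
      //     so renewToken() still executes after a cancel.
      renewToken();
    }
  }

  void cancel() {
    cancelling.set(true);
    // ... enqueue the actual token cancellation ...
  }

  private void renewToken() { /* RPC to renew the token */ }
}
{code}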



> Potential race between renew and cancel in DelegationTokenRenwer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-06 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6768:
--
Attachment: YARN-6768.1.patch

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-06 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created YARN-6768:
-

 Summary: Improve performance of yarn api record toString and 
fromString
 Key: YARN-6768
 URL: https://issues.apache.org/jira/browse/YARN-6768
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077103#comment-16077103
 ] 

Hadoop QA commented on YARN-6033:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
24s{color} | {color:green} YARN-5673 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} YARN-5673 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} YARN-5673 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-5673 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845506/YARN-6033-YARN-5673.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux ecb08224900a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5673 / e49e0a6 |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16313/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16313/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-07-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077085#comment-16077085
 ] 

Naganarasimha G R commented on YARN-2919:
-

[~jianhe], 
Maybe my understanding is wrong; correct me if so. I went mostly by the 
description in MAPREDUCE-5384, which describes two possible race scenarios; 
one is clearly correct, but I am not completely sure about the second one.
{quote}
Races to fix:
# TimerTask#cancel() disallows future invocations of run(), but doesn't abort 
an already scheduled/started run().
# In the context of DelegationTokenRenewal, RenewalTimerTask#cancel() only 
cancels that TimerTask instance. However, it has no effect on any other 
TimerTasks created for that token.
{quote}
Assume {{RenewalTimerTask.run}} is invoked by the timer task and has passed 
the {{if (!dttr.isTimerCancelled())}} check, but {{removeApplicationFromRenewal}} 
gets triggered before {{renewToken(DelegationTokenToRenew)}}, or between 
{{renewToken}} and {{setTimerForTokenRenewal}}. In that case an unnecessary 
renew or timer-task scheduling happens.
IIUC this is also what Sid meant by ??The in-process renew can check these till 
the last moment before invoking the actual renew, and subsequent renewals will 
not attempt a renew (maybe even not schedule a renew)?? in his 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-5384?focusedCommentId=13745421&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13745421]
 in MAPREDUCE-5384.
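
Race #1 is inherent to {{java.util.Timer}}: {{TimerTask.cancel()}} only 
disallows future runs and does not abort a {{run()}} that has already started, 
as this small standalone example shows:
{code}
import java.util.Timer;
import java.util.TimerTask;

public class TimerCancelDemo {
  public static void main(String[] args) throws Exception {
    Timer timer = new Timer();
    TimerTask task = new TimerTask() {
      @Override public void run() {
        System.out.println("run() started");
        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        System.out.println("run() finished despite cancel()"); // still prints
      }
    };
    timer.schedule(task, 0);
    Thread.sleep(100);  // let run() start
    task.cancel();      // prevents future executions, does not stop this one
    timer.cancel();
    Thread.sleep(1500); // wait long enough to observe the completed run
  }
}
{code}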


> Potential race between renew and cancel in DelegationTokenRenwer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6761) Fix build for YARN-3926 branch

2017-07-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077079#comment-16077079
 ] 

Wangda Tan commented on YARN-6761:
--

Makes sense to me, but the performance fix should be a blocker for the branch merge.

> Fix build for YARN-3926 branch
> --
>
> Key: YARN-6761
> URL: https://issues.apache.org/jira/browse/YARN-6761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6761-YARN-3926.001.patch
>
>
> After rebasing to trunk, due to the addition of YARN-6679, compilation of the 
> YARN-3926 branch is broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-06 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077077#comment-16077077
 ] 

Abdullah Yousufi commented on YARN-5146:


Due to Ember's design, each model requires its own serializer and adapter, 
which Ember uses to execute the call to the API and load the data into the 
model; this results in separate API calls.

However, I don't believe that hovering over different queues in the queue 
navigator causes API calls. Currently, that logic seems to only happen on load 
and refresh.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6761) Fix build for YARN-3926 branch

2017-07-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077040#comment-16077040
 ] 

Sunil G edited comment on YARN-6761 at 7/6/17 6:40 PM:
---

Thanks [~leftnoteasy] for the comments. To improve performance in resource 
profiles, I have done some more optimization, so I can take care of the 
suggestions given here in the perf patch. In that case, we could unblock the 
build break with the current patch. As discussed offline, do you feel this is fine?


was (Author: sunilg):
Thanks [~leftnoteasy] for the comments. To improve performance in resource 
profiles, I have done some more optimization, so I can take care of the 
suggestions given here in the perf patch. In that case, we could unblock the 
build break with the current patch. Thoughts? 

> Fix build for YARN-3926 branch
> --
>
> Key: YARN-6761
> URL: https://issues.apache.org/jira/browse/YARN-6761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6761-YARN-3926.001.patch
>
>
> After rebasing to trunk, due to the addition of YARN-6679, compilation of the 
> YARN-3926 branch is broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6761) Fix build for YARN-3926 branch

2017-07-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077040#comment-16077040
 ] 

Sunil G commented on YARN-6761:
---

Thanks [~leftnoteasy] for the comments. To improve performance in resource 
profiles, I have done some more optimization, so I can take care of the 
suggestions given here in the perf patch. In that case, we could unblock the 
build break with the current patch. Thoughts? 

> Fix build for YARN-3926 branch
> --
>
> Key: YARN-6761
> URL: https://issues.apache.org/jira/browse/YARN-6761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6761-YARN-3926.001.patch
>
>
> After rebasing to trunk, due to the addition of YARN-6679, compilation of the 
> YARN-3926 branch is broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6033) Add support for sections in container-executor configuration file

2017-07-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077029#comment-16077029
 ] 

Wangda Tan commented on YARN-6033:
--

Thanks [~vvasudev] for the patch. I just took a look at it; in general it is 
in good shape. In addition to the configuration sections themselves, it also 
brings in the gtest framework, which will make it much easier to add tests in 
the future.

Some comments:
1) It's better to move {{struct section executor_cfg = {.size=0, 
.sectiondetails=NULL};}} and {{struct configuration CFG = {.size=0, 
.sections=NULL};}} from container-executor.c to configurations.c, and add 
getter/setter methods to configuration.h. I think we should not couple the 
life cycles of the configuration and the container-executor, since we could 
add other modules beyond container-executor in the new design.
2) Some rename suggestions: 
- sectionentry: is it better to call it {{kv_pair}}? 
- sectiondetails: if you agree with the above, how about renaming it to {{kv_pairs}}?

+ [~sunilg].

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6033) Add support for sections in container-executor configuration file

2017-07-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077029#comment-16077029
 ] 

Wangda Tan edited comment on YARN-6033 at 7/6/17 6:31 PM:
--

Thanks [~vvasudev] for the patch. I just took a look at it; in general it is 
in good shape. In addition to the configuration sections themselves, it also 
brings in the gtest framework, which will make it much easier to add tests in 
the future.

Some comments:
1) It's better to move
{code}
struct section executor_cfg = {.size=0, .sectiondetails=NULL};
{code}
and
{code}
struct configuration CFG = {.size=0, .sections=NULL};
{code}
from container-executor.c to configurations.c, and add getter/setter methods 
to configuration.h. I think we should not couple the life cycles of the 
configuration and the container-executor, since we could add other modules 
beyond container-executor in the new design.
2) Some rename suggestions: 
- sectionentry: is it better to call it {{kv_pair}}? 
- sectiondetails: if you agree with the above, how about renaming it to {{kv_pairs}}?

+ [~sunilg].


was (Author: leftnoteasy):
Thanks [~vvasudev] for the patch. I just took a look at it; in general it is 
in good shape. In addition to the configuration sections themselves, it also 
brings in the gtest framework, which will make it much easier to add tests in 
the future.

Some comments:
1) It's better to move {{struct section executor_cfg = {.size=0, 
.sectiondetails=NULL};}} and {{struct configuration CFG = {.size=0, 
.sections=NULL};}} from container-executor.c to configurations.c, and add 
getter/setter methods to configuration.h. I think we should not couple the 
life cycles of the configuration and the container-executor, since we could 
add other modules beyond container-executor in the new design.
2) Some rename suggestions: 
- sectionentry: is it better to call it {{kv_pair}}? 
- sectiondetails: if you agree with the above, how about renaming it to {{kv_pairs}}?

+ [~sunilg].

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-07-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077028#comment-16077028
 ] 

Jian He commented on YARN-2919:
---

bq.  This race is about stopping creation of the timer task or trying to renew 
the token if cancel is already invoked on it.
I don't see where the race is, based on the latest code. The 
"removeApplicationFromRenewal" method is where the cancel happens; inside it, 
"dttr.cancelTimer();" is always called first, before cancelling the actual 
token. Could you explain a bit more where the race is?

> Potential race between renew and cancel in DelegationTokenRenwer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6654) RollingLevelDBTimelineStore backwards incompatible after fst upgrade

2017-07-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077007#comment-16077007
 ] 

Junping Du commented on YARN-6654:
--

2.8.1 will be a security release. Let's move to 2.8.2.

> RollingLevelDBTimelineStore backwards incompatible after fst upgrade
> 
>
> Key: YARN-6654
> URL: https://issues.apache.org/jira/browse/YARN-6654
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6654.1.patch, YARN-6654.2.patch, YARN-6654.3.patch
>
>
> There is a small backwards-incompatible change introduced while upgrading 
> the fst library from 2.24 to 2.50.
> {code}
> Exception in thread "main" java.io.IOException: java.lang.RuntimeException: 
> unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:243)
>   at 
> org.nustaq.serialization.FSTConfiguration.asObject(FSTConfiguration.java:1125)
>   at org.nustaq.serialization.FSTNoJackson.main(FSTNoJackson.java:31)
> Caused by: java.lang.RuntimeException: unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTClazzNameRegistry.decodeClass(FSTClazzNameRegistry.java:180)
>   at 
> org.nustaq.serialization.coders.FSTStreamDecoder.readClass(FSTStreamDecoder.java:472)
>   at 
> org.nustaq.serialization.FSTObjectInput.readClass(FSTObjectInput.java:933)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:343)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTArrayListSerializer.instantiate(FSTArrayListSerializer.java:63)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTMapSerializer.instantiate(FSTMapSerializer.java:78)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:307)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:241)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6654) RollingLevelDBTimelineStore backwards incompatible after fst upgrade

2017-07-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6654:
-
Target Version/s: 2.8.2  (was: 2.8.1)

> RollingLevelDBTimelineStore backwards incompatible after fst upgrade
> 
>
> Key: YARN-6654
> URL: https://issues.apache.org/jira/browse/YARN-6654
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6654.1.patch, YARN-6654.2.patch, YARN-6654.3.patch
>
>
> There is a small backwards-incompatible change introduced while upgrading 
> the fst library from 2.24 to 2.50.
> {code}
> Exception in thread "main" java.io.IOException: java.lang.RuntimeException: 
> unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:243)
>   at 
> org.nustaq.serialization.FSTConfiguration.asObject(FSTConfiguration.java:1125)
>   at org.nustaq.serialization.FSTNoJackson.main(FSTNoJackson.java:31)
> Caused by: java.lang.RuntimeException: unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTClazzNameRegistry.decodeClass(FSTClazzNameRegistry.java:180)
>   at 
> org.nustaq.serialization.coders.FSTStreamDecoder.readClass(FSTStreamDecoder.java:472)
>   at 
> org.nustaq.serialization.FSTObjectInput.readClass(FSTObjectInput.java:933)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:343)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTArrayListSerializer.instantiate(FSTArrayListSerializer.java:63)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTMapSerializer.instantiate(FSTMapSerializer.java:78)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:307)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:241)
> {code}
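
For illustration, a minimal FST round trip of the kind that fails here (a 
sketch only; the map/list payload is hypothetical, picked because the stack 
trace above goes through FSTMapSerializer and FSTArrayListSerializer). The 
incompatibility appears when bytes written by fst 2.24 are read by 2.50, whose 
registry resolves the pre-registered class codes differently (the "code 83" 
lookup in FSTClazzNameRegistry.decodeClass is one such registry lookup).

{code}
import org.nustaq.serialization.FSTConfiguration;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class FstRoundTrip {
  // One shared configuration; createDefaultConfiguration() pre-registers
  // common JDK classes under small integer codes.
  private static final FSTConfiguration CONF =
      FSTConfiguration.createDefaultConfiguration();

  public static void main(String[] args) {
    Map<String, ArrayList<String>> entity = new HashMap<>();
    entity.put("events", new ArrayList<>(Arrays.asList("CREATED", "FINISHED")));

    byte[] bytes = CONF.asByteArray(entity); // imagine these bytes persisted by fst 2.24
    Object back = CONF.asObject(bytes);      // reading them with fst 2.50 is what throws
    System.out.println(back);
  }
}
{code}

Within a single fst version this round trip succeeds; the failure only shows 
up when reader and writer disagree on the code-to-class registry.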






[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-07-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16077001#comment-16077001
 ] 

Junping Du commented on YARN-2919:
--

Thanks for additional review, [~jianhe]!
bq. there are some behavior changes, the return value of renew method was 
supposed to be the expiration time, now '-1' is returned as an error code, 
which old program does not understand - old program was expecting an exception 
if renew fails.
Oh, I may have missed that. "-1" is dangerous, as callers will use the return 
value as the expiration time, e.g. DelegationTokenRenewer.renew() or 
DelegationTokenFetcher.renewTokens(). I think we should throw an IOException 
instead, given that the behavior is a failed renew (because the token is being 
cancelled), and most callers should already handle IOException explicitly for 
renew-failure cases.
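
To make the contrast concrete, a sketch of the two contracts (all names here 
are hypothetical; this is not the patch itself):

{code}
import java.io.IOException;

class RenewContractSketch {
  private volatile boolean beingCancelled;

  // Sentinel style: -1 overloads the return value, and old callers that
  // treat every return value as an expiration time will schedule a timer
  // at time -1 instead of failing.
  long renewWithSentinel() {
    if (beingCancelled) {
      return -1;
    }
    return doRenew();
  }

  // Exception style: the return value always means exactly one thing
  // (expiration time), and cancellation surfaces as an explicit failure.
  long renewOrThrow() throws IOException {
    if (beingCancelled) {
      throw new IOException("Cannot renew: token is being cancelled");
    }
    return doRenew();
  }

  private long doRenew() {
    return System.currentTimeMillis() + 24L * 60 * 60 * 1000; // e.g. +24h
  }
}
{code}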

> Potential race between renew and cancel in DelegationTokenRenwer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 






[jira] [Commented] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076958#comment-16076958
 ] 

Hadoop QA commented on YARN-6746:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875951/YARN-6746.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 127b0488b3dc 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7576a68 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16312/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16312/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16312/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects 

[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-06 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076933#comment-16076933
 ] 

Akhil PB commented on YARN-5146:


Hi [~ayousufi]
I suppose the three models hit the same scheduler endpoint three times. Is it 
possible to make it a single hit, by caching or something similar? From what I 
recall, the scheduler APIs are fired each time you hover over a queue.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> Current implementation in branch YARN-3368 only support capacity scheduler,  
> we want to make it support fair scheduler. 






[jira] [Updated] (YARN-6767) Timeline client won't be able to write when TimelineCollector is not up yet, or NM is down

2017-07-06 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6767:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-5355

> Timeline client won't be able to write when TimelineCollector is not up yet, 
> or NM is down
> --
>
> Key: YARN-6767
> URL: https://issues.apache.org/jira/browse/YARN-6767
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient
>Affects Versions: 3.0.0-alpha4
>Reporter: Haibo Chen
>
> As discussed in the call, when an application first starts to run, its 
> corresponding TimelineCollector instance may not be up yet, or if the 
> TimelineCollector goes down when node manager dies (TimelineCollector now 
> runs as part of NM auxiliary services), the timeline client
> will not be able to write entities. We need to address or mitigate the issue if 
> possible, or at least call it out.






[jira] [Updated] (YARN-6767) Timeline client won't be able to write when TimelineCollector is not up yet, or NM is down

2017-07-06 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6767:
-
Description: 
As discussed in the call, when an application first starts to run, its 
corresponding TimelineCollector instance may not be up yet, or if the 
TimelineCollector goes down when node manager dies (TimelineCollector now runs 
as part of NM auxiliary services), the timeline client
will not be able to write entities. We need to address or mitigate the issue if 
possible, or at least call it out.

> Timeline client won't be able to write when TimelineCollector is not up yet, 
> or NM is down
> --
>
> Key: YARN-6767
> URL: https://issues.apache.org/jira/browse/YARN-6767
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineclient
>Affects Versions: 3.0.0-alpha4
>Reporter: Haibo Chen
>
> As discussed in the call, when an application first starts to run, its 
> corresponding TimelineCollector instance may not be up yet, or if the 
> TimelineCollector goes down when node manager dies (TimelineCollector now 
> runs as part of NM auxiliary services), the timeline client
> will not be able to write entities. We need to address or mitigate the issue if 
> possible, or at least call it out.






[jira] [Created] (YARN-6767) Timeline client won't be able to write when TimelineCollector is not up yet, or NM is down

2017-07-06 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6767:


 Summary: Timeline client won't be able to write when 
TimelineCollector is not up yet, or NM is down
 Key: YARN-6767
 URL: https://issues.apache.org/jira/browse/YARN-6767
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineclient
Affects Versions: 3.0.0-alpha4
Reporter: Haibo Chen









[jira] [Commented] (YARN-2919) Potential race between renew and cancel in DelegationTokenRenwer

2017-07-06 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076856#comment-16076856
 ] 

Naganarasimha G R commented on YARN-2919:
-

Thanks for the review, [~jianhe].
bq. could you elaborate what the race condition is based on code? The jira 
description is a bit vague.
Hmm, OK, I can reword it, but the best way to explain it is via the original 
[comment|https://issues.apache.org/jira/browse/MAPREDUCE-5384?focusedCommentId=13745421&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13745421]
 from Sid, which [~kasha] mentioned in YARN-2874. The race is about stopping 
creation of the timer task, or about renew being attempted on a token on which 
cancel has already been invoked.

bq. Caller can just make a copy of the token and do all sorts of operation and 
this flag becomes moot.
Agreed that if a copy is made there is not much we can do, but IIUC cloning is 
not the general usage, and the flag would still be useful to avoid an 
unnecessary renew call or timer-task creation, given that these paths are 
multi-threaded in nature. In my initial approach I had put the flag in the 
RM's {{DelegationTokenToRenew}}, which I think was rightfully corrected by 
[~kasha] to be put in Token.

bq. Also, there are some behavior changes, the return value of renew method was 
supposed to be the expiration time, now '-1' is returned as an error code, 
which old program does not understand - old program was expecting an exception 
if renew fails. And it's possible for old program to wrongly interprets the 
'-1' as the expiration time.
Agreed, but the reason I did it that way is that the API did not clearly state 
that an exception will be thrown when the token is cancelled; that is left to 
the renewer (interface), so implementations might want to differ. On the other 
hand, if the expiration returned is less than it was before the renew, the 
check for requesting a new token passes, so I thought this was safer. If you 
feel it is better to throw an exception, I will do so, but should it then be a 
plain IOException? What if the renewer was throwing some subclass?
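
For reference, a minimal sketch of the flag-in-Token idea under discussion 
(names are hypothetical; this is not the patch): cancel sets the flag exactly 
once, and both the timer-task creation path and renew check it first.

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

class CancellableTokenSketch {
  private final AtomicBoolean cancelled = new AtomicBoolean(false);

  long renew() throws IOException {
    if (cancelled.get()) {                       // renew loses the race cleanly
      throw new IOException("renew called on a cancelled token");
    }
    return doRenew();
  }

  void cancel() throws IOException {
    if (cancelled.compareAndSet(false, true)) {  // only the first cancel acts
      doCancel();
    }
  }

  boolean isCancelled() {
    return cancelled.get();                      // checked before creating a timer task
  }

  private long doRenew() { return System.currentTimeMillis() + 3_600_000L; }
  private void doCancel() { }
}
{code}

Note the flag only prevents *starting* new work; a renew already past its 
check can still overlap with cancel, which is the residual race this JIRA 
describes.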



> Potential race between renew and cancel in DelegationTokenRenwer 
> -
>
> Key: YARN-2919
> URL: https://issues.apache.org/jira/browse/YARN-2919
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Naganarasimha G R
>Priority: Critical
> Attachments: YARN-2919.002.patch, YARN-2919.003.patch, 
> YARN-2919.004.patch, YARN-2919.005.patch, YARN-2919.20141209-1.patch
>
>
> YARN-2874 fixes a deadlock in DelegationTokenRenewer, but there is still a 
> race because of which a renewal in flight isn't interrupted by a cancel. 






[jira] [Commented] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-07-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076855#comment-16076855
 ] 

Daniel Templeton commented on YARN-6746:


LGTM +1. I'll commit it later.

> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6746.001.patch, YARN-6746.001.patch, 
> YARN-6746.001.patch, YARN-6746.004.patch
>
>
> The function is unused.  It also appears to be broken.






[jira] [Updated] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-07-06 Thread Deepti Sawhney (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepti Sawhney updated YARN-6746:
-
Attachment: YARN-6746.004.patch

Patch created by "Deepti Sawhney" as per request.

> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6746.001.patch, YARN-6746.001.patch, 
> YARN-6746.001.patch, YARN-6746.004.patch
>
>
> The function is unused.  It also appears to be broken.






[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-06 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076844#comment-16076844
 ] 

Abdullah Yousufi commented on YARN-5146:


Thanks for the comments [~akhilpb].

We can definitely use the existing eq helper.

And because there are now three separate models, one per scheduler, each model 
does hit the scheduler REST API endpoint from those two routes.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> Current implementation in branch YARN-3368 only support capacity scheduler,  
> we want to make it support fair scheduler. 






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-07-06 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076807#comment-16076807
 ] 

Eric Payne commented on YARN-2113:
--

Thanks, Sunil. I committed to branch-2 and branch-2.8.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.3
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-6727) Improve getQueueUserAcls API to query for specific queue and user

2017-07-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076804#comment-16076804
 ] 

Sunil G commented on YARN-6727:
---

bq. + queue level readlock
{code:title=AbstractCSQueue#hasAccess}
  public boolean hasAccess(QueueACL acl, UserGroupInformation user) {
    return authorizer.checkPermission(
        new AccessRequest(queueEntity, user, SchedulerUtils.toAccessType(acl),
            null, null, Server.getRemoteAddress(), null));
  }
{code}

We are currently invoking authorizer.checkPermission directly, so do we need 
the queue readLock here?

bq. Submission time QUEUE_SUBMIT right we could cache but we need all 
Sorry, I could not follow you; could you please elaborate?

bq. IIUC the refresh interval is about 5/10 min. We dont have direct update or 
notifier as of now.
Cache invalidation is needed in cases where a user's ACLs are changed in the 
system. Hence it makes sense.
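
For illustration, one possible shape of such a cache (a hypothetical sketch, 
not the WIP patch): decisions are memoized per (user, queue, acl) and dropped 
wholesale whenever ACLs are refreshed.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

class QueueAclCacheSketch {
  private final Map<String, Boolean> decisions = new ConcurrentHashMap<>();

  boolean hasAccess(String user, String queue, String acl,
                    Supplier<Boolean> checkPermission) {
    String key = user + "/" + queue + "/" + acl;
    // Only falls through to authorizer.checkPermission on a cache miss.
    return decisions.computeIfAbsent(key, k -> checkPermission.get());
  }

  // Must run whenever ACLs change in the system (e.g. on a refresh);
  // otherwise stale decisions are served until the next refresh interval.
  void invalidateAll() {
    decisions.clear();
  }
}
{code}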

> Improve getQueueUserAcls API to query for  specific queue and user
> --
>
> Key: YARN-6727
> URL: https://issues.apache.org/jira/browse/YARN-6727
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6727.WIP.patch
>
>
> Currently {{ApplicationClientProtocol#getQueueUserAcls}} return data for all 
> the queues available in scheduler for user.
> User wants to know whether he has rights of a particular queue only. For 
> systems with 5K queues returning all queues list is not efficient.
> Suggested change: support additional parameters *userName and queueName* as 
> optional. Admin user should be able to query other users ACL for a particular 
> queueName.






[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-07-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076769#comment-16076769
 ] 

Eric Badger commented on YARN-4266:
---

bq. Commented over on that jira. It would be good if we could get some traction 
over there as it looks like that patch is pretty close to being done.
Oops, must not have reloaded that tab in over a week. Didn't see [~luhuichun]'s 
response. 

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through : log aggregation, artifact 
> deletion etc.,






[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2017-07-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076756#comment-16076756
 ] 

Eric Badger commented on YARN-4266:
---

[~shaneku...@gmail.com], sorry for the delay. I've been sidetracked a little 
bit. The patch that I have still has a few bugs, but I'd like to make sure that 
we agree on the approach before I move forward with cleaning it up and posting 
it here. 
bq. The intent to YARN-5534 is provide a mount white list, so I believe that 
should help here. The initial patch here could hard code the bind mount while 
we test and provide feedback. Hopefully we can leverage YARN-5534 before this 
is wrapped up.
Commented over on that jira. It would be good if we could get some traction 
over there as it looks like that patch is pretty close to being done.

bq. I don't think this is a requirement for the initial version. We could do a 
follow on effort to remove/reduce the need for the bind mounted socket for a 
known list of AMs, assuming the behavior can be changed in those AMs.
This is true. I'm attempting to do my due diligence up front to see if there is 
an avenue to get MRAppMaster to work without mounting /var/run/nscd. I've been 
talking with [~daryn] offline who has done lots of work on UGI stuff and we're 
looking into solutions. One solution that he suggested was going to back to our 
original idea of doing the adduser/usermod hack during container startup. I 
don't like this as much as it only allows you to put the one user in the 
container and will fail any other user lookups. It also would never get 
user/group updates which might be relevant for long-running containers. And on 
top of that, it would be unnecessary in the face of bind-mounting 
/var/run/nscd. However, it does get over the initial obstacle of not being able 
to run without bind-mounting /var/run/nscd. Preferably, we can find a way to 
make the --user switch work with MRAppMaster, but if not maybe this is the way 
to go. Thoughts?

> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-4266.001.patch, YARN-4266.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf, 
> YARN-4266-branch-2.8.001.patch
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers. In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through : log aggregation, artifact 
> deletion etc.,






[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-07-06 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076690#comment-16076690
 ] 

Eric Badger commented on YARN-5534:
---

Any update on this?

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch
>
>
> Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595
> 2.Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3.Possible solutions
> One approach to providing safe mounts is to allow the cluster administrator 
> to configure a set of parent directories as whitelisted mounting directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or 
> their sub-directories can be mounted.
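
A sketch of the proposed check (assumed semantics, not the eventual patch): a 
requested mount is legal only if it normalizes to a whitelisted parent or to a 
path underneath one.

{code}
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

class MountWhitelistSketch {
  private final List<Path> allowedParents;

  // 'whitelist' would come from yarn.nodemanager.volume-mounts.white-list.
  MountWhitelistSketch(List<String> whitelist) {
    this.allowedParents = whitelist.stream()
        .map(p -> Paths.get(p).toAbsolutePath().normalize())
        .collect(Collectors.toList());
  }

  boolean isAllowed(String requestedMount) {
    // normalize() collapses ".." segments so /allowed/../etc cannot sneak by.
    Path p = Paths.get(requestedMount).toAbsolutePath().normalize();
    return allowedParents.stream().anyMatch(p::startsWith);
  }
}
{code}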






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-07-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076677#comment-16076677
 ] 

Sunil G commented on YARN-2113:
---

+1 for the branch-2 and branch-2.8 patches. Thank you very much [~eepayne] for 
this effort; really appreciate it. Please help to commit the patches.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Updated] (YARN-6766) *AppsBlock classes need a printOrNA() helper method

2017-07-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6766:
---
Issue Type: Improvement  (was: Bug)

> *AppsBlock classes need a printOrNA() helper method
> ---
>
> Key: YARN-6766
> URL: https://issues.apache.org/jira/browse/YARN-6766
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Priority: Minor
>  Labels: newbie
>
> The various {{*AppsBlock}} classes are riddled with statements like:
> {code}.append(appInfo.getReservedVCores() == -1 ? "N/A" : 
> String.valueOf(appInfo.getReservedVCores())){code}
> The code would be much cleaner if there were a utility method for that 
> operation, e.g.:
> {code}.append(printData(appInfo.getReservedVCores())){code}
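
For concreteness, the helper could be as small as this (a sketch; the final 
name and placement are left open by this JIRA):

{code}
final class AppsBlockUtilSketch {
  // Hypothetical helper; mirrors the existing "-1 means N/A" convention.
  static String printOrNA(long value) {
    return value == -1 ? "N/A" : String.valueOf(value);
  }
}
// A call site like the one quoted above then collapses to:
//   .append(AppsBlockUtilSketch.printOrNA(appInfo.getReservedVCores()))
{code}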






[jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076662#comment-16076662
 ] 

Hudson commented on YARN-6708:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11971 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11971/])
YARN-6708. Nodemanager container crash after ext3 folder limit. (jlowe: rev 
7576a688ea84aed7206321b1f03594e43a5f216e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestContainerLocalizer.java


> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6708.001.patch, YARN-6708.002.patch, 
> YARN-6708.003.patch, YARN-6708.004.patch, YARN-6708.005.patch, 
> YARN-6708.006.patch, YARN-6708.007.patch
>
>
> Configure umask as *027* for the nodemanager service user
> and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 
> 4 *private* dir localizations the next directory will be *0/14*
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*
> Nodemanager user will not be able check for localization path exists or not.
> {{LocalResourcesTrackerImpl}}
> {code}
> case REQUEST:
>   if (rsrc != null && (!isResourcePresent(rsrc))) {
> LOG.info("Resource " + rsrc.getLocalPath()
> + " is missing, localizing it again");
> removeResource(req);
> rsrc = null;
>   }
>   if (null == rsrc) {
> rsrc = new LocalizedResource(req, dispatcher);
> localrsrc.put(req, rsrc);
>   }
>   break;
> {code}
> *isResourcePresent* will always return false, and the same resource will be 
> re-localized under {{0}} with the next unique number
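
The committed change touches {{ContainerLocalizer}}; the essence of the kind 
of fix involved, sketched under the assumption that localization directories 
get explicit permissions instead of inheriting the service user's umask (an 
illustration, not the literal patch):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class LocalDirSketch {
  // Create a localization directory as 755 so a restrictive service-user
  // umask (e.g. 027) cannot produce an unreadable 750 directory like 0/.
  static void mkdirWithPerms(FileContext lfs, Path dir) throws IOException {
    FsPermission perm = new FsPermission((short) 0755);
    lfs.mkdir(dir, perm, false);
    // mkdir still applies the process umask to 'perm', so force it after:
    lfs.setPermission(dir, perm);
  }
}
{code}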






[jira] [Created] (YARN-6766) *AppsBlock classes need a printOrNA() helper method

2017-07-06 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6766:
--

 Summary: *AppsBlock classes need a printOrNA() helper method
 Key: YARN-6766
 URL: https://issues.apache.org/jira/browse/YARN-6766
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Daniel Templeton
Priority: Minor


The various {{*AppsBlock}} classes are riddled with statements like:

{code}.append(appInfo.getReservedVCores() == -1 ? "N/A" : 
String.valueOf(appInfo.getReservedVCores())){code}

The code would be much cleaner if there were a utility method for that 
operation, e.g.:

{code}.append(printData(appInfo.getReservedVCores())){code}






[jira] [Commented] (YARN-6752) Display reserved resources in web UI per application

2017-07-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076650#comment-16076650
 ] 

Daniel Templeton commented on YARN-6752:


Yeah, that's an option.  Looking at it again, though, I don't see where there's 
a fix that's in scope for this patch.  If you combine columns, you should also 
do that for allocated resources.  Given that the new UI is coming soon, let's 
wait for someone to actually complain. :)

+1

> Display reserved resources in web UI per application
> 
>
> Key: YARN-6752
> URL: https://issues.apache.org/jira/browse/YARN-6752
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-6752.001.patch, YARN-6752.002.patch, 
> YARN-6752.003.patch
>
>
> Show the number of reserved memory and vcores for each application






[jira] [Commented] (YARN-6410) FSContext.scheduler should be final

2017-07-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076634#comment-16076634
 ] 

Daniel Templeton commented on YARN-6410:


Thanks, [~Cyl].  LGTM +1.  I'll commit it later.

> FSContext.scheduler should be final
> ---
>
> Key: YARN-6410
> URL: https://issues.apache.org/jira/browse/YARN-6410
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6410-001.patch
>
>
> {code}
>   private FairScheduler scheduler;
> {code}
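
The requested change is mechanical; sketched below (constructor injection 
assumed, since a final field must be assigned exactly once):

{code}
class FSContext {
  private final FairScheduler scheduler;   // was: private FairScheduler scheduler;

  FSContext(FairScheduler scheduler) {
    this.scheduler = scheduler;
  }

  FairScheduler getScheduler() {
    return scheduler;
  }
}
{code}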






[jira] [Assigned] (YARN-6410) FSContext.scheduler should be final

2017-07-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-6410:
--

Assignee: Yeliang Cang  (was: Daniel Templeton)

> FSContext.scheduler should be final
> ---
>
> Key: YARN-6410
> URL: https://issues.apache.org/jira/browse/YARN-6410
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6410-001.patch
>
>
> {code}
>   private FairScheduler scheduler;
> {code}






[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076631#comment-16076631
 ] 

Daniel Templeton commented on YARN-6765:


Thanks for the patch, [~Cyl] and for being proactive about the checkstyle 
warnings.  Unfortunately, checkstyle is wrong about the loops.  You cannot 
remove an element while iterating.  See 
https://stackoverflow.com/questions/2847555/when-removing-inside-foreach-do-we-need-to-step-back.
  You'll need to revert those changes.
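
A self-contained illustration of the point (generic Java, not the CGroups code 
itself):

{code}
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.List;

public class RemoveWhileIterating {
  public static void main(String[] args) {
    List<String> controllers = new ArrayList<>(List.of("cpu", "memory", "blkio"));

    // What checkstyle suggests: a for-each loop. Removing through the
    // collection inside it fails fast on the next iteration.
    try {
      for (String c : controllers) {
        if (c.equals("cpu")) {
          controllers.remove(c);
        }
      }
    } catch (ConcurrentModificationException expected) {
      System.out.println("for-each + remove -> " + expected);
    }

    // The safe idiom keeps the explicit iterator and removes through it.
    for (Iterator<String> it = controllers.iterator(); it.hasNext(); ) {
      if (it.next().equals("memory")) {
        it.remove();
      }
    }
    System.out.println(controllers);   // [blkio]
  }
}
{code}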

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch, YARN-6765-002.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}






[jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-07-06 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076614#comment-16076614
 ] 

Jason Lowe commented on YARN-6708:
--

+1 latest patch lgtm.  Committing this.

> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6708.001.patch, YARN-6708.002.patch, 
> YARN-6708.003.patch, YARN-6708.004.patch, YARN-6708.005.patch, 
> YARN-6708.006.patch, YARN-6708.007.patch
>
>
> Configure umask as *027* for the nodemanager service user
> and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 
> 4 *private* dir localizations the next directory will be *0/14*
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*
> The nodemanager user will not be able to check whether the localization path 
> exists.
> {{LocalResourcesTrackerImpl}}
> {code}
> case REQUEST:
>   if (rsrc != null && (!isResourcePresent(rsrc))) {
> LOG.info("Resource " + rsrc.getLocalPath()
> + " is missing, localizing it again");
> removeResource(req);
> rsrc = null;
>   }
>   if (null == rsrc) {
> rsrc = new LocalizedResource(req, dispatcher);
> localrsrc.put(req, rsrc);
>   }
>   break;
> {code}
> *isResourcePresent* will always return false, and the same resource will be 
> re-localized under {{0}} with the next unique number






[jira] [Commented] (YARN-6410) FSContext.scheduler should be final

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076546#comment-16076546
 ] 

Hadoop QA commented on YARN-6410:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6410 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875922/YARN-6410-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 021d41bc01a3 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16311/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16311/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16311/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSContext.scheduler should be final
> ---
>
> Key: YARN-6410
> URL: https://issues.apache.org/jira/browse/YARN-6410
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>  

[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076521#comment-16076521
 ] 

Hadoop QA commented on YARN-6130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
2s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
46s{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in 
YARN-5355 has 3 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-5355 has 5 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
6s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5355 has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 5 new + 507 unchanged 
- 3 fixed = 512 total (was 510) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 29s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 

[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16076470#comment-16076470
 ] 

Hadoop QA commented on YARN-6765:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 79 unchanged - 0 fixed = 82 total (was 79) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 2 unchanged - 3 fixed = 3 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 56s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  
java.util.Map$Entry
 is incompatible with expected argument type 
org.apache.hadoop.yarn.api.records.LocalResource in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
  At ContainerLocalizer.java:argument type 
org.apache.hadoop.yarn.api.records.LocalResource in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
  At ContainerLocalizer.java:[line 351] |
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.localizer.TestContainerLocalizer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875919/YARN-6765-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 54f347eee1fa 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Commented] (YARN-6410) FSContext.scheduler should be final

2017-07-06 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076454#comment-16076454
 ] 

Yeliang Cang commented on YARN-6410:


Hi [~templedf], I have submitted a patch as you suggested! Please check it out, 
thank you!
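
A minimal sketch of the kind of change involved (this is only an assumption 
about the shape of the fix; the attached YARN-6410-001.patch is authoritative):
{code}
public class FSContext {
  // final: the reference is assigned once in the constructor and the
  // compiler rejects any later reassignment.
  private final FairScheduler scheduler;

  FSContext(FairScheduler scheduler) {
    this.scheduler = scheduler;
  }
}
{code}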

> FSContext.scheduler should be final
> ---
>
> Key: YARN-6410
> URL: https://issues.apache.org/jira/browse/YARN-6410
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6410-001.patch
>
>
> {code}
>   private FairScheduler scheduler;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6410) FSContext.scheduler should be final

2017-07-06 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6410:
---
Attachment: YARN-6410-001.patch

> FSContext.scheduler should be final
> ---
>
> Key: YARN-6410
> URL: https://issues.apache.org/jira/browse/YARN-6410
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6410-001.patch
>
>
> {code}
>   private FairScheduler scheduler;
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076432#comment-16076432
 ] 

Yeliang Cang commented on YARN-6765:


Patch 002 fixes one checkstyle warning and three findbugs warnings.

Two findbugs warnings remain unchanged:

Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
The {{/sys/fs/cgroup}} path is the OS cgroup mount point and cannot be modified.

Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
I do not think it is suitable to remove the {{removedNullContainers}} variable; 
it may be used in future work.

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch, YARN-6765-002.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6765:
---
Attachment: YARN-6765-002.patch

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch, YARN-6765-002.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-06 Thread Akhil PB (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076421#comment-16076421
 ] 

Akhil PB commented on YARN-5146:


Hi [~ayousufi],
It seems that we already have an {{eq}} helper as part of the 
{{ember-truth-helpers}} package, so I suppose we could remove the newly created 
{{eq}} helper.
Also, are multiple scheduler REST API calls fired in the 
{{routes/yarn-queue.js}} and {{routes/yarn-queues.js}} files?


> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch
>
>
> Current implementation in branch YARN-3368 only support capacity scheduler,  
> we want to make it support fair scheduler. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6763) TestProcfsBasedProcessTree#testProcessTree fails in trunk

2017-07-06 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076381#comment-16076381
 ] 

Bibin A Chundatt commented on YARN-6763:


{noformat}
root@bibinpc:/proc/1452# ps -ef | grep systemd
root       256     1  0 Jul04 ?        00:00:03 /lib/systemd/systemd-journald
root       289     1  0 Jul04 ?        00:00:00 /lib/systemd/systemd-udevd
root       942     1  0 Jul04 ?        00:00:00 /lib/systemd/systemd-logind
root       954     1  0 Jul04 ?        00:00:00 /sbin/cgmanager -m name=systemd
message+   963     1  0 Jul04 ?        00:04:10 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
systemd+  1066     1  0 Jul04 ?        00:00:01 /lib/systemd/systemd-resolved
bibin     1452     1  0 Jul04 ?        00:00:00 /lib/systemd/systemd --user
bibin     1478  1452  0 Jul04 ?        00:00:57 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation
root      2450     1  0 Jul04 ?        00:00:00 /lib/systemd/systemd --user
root      2459 27405  0 16:51 pts/7    00:00:00 grep --color=auto systemd
root      2475  2450  0 Jul04 ?        00:00:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation
{noformat}
*Orphan process* whose parent is not *1*:
{noformat}
root@bibinpc:~# ps -ef | grep sleep
bibin     3342  1545  0 Jul05 ?        00:00:00 sleep infinity
root     25169  1452  0 14:03 ?        00:00:00 sleep 300
{noformat}

The orphan is not attached to a parent based on *session ID*, because its ppid 
(1452, the per-user {{systemd --user}} instance) is not {{1}}, so the fallback 
below never fires:
{code}
if (!pID.equals("1")) {
  ProcessInfo pInfo = entry.getValue();
  String ppid = pInfo.getPpid();
  // If parent is init and process is not session leader,
  // attach to sessionID
  if (ppid.equals("1")) {
    String sid = pInfo.getSessionId().toString();
    if (!pID.equals(sid)) {
      ppid = sid;
    }
  }
{code}
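
An illustrative trace of the snippet against the ps output above (a sketch, 
not from an actual test run):
{code}
// For "sleep 300" (pid 25169): getPpid() returns "1452", the per-user
// systemd manager, so ppid.equals("1") is false. The session-ID fallback
// is skipped and the orphan escapes the process tree.
String ppid = pInfo.getPpid();        // "1452", not "1"
boolean reattach = ppid.equals("1");  // false
{code}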


> TestProcfsBasedProcessTree#testProcessTree fails in trunk
> -
>
> Key: YARN-6763
> URL: https://issues.apache.org/jira/browse/YARN-6763
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Priority: Minor
>
> {code}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 7.949 sec <<< 
> FAILURE! - in org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree
> testProcessTree(org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree)  Time 
> elapsed: 7.119 sec  <<< FAILURE!
> java.lang.AssertionError: Child process owned by init escaped process tree.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.util.TestProcfsBasedProcessTree.testProcessTree(TestProcfsBasedProcessTree.java:184)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076375#comment-16076375
 ] 

Yeliang Cang commented on YARN-6765:


Will fix findbugs and checkstyle warnings later!

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6130:
---
Summary: [ATSv2 Security] Generate a delegation token for AM when app 
collector is created and pass it to AM via NM and RM  (was: [Security] Generate 
a delegation token for AM when app collector is created and pass it to AM via 
NM and RM)

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6130) [Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076305#comment-16076305
 ] 

Varun Saxena edited comment on YARN-6130 at 7/6/17 10:22 AM:
-

bq. One high level comment on the patch: to the AM, we are sending the complete 
AppCollectorDataProto. This contains a lot of unnecessary information which the 
AM does not need. I think it would be better to create another meta object 
inside AppCollectorDataProto holding only what needs to be sent to the AM?
We can send the version info, as it makes it easy for the AM to update the 
token only when it changes. But in the future we may add something to 
AppCollectorData that does not need to be sent to the AM, so the suggestion 
makes sense. Will do it after the QA report.
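
For illustration only, the trimmed object could look something like this 
(hypothetical names, not from the patch):
{code}
// Sketch: carries just what the AM needs from AppCollectorData.
public final class AmCollectorInfo {
  private final String collectorAddr;  // where the AM should publish entities
  private final Token collectorToken;  // delegation token for the collector
  private final long version;          // lets the AM refresh the token only
                                       // when it actually changes

  AmCollectorInfo(String collectorAddr, Token collectorToken, long version) {
    this.collectorAddr = collectorAddr;
    this.collectorToken = collectorToken;
    this.version = version;
  }
}
{code}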


was (Author: varun_saxena):
bq. One high level comment on the patch: to the AM, we are sending the complete 
AppCollectorDataProto. This contains a lot of unnecessary information which the 
AM does not need. I think it would be better to create another meta object 
inside AppCollectorDataProto holding only what needs to be sent to the AM?
Makes sense. Will do it after the QA report.

> [Security] Generate a delegation token for AM when app collector is created 
> and pass it to AM via NM and RM
> ---
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076305#comment-16076305
 ] 

Varun Saxena commented on YARN-6130:


bq. One high level comment on the patch: to the AM, we are sending the complete 
AppCollectorDataProto. This contains a lot of unnecessary information which the 
AM does not need. I think it would be better to create another meta object 
inside AppCollectorDataProto holding only what needs to be sent to the AM?
Makes sense. Will do it after the QA report.

> [Security] Generate a delegation token for AM when app collector is created 
> and pass it to AM via NM and RM
> ---
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6130) [Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6130:
---
Attachment: YARN-6130-YARN-5355.02.patch

Rebased the patch.

> [Security] Generate a delegation token for AM when app collector is created 
> and pass it to AM via NM and RM
> ---
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076270#comment-16076270
 ] 

Hadoop QA commented on YARN-6765:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
53s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
55s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875887/YARN-6765-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cd4c178f89f3 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16308/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16308/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16308/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16308/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Assigned] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang reassigned YARN-6765:
--

Assignee: Yeliang Cang

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Yeliang Cang
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076218#comment-16076218
 ] 

Yeliang Cang commented on YARN-6765:


Hi [~templedf], I have submitted a patch. Would you take a look? Thanks!
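
In essence the patch just chains the caught exception as the cause; a minimal 
sketch (the surrounding method body here is hypothetical, only the throw 
matches the patch):
{code}
private void initializeControllerPaths() throws ResourceHandlerException {
  try {
    // ... resolve the cgroup controller mount points ...
  } catch (IOException e) {
    // Passing e as the cause keeps the root failure in the stack trace
    // instead of swallowing it.
    throw new ResourceHandlerException(
        "Failed to initialize controller paths!", e);
  }
}
{code}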

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6765) CGroupsHandlerImpl.initializeControllerPaths() should include cause when chaining exceptions

2017-07-06 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6765:
---
Attachment: YARN-6765-001.patch

> CGroupsHandlerImpl.initializeControllerPaths() should include cause when 
> chaining exceptions
> 
>
> Key: YARN-6765
> URL: https://issues.apache.org/jira/browse/YARN-6765
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6765-001.patch
>
>
> This: {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!");{code} should be this: 
> {code}  throw new ResourceHandlerException(
>   "Failed to initialize controller paths!", e);{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16076136#comment-16076136
 ] 

Hadoop QA commented on YARN-6708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 53 unchanged - 1 fixed = 53 total (was 54) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6708 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875865/YARN-6708.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83f5c5bceb4a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 946dd25 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16307/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16307/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16307/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: 

[jira] [Updated] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-07-06 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6708:
---
Attachment: YARN-6708.007.patch

Apologies for missing that comment.
The following changes are made in the latest patch (a sketch of the 
pre-creation idea follows the list):
# Added cleanup to the test class to delete {{basedir}} afterwards
# Precreate the directory and set a wrong permission on the {{USERCACHE}} folder
# Renamed {{USER_CACHE_FOLDER_PERMS}} to {{USERCACHE_FOLDER_PERMS}}
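
A minimal sketch of the pre-creation idea, assuming {{lfs}} is the local 
{{FileContext}} and the permission value is illustrative, not the patch's 
actual constant:
{code}
// Create the usercache directory with explicit permissions so that the
// service umask (e.g. 027) cannot narrow them to 750.
FsPermission perms = new FsPermission((short) 0755);
lfs.mkdir(userCacheDir, perms, true);
// mkdir applies the process umask, so force the exact bits afterwards.
lfs.setPermission(userCacheDir, perms);
{code}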

> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6708.001.patch, YARN-6708.002.patch, 
> YARN-6708.003.patch, YARN-6708.004.patch, YARN-6708.005.patch, 
> YARN-6708.006.patch, YARN-6708.007.patch
>
>
> Configure the umask as *027* for the nodemanager service user
> and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 
> 4 *private* dir localizations the next directory will be *0/14*.
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*.
> The nodemanager user will not be able to check whether the localization path 
> exists.
> {{LocalResourcesTrackerImpl}}
> {code}
> case REQUEST:
>   if (rsrc != null && (!isResourcePresent(rsrc))) {
> LOG.info("Resource " + rsrc.getLocalPath()
> + " is missing, localizing it again");
> removeResource(req);
> rsrc = null;
>   }
>   if (null == rsrc) {
> rsrc = new LocalizedResource(req, dispatcher);
> localrsrc.put(req, rsrc);
>   }
>   break;
> {code}
> *isResourcePresent* will always return false, and the same resource will be 
> localized again under {{0}} with the next unique number.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org