[jira] [Commented] (YARN-4218) Metric for resource*time that was preempted

2016-10-31 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624458#comment-15624458
 ] 

Chang Li commented on YARN-4218:


[~eepayne] hmm, there are tons of javadoc errors on the protocol classes for 
missing descriptions of why YarnException and IOException are thrown. My 
changes never touch those protocols, so I am not sure why those errors are 
generated for me. Filling in those thousands of missing javadocs for 
exceptions and params is probably worth a feature of its own... Also, since I 
didn't implement those protocols, it's hard for me to write correct 
descriptions...

> Metric for resource*time that was preempted
> ---
>
> Key: YARN-4218
> URL: https://issues.apache.org/jira/browse/YARN-4218
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, 
> YARN-4218.2.patch, YARN-4218.3.patch, YARN-4218.4.patch, YARN-4218.5.patch, 
> YARN-4218.branch-2.2.patch, YARN-4218.branch-2.patch, YARN-4218.patch, 
> YARN-4218.trunk.2.patch, YARN-4218.trunk.3.patch, YARN-4218.trunk.patch, 
> YARN-4218.wip.patch, screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> After YARN-415 we have the ability to track the resource*time footprint of a 
> job and preemption metrics shows how many containers were preempted on a job. 
> However we don't have a metric showing the resource*time footprint cost of 
> preemption. In other words, we know how many containers were preempted but we 
> don't have a good measure of how much work was lost as a result of preemption.
> We should add this metric so we can analyze how much work preemption is 
> costing on a grid and better track which jobs were heavily impacted by it. A 
> job that has 100 containers preempted that only lasted a minute each and were 
> very small is going to be less impacted than a job that only lost a single 
> container but that container was huge and had been running for 3 days.
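
To make the proposed metric concrete, here is a minimal sketch of the 
accounting the description implies; the class and method names are 
hypothetical, not the ones in the attached patches.

{code:java}
// Hypothetical sketch (not the attached patches): on each preemption, charge
// the application the container's resources multiplied by its lifetime.
public class PreemptionCostTracker {
  private long memorySecondsPreempted;
  private long vcoreSecondsPreempted;

  public synchronized void containerPreempted(long memoryMb, int vcores,
      long startTimeMs, long preemptTimeMs) {
    long lifetimeSec = (preemptTimeMs - startTimeMs) / 1000;
    memorySecondsPreempted += memoryMb * lifetimeSec;
    vcoreSecondsPreempted += vcores * lifetimeSec;
  }

  // 100 small one-minute containers accrue far fewer memory-seconds here
  // than one huge container preempted after running for 3 days.
  public synchronized long getMemorySecondsPreempted() {
    return memorySecondsPreempted;
  }
}
{code}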



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624448#comment-15624448
 ] 

Sunil G commented on YARN-5545:
---

Hi [~bibinchundatt] and [~Naganarasimha Garla]

The current approach in the patch looks fine. I also think a cluster-level max 
check can be added to protect the system from overshooting max-applications. I 
have not looked at the patch in detail; will do that today.

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}
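
For context, a simplified sketch of the arithmetic behind the error above, 
assuming the per-queue application limit is derived from the default-partition 
capacity only (illustrative numbers, not the actual LeafQueue source):

{code:java}
// Illustrative arithmetic only, not the actual LeafQueue source: the
// per-queue application limit is derived from the default partition's
// absolute capacity, ignoring the queue's label capacities.
int maxSystemApps = 10000;      // yarn.scheduler.capacity.maximum-applications
float absoluteCapacity = 0.0f;  // root.default.capacity = 0
int maxApplications = (int) (maxSystemApps * absoluteCapacity);  // == 0
// Every submission to root.default is then rejected with "already has 0
// applications", even though the queue holds 50% of partition labelx.
{code}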



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5797) Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624364#comment-15624364
 ] 

Hadoop QA commented on YARN-5797:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 5 new + 311 unchanged - 6 fixed = 316 total (was 317) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5797 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836272/YARN-5797-trunk-v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d6e695eefef3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7ba74be |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13713/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13713/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13713/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches
> --
>
>

[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-10-31 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: opp-container.png
all-nodes.png

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, all-nodes.png, opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-10-31 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: (was: all-nodes.png)

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-10-31 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: (was: opp-container.png)

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-10-31 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: opp-container.png
all-nodes.png

Attaching two screenshots. The first is from the nodes page, showing a 
snapshot of the cluster with both guaranteed and opportunistic containers 
running, as well as some additional containers queued at the node.
The second shows the details of a specific container, where the execution type 
is now included ("OPPORTUNISTIC" in this case).

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, all-nodes.png, opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-10-31 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: YARN-2995.003.patch

Adding a new version of the patch.
Fixed some more problems and the checkstyle issues, and added the execution 
type information to the container page.
The unit test that was failing looks unrelated.

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5805) Add isDebugEnabled check in ContainersMonitorImpl for debug log

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624328#comment-15624328
 ] 

Hadoop QA commented on YARN-5805:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 228 unchanged - 3 fixed = 228 total (was 231) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836270/YARN-5805.0003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c77a653f4dec 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7ba74be |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13712/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13712/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add isDebugEnabled check in ContainersMonitorImpl for debug log  
> -
>
> Key: YARN-5805
> URL: https://issues.apache.org/jira/browse/YARN-5805
> 

[jira] [Comment Edited] (YARN-5797) Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches

2016-10-31 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624305#comment-15624305
 ] 

Chris Trezzo edited comment on YARN-5797 at 11/1/16 4:27 AM:
-

Attaching v1 patch to get a qa run. Summary:

# Added metrics to {{NodeManagerMetrics}} that will expose stats from 
{{LocalCacheCleanerStats}}.
# Adjusted {{ResourceLocalizationService}} constructor to take a 
{{NodeManagerMetrics}} param. Also adjusted {{handleCacheCleanup}} to update 
the new metrics.
# Adjusted {{TestLocalCacheCleanup}} to cover metrics as well.
# Refactored other unit tests to adjust for change in 
{{ResourceLocalizationService}} constructor.


was (Author: ctrezzo):
Attaching v1 patch to get a qa run. Summary:

# Added metrics that expose stats from {{LocalCacheCleanerStats}}.
# Adjusted {{TestLocalCacheCleanup}} to cover metrics as well.
# Refactored other unit tests to adjust for change in 
{{ResourceLocalizationService}} constructor to pass in {{NodeManagerMetrics}}.

> Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches
> --
>
> Key: YARN-5797
> URL: https://issues.apache.org/jira/browse/YARN-5797
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5797-trunk-v1.patch
>
>
> Add new metrics to the node manager around the local cache sizes and how much 
> is being cleaned from them on a regular basis. For example, we can expose 
> information contained in the {{LocalCacheCleanerStats}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5797) Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches

2016-10-31 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-5797:
---
Attachment: YARN-5797-trunk-v1.patch

Attaching v1 patch to get a qa run. Summary:

# Added metrics that expose stats from {{LocalCacheCleanerStats}}.
# Adjusted {{TestLocalCacheCleanup}} to cover metrics as well.
# Refactored other unit tests to adjust for change in 
{{ResourceLocalizationService}} constructor to pass in {{NodeManagerMetrics}}.
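
For illustration, a minimal sketch of what exposing 
{{LocalCacheCleanerStats}} through Hadoop's metrics2 annotations could look 
like; the class, field, and method names below are hypothetical, not the ones 
in YARN-5797-trunk-v1.patch.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical names, for illustration only.
@Metrics(about = "NodeManager local cache cleanup metrics", context = "yarn")
public class CacheCleanupMetrics {
  @Metric("Total bytes deleted from the local cache")
  MutableCounterLong totalBytesDeleted;
  @Metric("Bytes deleted from the PUBLIC cache")
  MutableCounterLong publicBytesDeleted;
  @Metric("Bytes deleted from the PRIVATE caches")
  MutableCounterLong privateBytesDeleted;

  // Would be called from handleCacheCleanup() with the cleaner's totals.
  void update(long total, long publicBytes, long privateBytes) {
    totalBytesDeleted.incr(total);
    publicBytesDeleted.incr(publicBytes);
    privateBytesDeleted.incr(privateBytes);
  }
}
{code}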

> Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches
> --
>
> Key: YARN-5797
> URL: https://issues.apache.org/jira/browse/YARN-5797
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5797-trunk-v1.patch
>
>
> Add new metrics to the node manager around the local cache sizes and how much 
> is being cleaned from them on a regular basis. For example, we can expose 
> information contained in the {{LocalCacheCleanerStats}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5805) Add isDebugEnabled check in ContainersMonitorImpl for debug log

2016-10-31 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5805:
---
Attachment: YARN-5805.0003.patch

A few of the cases where logging creates no new string objects have been 
skipped.

> Add isDebugEnabled check in ContainersMonitorImpl for debug log  
> -
>
> Key: YARN-5805
> URL: https://issues.apache.org/jira/browse/YARN-5805
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: YARN-5805.0001.patch, YARN-5805.0002.patch, 
> YARN-5805.0003.patch
>
>
> LOG.debug("Tracking ProcessTree " + pId + " for the first time");
> LOG.debug("Constructing ProcessTree for : PID = " + pId
>   + " ContainerId = " + containerId);
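
For reference, the guard the title asks for, sketched on the two statements 
quoted above (the actual patch may differ):

{code:java}
// Sketch of the guard; string concatenation is skipped when DEBUG is off.
if (LOG.isDebugEnabled()) {
  LOG.debug("Tracking ProcessTree " + pId + " for the first time");
  LOG.debug("Constructing ProcessTree for : PID = " + pId
      + " ContainerId = " + containerId);
}
{code}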



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5793) Trim configuration values in DockerLinuxContainerRuntime

2016-10-31 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624242#comment-15624242
 ] 

Tianyin Xu commented on YARN-5793:
--

thanks a lot, [~templedf]!

> Trim configuration values in DockerLinuxContainerRuntime
> 
>
> Key: YARN-5793
> URL: https://issues.apache.org/jira/browse/YARN-5793
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-5793..patch, YARN-5793.0001.patch
>
>
> The current implementation of {{DockerLinuxContainerRuntime}} does not follow 
> the practice of trimming configuration values. This leads to errors if users 
> set values containing spaces or newlines.
> see the following YARN commits as reference:
> YARN-3395. FairScheduler: Trim whitespaces when using username for queuename.
> YARN-2869. CapacityScheduler should trim sub queue names when parse 
> configuration.
> YARN-2843. Fixed NodeLabelsManager to trim inputs for hosts and labels so as 
> to make them work correctly.
> and many other Hadoop/HDFS commits (just list a few):
> HDFS-9708. FSNamesystem.initAuditLoggers() doesn't trim classnames
> HDFS-2799. Trim fs.checkpoint.dir values.
> HADOOP-6578. Configuration should trim whitespace around a lot of value types
> HADOOP-6534. Trim whitespace from directory lists initializing
> Patch is available against trunk
> {code:title=DockerLinuxContainerRuntime.java|borderStyle=solid}
> @@ -219,9 +219,9 @@ public void initialize(Configuration conf)
>  dockerClient = new DockerClient(conf);
>  allowedNetworks.clear();
>  allowedNetworks.addAll(Arrays.asList(
> -
> conf.getStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
> +
> conf.getTrimmedStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
>  
> YarnConfiguration.DEFAULT_NM_DOCKER_ALLOWED_CONTAINER_NETWORKS)));
> -defaultNetwork = conf.get(
> +defaultNetwork = conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_DEFAULT_CONTAINER_NETWORK,
>  YarnConfiguration.DEFAULT_NM_DOCKER_DEFAULT_CONTAINER_NETWORK);
>  
> @@ -237,7 +237,7 @@ public void initialize(Configuration conf)
>throw new ContainerExecutionException(message);
>  }
>  
> -privilegedContainersAcl = new AccessControlList(conf.get(
> +privilegedContainersAcl = new AccessControlList(conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_PRIVILEGED_CONTAINERS_ACL,
>  YarnConfiguration.DEFAULT_NM_DOCKER_PRIVILEGED_CONTAINERS_ACL));
>}
> @@ -439,7 +439,7 @@ public void launchContainer(ContainerRuntimeContext ctx)
>  LOCALIZED_RESOURCES);
>  @SuppressWarnings("unchecked")
>  List<String> userLocalDirs = ctx.getExecutionAttribute(USER_LOCAL_DIRS);
> -Set<String> capabilities = new HashSet<>(Arrays.asList(conf.getStrings(
> +Set<String> capabilities = new 
> HashSet<>(Arrays.asList(conf.getTrimmedStrings(
>  YarnConfiguration.NM_DOCKER_CONTAINER_CAPABILITIES,
>  YarnConfiguration.DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES)));
> {code}
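
A minimal, self-contained illustration of the difference the patch relies on; 
{{getStrings}}/{{getTrimmedStrings}} are the real Configuration API, but the 
property name and values here are made up:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Made-up property name; getStrings/getTrimmedStrings are the real API.
Configuration conf = new Configuration(false);
conf.set("yarn.example.networks", " host,\nbridge ");
String[] raw = conf.getStrings("yarn.example.networks");
// raw == {" host", "\nbridge "}  -> "\nbridge " matches no real network
String[] clean = conf.getTrimmedStrings("yarn.example.networks");
// clean == {"host", "bridge"}
{code}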



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-10-31 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15624232#comment-15624232
 ] 

Sunil G commented on YARN-2009:
---

Thank you very much [~leftnoteasy] for thorough review/commit and suggestions. 
And special thanks to [~eepayne] for the valuable thoughts and thorough 
tests/reviews, really appreciate the same. Also special thanks to [~curino] for 
sharing valuable inputs here.

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5579) Resourcemanager should surface failed state store operation prominently

2016-10-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-5579:
-
Description: 
I found the following in Resourcemanager log when I tried to figure out why 
application got stuck in NEW_SAVING state.
{code}
2016-08-29 18:14:23,486 INFO  recovery.ZKRMStateStore 
(ZKRMStateStore.java:runWithRetries(1242)) - Maxed out ZK retries. Giving up!
2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
(RMStateStore.java:transition(205)) - Error storing app: 
application_1470517915158_0001
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1009)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:1042)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:639)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:201)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:183)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:955)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1036)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1031)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:745)
2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
(RMStateStore.java:notifyStoreOperationFailedInternal(987)) - State store 
operation failed
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:995)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1009)
{code}

The ResourceManager should surface the above error prominently. Subsequent 
application submissions would likely encounter the same error.

  was:
I found the following in Resourcemanager log when I tried to figure out why 
application got stuck in NEW_SAVING state.
{code}
2016-08-29 18:14:23,486 INFO  recovery.ZKRMStateStore 
(ZKRMStateStore.java:runWithRetries(1242)) - Maxed out ZK retries. Giving up!
2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
(RMStateStore.java:transition(205)) - Error storing app: 

[jira] [Commented] (YARN-5796) Convert enums values in service code to upper case and special handling of an error

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623973#comment-15623973
 ] 

Hadoop QA commented on YARN-5796:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
42s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
9s{color} | {color:red} hadoop-yarn-services-api in yarn-native-services 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api:
 The patch generated 0 new + 73 unchanged - 1 fixed = 73 total (was 74) {color} 
|
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
7s{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5796 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836254/YARN-5796-yarn-native-services.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux afa98bf1c147 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / e365477 |
| Default Java | 1.8.0_101 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13711/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api.txt
 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13711/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api-warnings.html
 |
| mvnsite | 

[jira] [Comment Edited] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-10-31 Thread Qiang Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15611108#comment-15611108
 ] 

Qiang Zhang edited comment on YARN-4997 at 11/1/16 12:58 AM:
-

Hi everyone,
This issue is very good. When will this patch be released?
I want to test Ranger with the new YARN version.



was (Author: zhangqiang2):
Hi,everyOne
this issue is very good,when this patch released?
I want to test in the ranger with the new yarn version.


> Update fair scheduler to use pluggable auth provider
> 
>
> Key: YARN-4997
> URL: https://issues.apache.org/jira/browse/YARN-4997
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Tao Jie
> Attachments: YARN-4997-001.patch, YARN-4997-002.patch, 
> YARN-4997-003.patch, YARN-4997-004.patch, YARN-4997-005.patch, 
> YARN-4997-006.patch, YARN-4997-007.patch, YARN-4997-008.patch
>
>
> Now that YARN-3100 has made the authorization pluggable, it should be 
> supported by the fair scheduler.  YARN-3100 only updated the capacity 
> scheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5796) Convert enums values in service code to upper case and special handling of an error

2016-10-31 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5796:

Attachment: YARN-5796-yarn-native-services.002.patch

Based on feedback from [~jianhe], adding slightly more detail to the error 
messages. Also formatted a few lines to limit their length to 80 chars.

> Convert enums values in service code to upper case and special handling of an 
> error
> ---
>
> Key: YARN-5796
> URL: https://issues.apache.org/jira/browse/YARN-5796
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5796-yarn-native-services.001.patch, 
> YARN-5796-yarn-native-services.002.patch
>
>
> Bug fixes -
> - Convert enums values in service code to upper case in line with YARN-5775
> - Elegantly handle the instance/directory exists error during create app (if 
> the app was previously created but is in stopped/failed state)
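
A sketch of the case normalization the first bullet describes; the enum and 
method names are illustrative, not the actual service code:

{code:java}
import java.util.Locale;

// Illustrative names, not the actual service code.
public class States {
  enum DesiredState { STOPPED, STARTED, FAILED }

  static DesiredState parseState(String raw) {
    // Accept "started", "Started", "STARTED", ... uniformly, per YARN-5775.
    return DesiredState.valueOf(raw.trim().toUpperCase(Locale.ENGLISH));
  }
}
{code}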



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-10-31 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623924#comment-15623924
 ] 

Miklos Szegedi commented on YARN-5774:
--

Thank you, [~yufeigu]!

I am wondering if we really need to compare stepFactor to Resources.none() 
here, rather than comparing stepFactor.getMemorySize() to 0. It is an edge 
case, but what if stepFactor.getVirtualCores() is non-zero while 
stepFactor.getMemorySize() is 0? We would avoid throwing an exception in that 
case.

{code}
  public Resource normalize(Resource r, Resource minimumResource,
  Resource maximumResource, Resource stepFactor) {
if (Resources.equals(stepFactor, Resources.none())) {
  throw new YarnRuntimeException("StepFactor resource cannot be zero!");
}

long normalizedMemory = Math.min(
roundUp(
Math.max(r.getMemorySize(), minimumResource.getMemorySize()),
stepFactor.getMemorySize()),
maximumResource.getMemorySize());
return Resources.createResource(normalizedMemory);
  }
{code}
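Spelling out that edge case: {{roundUp}} below mirrors the rounding helper in 
ResourceCalculator (treat it as an illustration, not the exact source), and it 
divides by the step.

{code:java}
// Illustration of ResourceCalculator-style rounding, not the exact source:
static long roundUp(long num, long step) {
  return ((num + step - 1) / step) * step;  // ArithmeticException if step == 0
}
// A stepFactor of <memory: 0, vCores: 1> is not equal to Resources.none(),
// so the guard above does not fire, yet normalize() still calls
// roundUp(..., stepFactor.getMemorySize()) and divides by zero.
{code}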


> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch
>
>
> An MR job gets stuck in ACCEPTED status without any progress in Fair 
> Scheduler because there is no resource request for the AM. This happens when 
> you configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code shared by the Capacity Scheduler and the Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS but not 
> in CS, so the common code in class RMAppManager passes 
> {{yarn.scheduler.minimum-allocation-mb}} as the increment, because CS has no 
> increment of its own, when it normalizes the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5556) Support for deleting queues without requiring a RM restart

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623917#comment-15623917
 ] 

Naganarasimha G R commented on YARN-5556:
-

Thanks [~wangda], 
IIUC its design is in YARN-5724. Earlier I was wondering whether that will 
take a considerable amount of time, hence the idea of letting the basic 
version go into 2.8 and then, based on the new design, achieving it in a 
better way in later versions...
[~xgong], which version are we targeting for this modification?

> Support for deleting queues without requiring a RM restart
> --
>
> Key: YARN-5556
> URL: https://issues.apache.org/jira/browse/YARN-5556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Naganarasimha G R
> Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch, 
> YARN-5556.v1.003.patch, YARN-5556.v1.004.patch
>
>
> Today, we could add or modify queues without restarting the RM, via a CS 
> refresh. But for deleting queue, we have to restart the ResourceManager. We 
> could support for deleting queues without requiring a RM restart



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4396) Log the trace information on FSAppAttempt#assignContainer

2016-10-31 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623906#comment-15623906
 ] 

Yiqun Lin commented on YARN-4396:
-

Thanks [~templedf] for the final review and commit!

> Log the trace information on FSAppAttempt#assignContainer
> -
>
> Key: YARN-4396
> URL: https://issues.apache.org/jira/browse/YARN-4396
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, fairscheduler
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: oct16-easy
> Fix For: 2.9.0
>
> Attachments: YARN-4396.001.patch, YARN-4396.002.patch, 
> YARN-4396.003.patch, YARN-4396.004.patch, YARN-4396.005.patch
>
>
> When I configure yarn.scheduler.fair.locality.threshold.node and 
> yarn.scheduler.fair.locality.threshold.rack to enable this feature, I get no 
> detailed info about the locality of assigned containers. That information is 
> important because these thresholds introduce delay scheduling, which can 
> affect my cluster. If I could see this info, I could tune the params for the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5810) RM Loglevel setting shouldn't return a valid value for a non-existing class

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623902#comment-15623902
 ] 

Naganarasimha G R commented on YARN-5810:
-

IIUC this is a little tricky. Though we ensure all classes obtain their logger 
via {{LoggerFactory.getLogger()}} with the class itself, getLogger is 
overloaded to accept a String too, so anything can be passed as the logger 
name, which is why we currently accept any string. So to enforce "RM Loglevel 
setting shouldn't return a valid value for a non-existing class", we would 
have to go by the assumption that the logger name is always set to the class 
name.
The other disadvantage: later we might want to change the logger name to match 
a group of classes (or use the package name), so that we need not manually 
modify the log level of individual classes.
Thoughts?
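
For concreteness, a sketch of the check under discussion, assuming log4j 1.x 
and the logger-name-equals-class-name convention; the names here are 
illustrative:

{code:java}
import org.apache.log4j.LogManager;

// Illustrative names; assumes loggers are named after their classes.
public final class LoggerNameCheck {
  static boolean isKnownLogger(String name) {
    if (LogManager.exists(name) != null) {  // logger already registered
      return true;
    }
    try {
      Class.forName(name);                  // assume logger name == class name
      return true;
    } catch (ClassNotFoundException e) {
      return false;                         // reject, per the JIRA title
    }
  }
}
{code}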

> RM Loglevel setting shouldn't return a valid value for a non-existing class
> ---
>
> Key: YARN-5810
> URL: https://issues.apache.org/jira/browse/YARN-5810
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The WebUI of the RM Loglevel setting should not return a valid value, like 
> INFO, for a non-existing class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5810) RM Loglevel setting shouldn't return a valid value for a non-existing class

2016-10-31 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-5810:
--

 Summary: RM Loglevel setting shouldn't return a valid value for a 
non-existing class
 Key: YARN-5810
 URL: https://issues.apache.org/jira/browse/YARN-5810
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.0.0-alpha1
Reporter: Yufei Gu
Assignee: Yufei Gu


The WebUI of the RM Loglevel setting should not return a valid value, like 
INFO, for a non-existing class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623875#comment-15623875
 ] 

Naganarasimha G R commented on YARN-5765:
-

I am assuming you are doing {{stat(npath, &sb) != 0}} again after {{if 
(mkdir(npath, perm) != 0)}} in the else block, before the code block you have 
shared...

We have been using the first approach, {{umask(0027)}}, in our code base for a 
while, and I am pretty sure it works fine; why not try that once?

As it's a blocker, I plan to revert the YARN-5287 patch in 2.8; hope we can 
conclude on something soon. 

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-31 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623833#comment-15623833
 ] 

Haibo Chen edited comment on YARN-5765 at 10/31/16 11:53 PM:
-

This is what I am doing.
{code}
if ((sb.st_mode & S_ISUID) != 0) {
  perm = perm | S_ISUID;
}
if ((sb.st_mode & S_ISGID) != 0) {
  perm = perm | S_ISGID;
}
{code}
But from the tests that I have done, it seems like I am still missing 
something; this change alone does not fix everything. I am chasing it down.


was (Author: haibochen):
This is what I am doing.
{code:c}
if ((sb.st_mode & S_ISUID) != 0) {
  perm = perm | S_ISUID;
}
if ((sb.st_mode & S_ISGID) != 0) {
  perm = perm | S_ISGID;
}
{code}
But from the tests that I have done, it seems like I am still missing 
something; this change alone does not fix everything. I am chasing it down.

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-31 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623833#comment-15623833
 ] 

Haibo Chen commented on YARN-5765:
--

This is what I am doing:
{code:c}
/* Carry over the setuid/setgid bits from the existing directory mode. */
if ((sb.st_mode & S_ISUID) != 0) {
  perm = perm | S_ISUID;
}
if ((sb.st_mode & S_ISGID) != 0) {
  perm = perm | S_ISGID;
}
{code}
But from the testing I have done, it seems I am still missing something; this 
change alone does not fix everything. I am chasing it down.

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623831#comment-15623831
 ] 

Naganarasimha G R commented on YARN-4675:
-

Thanks [~vrushalic]. IIUC I was not introducing any new changes for the 
client, just reorganizing the code, so I think it should be fine; in any case, 
I will upload the updated patch at the earliest.

> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, oct16-medium
> Attachments: YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, TimeClientV2Impl, 
> and, if required, a base class, so that it is clear which part of the code 
> belongs to which version and the code is easier to maintain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623830#comment-15623830
 ] 

Wangda Tan commented on YARN-4734:
--

Forgot to mention: ASF license warning and unit test failures are not related 
to the patch.

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.15.patch, 
> YARN-4734.16.patch, YARN-4734.17-rebased.patch, YARN-4734.17.patch, 
> YARN-4734.18.patch, YARN-4734.19.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623824#comment-15623824
 ] 

Naganarasimha G R commented on YARN-5697:
-

Thanks [~wangda]. IMO it doesn't break anything, so I am taking it further; 
besides, it's not in a critical flow path :)
[~Tao Jie], care to rebase the patch as per the conclusion we reached, and 
update the documentation (YARN-5720) accordingly? 
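
For reference, a minimal sketch of option parsing with Apache Commons CLI, the 
kind of CliParser-based parsing proposed here instead of hand-rolled args[] 
indexing; the option names are illustrative, not the actual RMAdminCLI options:

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class RMAdminCliSketch {
  public static void main(String[] args) throws ParseException {
    Options options = new Options();
    options.addOption("refreshQueues", false, "Reload queue configuration");
    options.addOption("addToClusterNodeLabels", true, "Add node labels");

    // The parser validates and groups options, instead of manual indexing.
    CommandLineParser parser = new GnuParser();
    CommandLine cli = parser.parse(options, args);
    if (cli.hasOption("refreshQueues")) {
      System.out.println("would refresh queues");
    }
    if (cli.hasOption("addToClusterNodeLabels")) {
      System.out.println("labels: "
          + cli.getOptionValue("addToClusterNodeLabels"));
    }
  }
}
{code}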

> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623825#comment-15623825
 ] 

Jian He commented on YARN-4597:
---

A few more comments:
- Maybe rename ContainerScheduler#runningContainers to scheduledContainers.
- The ContainerLaunch#killedBeforeStart flag: it looks like the existing flag 
'shouldLaunchContainer' serves the same purpose; can we reuse that? If so, 
container#isMarkedToKill is also not needed.
- The NodeManager#containerScheduler variable is not used; remove it.
- I think this comment is not addressed yet: "In case we exceed the max-queue 
length, we are killing the container directly instead of queueing the 
container, in this case, we should not store the container as queued?"

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623826#comment-15623826
 ] 

Hadoop QA commented on YARN-5716:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
28s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 28s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 146 new + 1466 unchanged - 169 fixed = 1612 total (was 1635) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 
6481 unchanged - 10 fixed = 6481 total (was 6491) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 927 unchanged - 10 fixed = 927 total (was 937) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 24s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does 

[jira] [Commented] (YARN-5765) LinuxContainerExecutor creates appcache and its subdirectories with wrong group owner.

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623818#comment-15623818
 ] 

Naganarasimha G R commented on YARN-5765:
-

Not sure how we can conditionally (on what conditions) set the setgid bit 
before the chmod. Maybe we can understand your approach once you share the 
patch.

> LinuxContainerExecutor creates appcache and its subdirectories with wrong 
> group owner.
> --
>
> Key: YARN-5765
> URL: https://issues.apache.org/jira/browse/YARN-5765
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
>
> LinuxContainerExecutor creates usercache/\{userId\}/appcache/\{appId\} with 
> wrong group owner, causing Log aggregation and ShuffleHandler to fail because 
> node manager process does not have permission to read the files under the 
> directory.
> This can be easily reproduced by enabling LCE and submitting a MR example 
> job. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623807#comment-15623807
 ] 

Jian He commented on YARN-5809:
---

[~varun_saxena], thanks for the review and commit !

> AsyncDispatcher possibly invokes multiple shutdown thread when handling 
> exception
> -
>
> Key: YARN-5809
> URL: https://issues.apache.org/jira/browse/YARN-5809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5809.1.patch
>
>
> In the below exception-handling code, it is possible to launch multiple 
> shutdown threads if there are events left in the queue that cause exceptions 
> to be thrown. 
> {code}
> } catch (Throwable t) {
>   //TODO Maybe log the state of the queue
>   LOG.fatal("Error in dispatcher thread", t);
>   // If serviceStop is called, we should exit this thread gracefully.
>   if (exitOnDispatchException
>   && (ShutdownHookManager.get().isShutdownInProgress()) == false
>   && stopped == false) {
> Thread shutDownThread = new Thread(createShutDownThread());
> shutDownThread.setName("AsyncDispatcher ShutDown handler");
> shutDownThread.start();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-10-31 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623804#comment-15623804
 ] 

Sangjin Lee commented on YARN-5265:
---

To be clear, looking at {{TimelineStorageUtils}} does not need to be part of 
this JIRA. I meant it more as a future to-do item.

> Make HBase configuration for the timeline service configurable
> --
>
> Key: YARN-5265
> URL: https://issues.apache.org/jira/browse/YARN-5265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>  Labels: YARN-5355, oct16-medium
> Attachments: ATS v2 cluster deployment v1.png, 
> YARN-5265-YARN-2928.01.patch, YARN-5265-YARN-2928.02.patch, 
> YARN-5265-YARN-2928.03.patch, YARN-5265-YARN-2928.04.patch, 
> YARN-5265-YARN-2928.05.patch, YARN-5265-YARN-5355.06.patch, 
> YARN-5265-YARN-5355.07.patch, YARN-5265-YARN-5355.08.patch, 
> YARN-5265-YARN-5355.09.patch, YARN-5265-YARN-5355.10.patch
>
>
> Currently we create "default" HBase configurations; this works as long as the 
> user places the appropriate configuration on the classpath.
> This works fine for a standalone Hadoop cluster.
> However, if a user wants to monitor an HBase cluster and has a separate ATS 
> HBase cluster, then it can become tricky to create the right classpath for 
> the nodemanagers and still have tasks keep their separate configs.
> It will be much easier to add a YARN configuration to let cluster admins 
> configure which HBase cluster to write ATS metrics to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4998) Minor cleanup to UGI use in AdminService

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623796#comment-15623796
 ] 

Hudson commented on YARN-4998:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10738 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10738/])
YARN-4998. Minor cleanup to UGI use in AdminService. (Daniel Templeton (kasha: 
rev 733aa993134ba324c712590fa92b8ef230b0839a)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java


> Minor cleanup to UGI use in AdminService
> 
>
> Key: YARN-4998
> URL: https://issues.apache.org/jira/browse/YARN-4998
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4998.001.patch, YARN-4998.002.patch
>
>
> Instead of calling {{UserGroupInformation.getCurrentUser()}} over and over, 
> we should just use the stored {{daemonUser}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623793#comment-15623793
 ] 

Hudson commented on YARN-5809:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10738 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10738/])
YARN-5809. AsyncDispatcher possibly invokes multiple shutdown threads 
(varunsaxena: rev 07ab89e8bb3f647cef4f80f39237169a0c6a8520)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java


> AsyncDispatcher possibly invokes multiple shutdown thread when handling 
> exception
> -
>
> Key: YARN-5809
> URL: https://issues.apache.org/jira/browse/YARN-5809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5809.1.patch
>
>
> In the below exception-handling code, it is possible to launch multiple 
> shutdown threads if there are events left in the queue that cause exceptions 
> to be thrown. 
> {code}
> } catch (Throwable t) {
>   //TODO Maybe log the state of the queue
>   LOG.fatal("Error in dispatcher thread", t);
>   // If serviceStop is called, we should exit this thread gracefully.
>   if (exitOnDispatchException
>   && (ShutdownHookManager.get().isShutdownInProgress()) == false
>   && stopped == false) {
> Thread shutDownThread = new Thread(createShutDownThread());
> shutDownThread.setName("AsyncDispatcher ShutDown handler");
> shutDownThread.start();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5545) App submit failure on queue with label when default queue partition capacity is zero

2016-10-31 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623788#comment-15623788
 ] 

Naganarasimha G R commented on YARN-5545:
-

Hi [~bibinchundatt],
Seems like there are no thoughts from others on this yet, so I think we can go 
ahead with the existing approach. Its drawback is that we cannot set the 
global default max for only a few queues (those with a default queue partition 
capacity of 0); it gets enforced for all of them. If required, we can 
introduce a new config later.
Additionally, as we were discussing earlier, can you add a check to ensure 
that the total number of applications does not exceed the cluster max 
applications? 
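
A minimal sketch of the kind of guard meant here; the names are illustrative, 
not the actual CapacityScheduler code:

{code:java}
public class MaxAppsGuard {
  // Cap a queue's derived max-applications so the total across queues can
  // never exceed the cluster-wide maximum.
  static int capQueueMaxApps(int derivedQueueMaxApps,
                             int alreadyAssignedAcrossQueues,
                             int clusterMaxApps) {
    int remaining = Math.max(0, clusterMaxApps - alreadyAssignedAcrossQueues);
    return Math.min(derivedQueueMaxApps, remaining);
  }

  public static void main(String[] args) {
    // Cluster max 10000 with 9800 already assigned: this queue gets 200.
    System.out.println(capQueueMaxApps(500, 9800, 10000));
  }
}
{code}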

> App submit failure on queue with label when default queue partition capacity 
> is zero
> 
>
> Key: YARN-5545
> URL: https://issues.apache.org/jira/browse/YARN-5545
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Attachments: YARN-5545.0001.patch, YARN-5545.0002.patch, 
> YARN-5545.0003.patch, YARN-5545.004.patch, capacity-scheduler.xml
>
>
> Configure capacity scheduler 
> yarn.scheduler.capacity.root.default.capacity=0
> yarn.scheduler.capacity.root.queue1.accessible-node-labels.labelx.capacity=50
> yarn.scheduler.capacity.root.default.accessible-node-labels.labelx.capacity=50
> Submit application as below
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-alpha2-SNAPSHOT-tests.jar
>  sleep -Dmapreduce.job.node-label-expression=labelx 
> -Dmapreduce.job.queuename=default -m 1 -r 1 -mt 1000 -rt 1
> {noformat}
> 2016-08-21 18:21:31,375 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/root/.staging/job_1471670113386_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed 
> to submit application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:255)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1344)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1790)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1341)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1362)
>   at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>   at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>   at 
> org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:136)
>   at 
> org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit 
> application_1471670113386_0001 to YARN : 
> org.apache.hadoop.security.AccessControlException: Queue root.default already 
> has 0 applications, cannot accept submission of application: 
> application_1471670113386_0001
>   at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:286)
>   at 
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:296)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:301)
>   ... 25 more
> {noformat}

[jira] [Commented] (YARN-5391) FederationPolicy implementations (tieing together RouterFederationPolicy and AMRMProxyFederationPolicy)

2016-10-31 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623787#comment-15623787
 ] 

Subru Krishnan commented on YARN-5391:
--

Thanks [~curino]. +1 on the latest patch.

> FederationPolicy implementations (tieing together RouterFederationPolicy and 
> AMRMProxyFederationPolicy)
> ---
>
> Key: YARN-5391
> URL: https://issues.apache.org/jira/browse/YARN-5391
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-hard
> Attachments: YARN-5391-YARN-2915.04.patch, 
> YARN-5391-YARN-2915.05.patch, YARN-5391-YARN-2915.06.patch, 
> YARN-5391-YARN-2915.07.patch, YARN-5391.01.patch, YARN-5391.02.patch, 
> YARN-5391.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5391) FederationPolicy implementations (tieing together RouterFederationPolicy and AMRMProxyFederationPolicy)

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623769#comment-15623769
 ] 

Hadoop QA commented on YARN-5391:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 2s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835934/YARN-5391-YARN-2915.07.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cacf9a5c12fa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 116eb1d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13710/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13710/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FederationPolicy implementations (tieing together RouterFederationPolicy and 
> AMRMProxyFederationPolicy)
> ---
>
> Key: YARN-5391
> URL: https://issues.apache.org/jira/browse/YARN-5391
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: 

[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623757#comment-15623757
 ] 

Hadoop QA commented on YARN-5774:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 299 unchanged - 9 fixed = 299 total (was 308) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 927 unchanged - 11 fixed = 927 total (was 938) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
34s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836235/YARN-5774.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f9073096be1a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 773c60b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 

[jira] [Commented] (YARN-4998) Minor cleanup to UGI use in AdminService

2016-10-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623712#comment-15623712
 ] 

Karthik Kambatla commented on YARN-4998:


+1, checking this in.. 
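
For reference, a minimal sketch of the caching pattern the summary below 
describes; the class and method names are illustrative, not the actual 
AdminService code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class DaemonUserHolder {
  private UserGroupInformation daemonUser;

  public void init() throws IOException {
    // Resolve the daemon user once, at service init time...
    daemonUser = UserGroupInformation.getCurrentUser();
  }

  public UserGroupInformation getDaemonUser() {
    // ...and reuse the stored instance instead of repeated lookups.
    return daemonUser;
  }
}
{code}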

> Minor cleanup to UGI use in AdminService
> 
>
> Key: YARN-4998
> URL: https://issues.apache.org/jira/browse/YARN-4998
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4998.001.patch, YARN-4998.002.patch
>
>
> Instead of calling {{UserGroupInformation.getCurrentUser()}} over and over, 
> we should just use the stored {{daemonUser}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5685) Non-embedded HA failover is broken

2016-10-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623708#comment-15623708
 ] 

Karthik Kambatla commented on YARN-5685:


Would it make sense to address the first two items of YARN-5709 here, namely:
# By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
the Curator-based elector also runs embedded, I feel the code should check for 
!curatorBased instead of isEmbeddedElector.
# LeaderElectorService should probably be named 
CuratorBasedEmbeddedElectorService or some such.
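
A minimal sketch of that distinction; the config key is quoted from memory and 
should be treated as illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ElectorSelection {
  static boolean useCuratorElector(Configuration conf) {
    // Branch on whether the Curator-based elector is enabled, rather than on
    // an "embedded" flag (both electors run embedded in the RM daemon).
    return conf.getBoolean(
        "yarn.resourcemanager.ha.curator-leader-elector.enabled", false);
  }

  static String electorServiceName(Configuration conf) {
    // Item 2 above: a clearer name for the Curator-based embedded elector.
    return useCuratorElector(conf)
        ? "CuratorBasedEmbeddedElectorService"
        : "ActiveStandbyElectorBasedElectorService";
  }
}
{code}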

> Non-embedded HA failover is broken
> --
>
> Key: YARN-5685
> URL: https://issues.apache.org/jira/browse/YARN-5685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-hard
> Attachments: YARN-5685.001.patch, YARN-5685.002.patch
>
>
> If HA is enabled with automatic failover enabled and embedded failover 
> disabled, all RMs all come up in standby state.  To make one of them active, 
> the {{--forcemanual}} flag must be used when manually triggering the state 
> change.  Should the active go down, the standby will not become active and 
> must be manually transitioned with the {{--forcemanual}} flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5716:
-
Attachment: YARN-5716.0012.patch

Rebased to latest trunk (v12).

> Add global scheduler interface definition and update CapacityScheduler to use 
> it.
> -
>
> Key: YARN-5716
> URL: https://issues.apache.org/jira/browse/YARN-5716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-hard
> Attachments: YARN-5716.001.patch, YARN-5716.0012.patch, 
> YARN-5716.002.patch, YARN-5716.003.patch, YARN-5716.004.patch, 
> YARN-5716.005.patch, YARN-5716.006.patch, YARN-5716.007.patch, 
> YARN-5716.008.patch, YARN-5716.009.patch, YARN-5716.010.patch, 
> YARN-5716.011.patch
>
>
> Target of this JIRA:
> - Definition of interfaces / objects which will be used by global scheduling, 
> this will be shared by different schedulers.
> - Modify CapacityScheduler to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623652#comment-15623652
 ] 

Varun Saxena edited comment on YARN-5809 at 10/31/16 10:43 PM:
---

Thanks [~jianhe] for the patch.
IIUC, even if an unnecessary additional shutdown thread is created, it would 
be destroyed when the JVM exits.
But it makes sense to set the stopped flag, not let further events be 
processed when the RM is shutting down, and not start extra shutdown threads.

+1. Will commit it.


was (Author: varun_saxena):
Thanks [~jianhe] for the patch.
IIUC, even if an unnecessary additional shutdown thread is created, it would 
be destroyed when the JVM exits.
But it makes sense to set the stopped flag and not let further events be 
processed when the RM is shutting down.

+1. Will commit it.

> AsyncDispatcher possibly invokes multiple shutdown thread when handling 
> exception
> -
>
> Key: YARN-5809
> URL: https://issues.apache.org/jira/browse/YARN-5809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5809.1.patch
>
>
> In the below exception-handling code, it is possible to launch multiple 
> shutdown threads if there are events left in the queue that cause exceptions 
> to be thrown. 
> {code}
> } catch (Throwable t) {
>   //TODO Maybe log the state of the queue
>   LOG.fatal("Error in dispatcher thread", t);
>   // If serviceStop is called, we should exit this thread gracefully.
>   if (exitOnDispatchException
>   && (ShutdownHookManager.get().isShutdownInProgress()) == false
>   && stopped == false) {
> Thread shutDownThread = new Thread(createShutDownThread());
> shutDownThread.setName("AsyncDispatcher ShutDown handler");
> shutDownThread.start();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3645) ResourceManager can't start success if attribute value of "aclSubmitApps" is null in fair-scheduler.xml

2016-10-31 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623659#comment-15623659
 ] 

Gabor Liptak commented on YARN-3645:


[~kkaranasos] I reviewed the failed test 
`TestRMRestart.testFinishedAppRemovalAfterRMRestart()` and I do not see it 
being related.
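
Separately, for the NPE itself, a minimal sketch of the null-safe parsing this 
bug calls for; illustrative only, not the actual AllocationFileLoaderService 
code:

{code:java}
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class AclParseSketch {
  static String readSubmitAcl(Element queue) {
    NodeList nodes = queue.getElementsByTagName("aclSubmitApps");
    // An absent element, or an empty one, has no text child: dereferencing
    // getFirstChild() blindly is exactly the NPE reported here.
    if (nodes.getLength() == 0 || nodes.item(0).getFirstChild() == null) {
      return " "; // fall back to a safe default ACL
    }
    return nodes.item(0).getFirstChild().getNodeValue().trim();
  }
}
{code}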

> ResourceManager can't start success if  attribute value of "aclSubmitApps" is 
> null in fair-scheduler.xml
> 
>
> Key: YARN-3645
> URL: https://issues.apache.org/jira/browse/YARN-3645
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: zhoulinlin
>Assignee: Gabor Liptak
>  Labels: oct16-easy
> Attachments: YARN-3645.1.patch, YARN-3645.2.patch, YARN-3645.3.patch, 
> YARN-3645.4.patch, YARN-3645.5.patch, YARN-3645.patch
>
>
> The "aclSubmitApps" is configured in fair-scheduler.xml like below:
> 
> 
>  
> The resourcemanager log:
> {noformat}
> 2015-05-14 12:59:48,623 INFO org.apache.hadoop.service.AbstractService: 
> Service ResourceManager failed in state INITED; cause: 
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: Failed 
> to initialize FairScheduler
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: Failed 
> to initialize FairScheduler
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:493)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:920)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:240)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1159)
> Caused by: java.io.IOException: Failed to initialize FairScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1301)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1318)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   ... 7 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:458)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:337)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1299)
>   ... 9 more
> 2015-05-14 12:59:48,623 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning 
> to standby state
> 2015-05-14 12:59:48,623 INFO 
> com.zte.zdh.platformplugin.factory.YarnPlatformPluginProxyFactory: plugin 
> transitionToStandbyIn
> 2015-05-14 12:59:48,623 WARN org.apache.hadoop.service.AbstractService: When 
> stopping the service ResourceManager : java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> com.zte.zdh.platformplugin.factory.YarnPlatformPluginProxyFactory.transitionToStandbyIn(YarnPlatformPluginProxyFactory.java:71)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:997)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1058)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>   at 
> org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:171)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1159)
> 2015-05-14 12:59:48,623 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: Failed 
> to initialize FairScheduler
>   at 
> 

[jira] [Commented] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623660#comment-15623660
 ] 

Hudson commented on YARN-2009:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10737 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10737/])
YARN-2009. CapacityScheduler: Add intra-queue preemption for app (wangda: rev 
90dd3a8148468ac37a3f2173ad8d45e38bfcb0c9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/PreemptionCandidatesSelector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoCandidatesSelector.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/AbstractPreemptionEntity.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempAppPerPartition.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueuePreemptionComputePlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueueCandidatesSelector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/AbstractPreemptableResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/PreemptableResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java


> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> 

[jira] [Commented] (YARN-2467) Add SpanReceiverHost to ResourceManager

2016-10-31 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623657#comment-15623657
 ] 

Masatake Iwasaki commented on YARN-2467:


Thanks for pinging me, [~ozawa]. I will update the patch; I need to migrate 
from the htrace3 API to htrace4. 

> Add SpanReceiverHost to ResourceManager
> ---
>
> Key: YARN-2467
> URL: https://issues.apache.org/jira/browse/YARN-2467
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, resourcemanager
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>  Labels: oct16-easy
> Attachments: YARN-2467.001.patch, YARN-2467.002.patch
>
>
> Per process SpanReceiverHost should be initialized in ResourceManager in the 
> same way as NameNode and DataNode do in order to support tracing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623652#comment-15623652
 ] 

Varun Saxena commented on YARN-5809:


Thanks [~jianhe] for the patch.
IIUC, even if an unnecessary additional shutdown thread is created, it would 
be destroyed when the JVM exits.
But it makes sense to set the stopped flag and not let further events be 
processed when the RM is shutting down.

+1. Will commit it.
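
A minimal sketch of the single-shutdown-thread guard in question: a 
compare-and-set ensures that even if several queued events throw in 
succession, only one shutdown thread is spawned. This is a simplified 
stand-in, not the actual AsyncDispatcher patch:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownGuardSketch {
  private final AtomicBoolean shutdownStarted = new AtomicBoolean(false);

  void onDispatchError(Throwable t) {
    // Only the first failing event wins the CAS and spawns the thread.
    if (shutdownStarted.compareAndSet(false, true)) {
      Thread shutDownThread = new Thread(() -> System.exit(-1));
      shutDownThread.setName("AsyncDispatcher ShutDown handler");
      shutDownThread.start();
    }
  }
}
{code}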

> AsyncDispatcher possibly invokes multiple shutdown thread when handling 
> exception
> -
>
> Key: YARN-5809
> URL: https://issues.apache.org/jira/browse/YARN-5809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5809.1.patch
>
>
> In the below exception-handling code, it is possible to launch multiple 
> shutdown threads if there are events left in the queue that cause exceptions 
> to be thrown. 
> {code}
> } catch (Throwable t) {
>   //TODO Maybe log the state of the queue
>   LOG.fatal("Error in dispatcher thread", t);
>   // If serviceStop is called, we should exit this thread gracefully.
>   if (exitOnDispatchException
>   && (ShutdownHookManager.get().isShutdownInProgress()) == false
>   && stopped == false) {
> Thread shutDownThread = new Thread(createShutDownThread());
> shutDownThread.setName("AsyncDispatcher ShutDown handler");
> shutDownThread.start();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623597#comment-15623597
 ] 

Hudson commented on YARN-5800:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10736 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10736/])
YARN-5800. Delete LinuxContainerExecutor comment from yarn-default.xml 
(templedf: rev 773c60bd7bd00651dc3016799b424b9bd2233eb3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Assignee: Jan Hentschel
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> {{yarn.nodemanager.container-executor.class}} property.  Since admins 
> shouldn't typically be modifying this file, this comment isn't useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-10-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2009:
-
Summary: CapacityScheduler: Add intra-queue preemption for app priority 
support  (was: Intra-queue preemption for app priority support 
ProportionalCapacityPreemptionPolicy)

> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue's ideal assignment, we may need 
> to consider preempting the low-priority applications' containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Jan Hentschel (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623566#comment-15623566
 ] 

Jan Hentschel commented on YARN-5800:
-

[~templedf] Thanks. Should I close the pull request?

> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Assignee: Jan Hentschel
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> {{yarn.nodemanager.container-executor.class}} property.  Since admins 
> shouldn't typically be modifying this file, this comment isn't useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-10-31 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5774:
---
Attachment: YARN-5774.002.patch

[~miklos.szeg...@cloudera.com], thanks for the review. I uploaded patch 002 to 
address your comments. For the min > max case, each scheduler does its own 
sanity check while initializing itself. You can find it in {{initScheduler}} -> 
{{validateConf}}.
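For clarity, the fix amounts to passing the scheduler's increment resource, rather than the minimum, as the last argument. A sketch ({{getIncrementResourceCapability()}} is an assumed accessor name, not necessarily the one in the patch):

{code}
// Sketch: pass the increment resource as the last argument so FS rounds
// requests up correctly even when yarn.scheduler.minimum-allocation-mb is 0.
SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
    scheduler.getClusterResource(),
    scheduler.getMinimumResourceCapability(),
    scheduler.getMaximumResourceCapability(),
    scheduler.getIncrementResourceCapability()); // assumed accessor name
{code}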

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch
>
>
> An MR job gets stuck in the ACCEPTED state without any progress in the Fair 
> Scheduler because there is no resource request for the AM. This happens when 
> you configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in code shared by the Capacity Scheduler and the Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS but not in 
> CS, so when the common code in class RMAppManager normalizes the resource 
> requests, it passes {{yarn.scheduler.minimum-allocation-mb}} as the increment, 
> because CS has no increment of its own.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3884) RMContainerImpl transition from RESERVED to KILL apphistory status not updated

2016-10-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623548#comment-15623548
 ] 

Varun Saxena commented on YARN-3884:


[~bibinchundatt], can you rebase the patch? Also, can you add a test to verify 
the behavior?


> RMContainerImpl transition from RESERVED to KILL apphistory status not updated
> --
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1. Submit apps to Queue 1 with 512 MB, 1 core
> 2. Submit apps to Queue 2 with 512 MB and 5 cores
> Lots of containers get reserved and unreserved in this case.
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=1.0 absoluteUsedCapacity=1.0 
> used= cluster=
> 2015-07-02 20:45:32,143 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from 
> ALLOCATED to ACQUIRED
> 2015-07-02 20:45:32,174 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Trying to fulfill reservation for application 

[jira] [Commented] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623546#comment-15623546
 ] 

Hadoop QA commented on YARN-5809:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5809 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836231/YARN-5809.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 98e1d88fa2bc 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a1761a8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13707/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13707/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AsyncDispatcher possibly invokes multiple shutdown thread when handling 
> exception
> -
>
> Key: YARN-5809
> URL: https://issues.apache.org/jira/browse/YARN-5809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5809.1.patch
>
>
> In the code below, which handles exceptions: 

[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-10-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623533#comment-15623533
 ] 

Wangda Tan commented on YARN-5587:
--

Thanks [~vvasudev],

Took a quick look at the patch, some questions/comments:

1) Are the changes to ClusterNodeTrack necessary? 

2) ProfileCapability.toResource:
- The result of the {{if (resourceProfileMap != null)}} block will be 
overwritten by the {{if (capability.getProfile..)}} block.
- Should the order of the two ifs be exchanged?
- Do we have a method to copy a Resource instead of using a for loop to clone 
each ResourceInformation?

3) ResourceRequest / ProfileCapability / Resource
IIUC, one of getProfileCapability and getCapability will be selected based on 
RM_RESOURCE_PROFILES_ENABLED, and 
RMServerUtils#convertProfileToResourceCapability handles the logic that decides 
which one is the effective capability. I have some questions/comments:
- How do we handle a cluster-level profile update? For example, an application 
requests a "small" container and is waiting for it to be allocated. Before the 
container is allocated, an admin updates the capability of the "small" profile 
(can an admin do that?). In this case, should we allocate the container with 
the newly configured resource of the "small" profile?

4) getProfileCapability.getProfileCapabilityOverride is preferred even when 
RM_RESOURCE_PROFILES_ENABLED is false; I'm not sure that is the best solution. 
Here's my suggestion about the API design:
It looks like ProfileCapability#getProfileCapabilityOverride will only be used 
by applications (an admin won't set this field). If that is true, then since 
existing YARN already has two fields for capability -- ProfileCapability and 
Capability -- could we treat the Capability as 
ProfileCapability#getProfileCapabilityOverride? This approach has the following 
benefits:
- It avoids putting an application-only field into the common 
ProfileCapability object, which is used by both applications and admins.
- It gives the application fewer knobs: the application only needs to set 2 
fields (capability, profile) instead of 3 (capability, profile, 
profile.override).
- It also simplifies some service/client logic such as RemoteRequestsTable.
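To make the suggestion concrete, the request shape I have in mind looks roughly like this (hypothetical code; the setter name is an assumption, not the YARN-3926 branch API):

{code}
// Hypothetical illustration only. The application sets just a profile plus
// a plain capability; the capability doubles as the per-request override.
ResourceRequest req = ResourceRequest.newInstance(
    Priority.newInstance(0), ResourceRequest.ANY,
    Resource.newInstance(2048, 2),  // acts as the profile override
    1);
req.setProfile("small");  // assumed setter, replacing ProfileCapability
{code}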


> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>  Labels: oct16-hard
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch, YARN-5587-YARN-3926.005.patch, 
> YARN-5587-YARN-3926.006.patch, YARN-5587-YARN-3926.007.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5800:
---
Assignee: Jan Hentschel

> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Assignee: Jan Hentschel
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> {{yarn.nodemanager.container-executor.class}} property.  Since admins 
> shouldn't typically be modifying this file, this comment isn't useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-10-31 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623498#comment-15623498
 ] 

Jian He commented on YARN-5611:
---

- What is the reason to involve the RMAppImpl state machine to update the 
application timeout? I think it's simpler to update it directly rather than go 
through the RMAppImpl state machine.
- This code
{code}
Map newApplicationExpireTime =
new HashMap();
newApplicationExpireTime.putAll(newParsedTimeouts);
{code}
can be changed to 
{code}
Map newApplicationExpireTime =
new HashMap(newParsedTimeouts);
{code}
- I wonder whether we need a separate wrapper class "AppUpdateAttributes" for 
the timeout; is it fine to just put the application timeout in the 
ApplicationStateData class?
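For completeness, with hypothetical generic types spelled out for illustration, the simplification is just HashMap's copy constructor:

{code}
// Generic types assumed for illustration; the copy constructor replaces
// the separate putAll call.
Map<ApplicationTimeoutType, Long> newApplicationExpireTime =
    new HashMap<>(newParsedTimeouts);
{code}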

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. Add a client 
> API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5809:
--
Attachment: YARN-5809.1.patch

> AsyncDispatcher possibly invokes multiple shutdown thread when handling 
> exception
> -
>
> Key: YARN-5809
> URL: https://issues.apache.org/jira/browse/YARN-5809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5809.1.patch
>
>
> In the code below, which handles exceptions, it is possible to launch 
> multiple shutdown threads if there are more events left in the queue that 
> also throw exceptions. 
> {code}
> } catch (Throwable t) {
>   //TODO Maybe log the state of the queue
>   LOG.fatal("Error in dispatcher thread", t);
>   // If serviceStop is called, we should exit this thread gracefully.
>   if (exitOnDispatchException
>   && (ShutdownHookManager.get().isShutdownInProgress()) == false
>   && stopped == false) {
> Thread shutDownThread = new Thread(createShutDownThread());
> shutDownThread.setName("AsyncDispatcher ShutDown handler");
> shutDownThread.start();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623488#comment-15623488
 ] 

Hadoop QA commented on YARN-5694:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
48s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 115 unchanged - 3 fixed = 116 total (was 118) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5694 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835196/YARN-5694.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f83c5f3087ac 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2528bea |
| Default Java | 1.8.0_101 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/13706/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13706/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13706/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13706/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-5783) Unit tests to verify the identification of starved applications

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623491#comment-15623491
 ] 

Hadoop QA commented on YARN-5783:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
28s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 4 new + 34 unchanged - 0 fixed = 38 total (was 34) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 934 unchanged - 0 fixed = 935 total (was 934) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 
31s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt
 defines equals and uses Object.hashCode()  At 
SchedulerApplicationAttempt.java:Object.hashCode()  At 
SchedulerApplicationAttempt.java:[lines 1211-1216] |
|  |  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp
 doesn't override 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.equals(Object)
  At FiCaSchedulerApp.java:At FiCaSchedulerApp.java:[line 1] |
|  |  org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt 
doesn't override 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.equals(Object)
  At FSAppAttempt.java:At FSAppAttempt.java:[line 1] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5783 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836218/yarn-5783.YARN-4752.3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs 

[jira] [Created] (YARN-5809) AsyncDispatcher possibly invokes multiple shutdown thread when handling exception

2016-10-31 Thread Jian He (JIRA)
Jian He created YARN-5809:
-

 Summary: AsyncDispatcher possibly invokes multiple shutdown thread 
when handling exception
 Key: YARN-5809
 URL: https://issues.apache.org/jira/browse/YARN-5809
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He


In the code below, which handles exceptions, it is possible to launch multiple 
shutdown threads if there are more events left in the queue that also throw 
exceptions. 
{code}
} catch (Throwable t) {
  //TODO Maybe log the state of the queue
  LOG.fatal("Error in dispatcher thread", t);
  // If serviceStop is called, we should exit this thread gracefully.
  if (exitOnDispatchException
  && (ShutdownHookManager.get().isShutdownInProgress()) == false
  && stopped == false) {
Thread shutDownThread = new Thread(createShutDownThread());
shutDownThread.setName("AsyncDispatcher ShutDown handler");
shutDownThread.start();
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5793) Trim configuration values in DockerLinuxContainerRuntime

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623455#comment-15623455
 ] 

Hudson commented on YARN-5793:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10735 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10735/])
YARN-5793. Trim configuration values in DockerLinuxContainerRuntime (templedf: 
rev f3eb4c3c738204e099cbaa03471497c46530efbf)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java


> Trim configuration values in DockerLinuxContainerRuntime
> 
>
> Key: YARN-5793
> URL: https://issues.apache.org/jira/browse/YARN-5793
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-5793..patch, YARN-5793.0001.patch
>
>
> The current implementation of {{DockerLinuxContainerRuntime}} does not follow 
> the practice of trimming configuration values. This leads to errors if users 
> set values containing spaces or newlines.
> See the following YARN commits for reference:
> YARN-3395. FairScheduler: Trim whitespaces when using username for queuename.
> YARN-2869. CapacityScheduler should trim sub queue names when parse 
> configuration.
> YARN-2843. Fixed NodeLabelsManager to trim inputs for hosts and labels so as 
> to make them work correctly.
> and many other Hadoop/HDFS commits (to list just a few):
> HDFS-9708. FSNamesystem.initAuditLoggers() doesn't trim classnames
> HDFS-2799. Trim fs.checkpoint.dir values.
> HADOOP-6578. Configuration should trim whitespace around a lot of value types
> HADOOP-6534. Trim whitespace from directory lists initializing
> A patch is available against trunk.
> {code:title=DockerLinuxContainerRuntime.java|borderStyle=solid}
> @@ -219,9 +219,9 @@ public void initialize(Configuration conf)
>  dockerClient = new DockerClient(conf);
>  allowedNetworks.clear();
>  allowedNetworks.addAll(Arrays.asList(
> -
> conf.getStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
> +
> conf.getTrimmedStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
>  
> YarnConfiguration.DEFAULT_NM_DOCKER_ALLOWED_CONTAINER_NETWORKS)));
> -defaultNetwork = conf.get(
> +defaultNetwork = conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_DEFAULT_CONTAINER_NETWORK,
>  YarnConfiguration.DEFAULT_NM_DOCKER_DEFAULT_CONTAINER_NETWORK);
>  
> @@ -237,7 +237,7 @@ public void initialize(Configuration conf)
>throw new ContainerExecutionException(message);
>  }
>  
> -privilegedContainersAcl = new AccessControlList(conf.get(
> +privilegedContainersAcl = new AccessControlList(conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_PRIVILEGED_CONTAINERS_ACL,
>  YarnConfiguration.DEFAULT_NM_DOCKER_PRIVILEGED_CONTAINERS_ACL));
>}
> @@ -439,7 +439,7 @@ public void launchContainer(ContainerRuntimeContext ctx)
>  LOCALIZED_RESOURCES);
>  @SuppressWarnings("unchecked")
>  List userLocalDirs = ctx.getExecutionAttribute(USER_LOCAL_DIRS);
> -Set capabilities = new HashSet<>(Arrays.asList(conf.getStrings(
> +Set capabilities = new 
> HashSet<>(Arrays.asList(conf.getTrimmedStrings(
>  YarnConfiguration.NM_DOCKER_CONTAINER_CAPABILITIES,
>  YarnConfiguration.DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES)));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4396) Log the trace information on FSAppAttempt#assignContainer

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623421#comment-15623421
 ] 

Hudson commented on YARN-4396:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10734 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10734/])
YARN-4396. Log the trace information on FSAppAttempt#assignContainer (templedf: 
rev 2528bea67ff80fae597f10e26c5f70d601af9fb1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java


> Log the trace information on FSAppAttempt#assignContainer
> -
>
> Key: YARN-4396
> URL: https://issues.apache.org/jira/browse/YARN-4396
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications, fairscheduler
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: oct16-easy
> Fix For: 2.9.0
>
> Attachments: YARN-4396.001.patch, YARN-4396.002.patch, 
> YARN-4396.003.patch, YARN-4396.004.patch, YARN-4396.005.patch
>
>
> When I configure yarn.scheduler.fair.locality.threshold.node and 
> yarn.scheduler.fair.locality.threshold.rack to enable this feature, I get no 
> detailed information about the locality of assigned containers. That matters 
> because it can lead to delay scheduling and affect my cluster. With this 
> information, I can adjust the parameters for the cluster.
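A sketch of the kind of trace logging being asked for (variable names are assumptions, not the committed code):

{code}
// Sketch: log why a container was (or wasn't) assigned at each locality
// level, guarded so it costs nothing unless trace logging is enabled.
if (LOG.isTraceEnabled()) {
  LOG.trace("Assign attempt on node " + node.getNodeName()
      + ": allowed locality " + allowedLocality
      + ", request priority " + schedulerKey.getPriority());
}
{code}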



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5793) Trim configuration values in DockerLinuxContainerRuntime

2016-10-31 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5793:
---
Assignee: Tianyin Xu

> Trim configuration values in DockerLinuxContainerRuntime
> 
>
> Key: YARN-5793
> URL: https://issues.apache.org/jira/browse/YARN-5793
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Attachments: YARN-5793..patch, YARN-5793.0001.patch
>
>
> The current implementation of {{DockerLinuxContainerRuntime}} does not follow 
> the practice of trimming configuration values. This leads to errors if users 
> set values containing spaces or newlines.
> See the following YARN commits for reference:
> YARN-3395. FairScheduler: Trim whitespaces when using username for queuename.
> YARN-2869. CapacityScheduler should trim sub queue names when parse 
> configuration.
> YARN-2843. Fixed NodeLabelsManager to trim inputs for hosts and labels so as 
> to make them work correctly.
> and many other Hadoop/HDFS commits (to list just a few):
> HDFS-9708. FSNamesystem.initAuditLoggers() doesn't trim classnames
> HDFS-2799. Trim fs.checkpoint.dir values.
> HADOOP-6578. Configuration should trim whitespace around a lot of value types
> HADOOP-6534. Trim whitespace from directory lists initializing
> A patch is available against trunk.
> {code:title=DockerLinuxContainerRuntime.java|borderStyle=solid}
> @@ -219,9 +219,9 @@ public void initialize(Configuration conf)
>  dockerClient = new DockerClient(conf);
>  allowedNetworks.clear();
>  allowedNetworks.addAll(Arrays.asList(
> -
> conf.getStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
> +
> conf.getTrimmedStrings(YarnConfiguration.NM_DOCKER_ALLOWED_CONTAINER_NETWORKS,
>  
> YarnConfiguration.DEFAULT_NM_DOCKER_ALLOWED_CONTAINER_NETWORKS)));
> -defaultNetwork = conf.get(
> +defaultNetwork = conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_DEFAULT_CONTAINER_NETWORK,
>  YarnConfiguration.DEFAULT_NM_DOCKER_DEFAULT_CONTAINER_NETWORK);
>  
> @@ -237,7 +237,7 @@ public void initialize(Configuration conf)
>throw new ContainerExecutionException(message);
>  }
>  
> -privilegedContainersAcl = new AccessControlList(conf.get(
> +privilegedContainersAcl = new AccessControlList(conf.getTrimmed(
>  YarnConfiguration.NM_DOCKER_PRIVILEGED_CONTAINERS_ACL,
>  YarnConfiguration.DEFAULT_NM_DOCKER_PRIVILEGED_CONTAINERS_ACL));
>}
> @@ -439,7 +439,7 @@ public void launchContainer(ContainerRuntimeContext ctx)
>  LOCALIZED_RESOURCES);
>  @SuppressWarnings("unchecked")
>  List userLocalDirs = ctx.getExecutionAttribute(USER_LOCAL_DIRS);
> -Set capabilities = new HashSet<>(Arrays.asList(conf.getStrings(
> +Set capabilities = new 
> HashSet<>(Arrays.asList(conf.getTrimmedStrings(
>  YarnConfiguration.NM_DOCKER_CONTAINER_CAPABILITIES,
>  YarnConfiguration.DEFAULT_NM_DOCKER_CONTAINER_CAPABILITIES)));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-10-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623378#comment-15623378
 ] 

Karthik Kambatla commented on YARN-5694:


I am not very particular about the method name, but I do think the current name 
does not fully capture the intention.

+1, pending Jenkins. I just manually kick-started it.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.
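A sketch of the intended start condition (field names are assumptions):

{code}
// Sketch: start the verification thread whenever embedded failover is NOT
// responsible for fencing -- i.e. HA is off, or HA is on without the
// embedded elector.
if (!haEnabled || !useEmbeddedElector) {
  verifyActiveStatusThread = new VerifyActiveStatusThread();
  verifyActiveStatusThread.start();
}
{code}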



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623382#comment-15623382
 ] 

Hadoop QA commented on YARN-5800:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
48s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 54s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.util.TestFSDownload |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5800 |
| GITHUB PR | https://github.com/apache/hadoop/pull/149 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 6096e3596e6f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2528bea |
| Default Java | 1.8.0_101 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/13703/artifact/patchprocess/branch-mvninstall-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13703/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13703/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13703/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> 

[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623362#comment-15623362
 ] 

Hudson commented on YARN-4907:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10733 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10733/])
YARN-4907. Make all MockRM#waitForState consistent. (Contributed by (templedf: 
rev cc2c993a8af6265b9881550501fd16f783519e03)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java


> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-4907.001.patch, YARN-4907.002.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.  
> 2. Some {{waitForState}} don't have a timeout, so they can wait forever. 
> 3. Some {{waitForState}} use LOG.info and others use {{System.out.println}} 
> to print messages.
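A sketch of the consistent shape being converged on (illustrative only, not the committed code):

{code}
// Sketch: every waitForState takes a timeout, returns a boolean, and logs
// via LOG.info instead of System.out.
public boolean waitForState(ApplicationId appId, RMAppState expected,
    long timeoutMs) throws InterruptedException {
  RMApp app = getRMContext().getRMApps().get(appId);
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (app.getState() != expected) {
    if (System.currentTimeMillis() > deadline) {
      LOG.info("Timed out waiting for state " + expected
          + "; app state is " + app.getState());
      return false;
    }
    Thread.sleep(100);
  }
  return true;
}
{code}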



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623358#comment-15623358
 ] 

Varun Saxena commented on YARN-5800:


Actually, I was under the impression that the PR would be automatically picked 
up by Jenkins. 
Not sure why the build wasn't invoked. Maybe I am mistaken. :)

> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> {{yarn.nodemanager.container-executor.class}} property.  Since admins 
> shouldn't typically be modifying this file, this comment isn't useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3477) TimelineClientImpl swallows exceptions

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623345#comment-15623345
 ] 

Hadoop QA commented on YARN-3477:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-3477 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3477 |
| GITHUB PR | https://github.com/apache/hadoop/pull/47 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13704/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TimelineClientImpl swallows exceptions
> --
>
> Key: YARN-3477
> URL: https://issues.apache.org/jira/browse/YARN-3477
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: oct16-easy
> Attachments: YARN-3477-001.patch, YARN-3477-002.patch, 
> YARN-3477-trunk.003.patch, YARN-3477-trunk.004.patch
>
>
> If the timeline client fails more times than the retry count, the original 
> exception is not thrown. Instead some runtime exception is raised saying 
> "retries run out".
> # The failing exception should be rethrown, ideally via 
> NetUtils.wrapException, to include the URL of the failing endpoint.
> # Otherwise, the raised RTE should (a) state that URL and (b) set the 
> original fault as the inner cause.
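A sketch of option 1 ({{resURI}} is an assumed field holding the endpoint; the actual patch may differ):

{code}
// Sketch: once retries are exhausted, rethrow the last failure wrapped
// with the endpoint so its URL shows up in the stack trace.
} catch (IOException e) {
  throw NetUtils.wrapException(resURI.getHost(), resURI.getPort(),
      NetUtils.getHostname(), 0, e);
}
{code}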



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5783) Unit tests to verify the identification of starved applications

2016-10-31 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623344#comment-15623344
 ] 

Karthik Kambatla commented on YARN-5783:


bq. If you're assuming a single-threaded context, there's no need to null out 
the appBeingProcessed in take().
By nulling out before the blocking call, we are avoiding a match when adding a 
starved app.

For the scenario outlined, appBeingProcessed is set to app1 only while the 
preemption thread is processing it, trying to identify nodes that would match. 
As soon as that processing is done, appBeingProcessed is reset to null. If the 
app continues to be starved, the update thread attempts to queue it. The app is 
added to FSStarvedApps any time it is not actively being processed by the 
preemption thread. 

bq. I don't see where tracking the app being processed is needed for the tests.
While implementing the test, I noticed a couple of things - (1) preemption 
kicks in fast enough that it does not make much sense to look at the latest 
snapshot of starved apps, (2) The app is added multiple times if we don't track 
appBeingProcessed. 

In general, since the first patch has no tests, adding more tests will lead to 
finding issues that need to be fixed. 

Also, this branch is unusual in that it could all be one JIRA instead of 
multiple JIRAs/commits. The branch exists only so that development happens in 
the open and reviews stay small. For the merge, I would likely want to commit 
the entire branch as a single commit. So, as long as the final output is 
reasonable, I wouldn't worry too much about which JIRA adds what. 
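A condensed sketch of the pattern described above (class shape assumed, not the actual branch code):

{code}
// Sketch: track the in-flight app so the update thread neither re-queues an
// app the preemption thread is working on nor drops a still-starved one.
class FSStarvedApps {
  private final BlockingQueue<FSAppAttempt> apps = new LinkedBlockingQueue<>();
  private volatile FSAppAttempt appBeingProcessed;

  void addStarvedApp(FSAppAttempt app) {
    // Skip duplicates: already queued, or currently being processed.
    if (!app.equals(appBeingProcessed) && !apps.contains(app)) {
      apps.add(app);
    }
  }

  FSAppAttempt take() throws InterruptedException {
    appBeingProcessed = null;   // allow re-queueing while we block
    FSAppAttempt app = apps.take();
    appBeingProcessed = app;    // mark in-flight until processing finishes
    return app;
  }
}
{code}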

> Unit tests to verify the identification of starved applications
> ---
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4907) Make all MockRM#waitForState consistent.

2016-10-31 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623332#comment-15623332
 ] 

Yufei Gu commented on YARN-4907:


Thank you for the review and commit, [~templedf]. Thanks for the review, 
[~miklos.szeg...@cloudera.com].

> Make all MockRM#waitForState consistent. 
> -
>
> Key: YARN-4907
> URL: https://issues.apache.org/jira/browse/YARN-4907
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-4907.001.patch, YARN-4907.002.patch
>
>
> There are some inconsistencies among these {{waitForState}} in {{MockRM}}:
> 1. Some {{waitForState}} return a boolean while others don't.  
> 2. Some {{waitForState}} don't have a timeout, they can wait for ever. 
> 3. Some {{waitForState}} use LOG.info and others use {{System.out.println}} 
> to print messages.
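
For illustration, a consistent variant addressing all three points might look 
like the following sketch; the signature, polling interval, and log wording are 
assumptions, not the committed patch:

{code}
// All waitForState variants: return a boolean, take a timeout, log via LOG
// (a LoggerFactory logger is assumed to exist on MockRM).
public boolean waitForState(RMApp app, RMAppState expected, long timeoutMs)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (app.getState() != expected) {
    if (System.currentTimeMillis() > deadline) {
      LOG.info("App " + app.getApplicationId() + " did not reach " + expected
          + " within " + timeoutMs + " ms");
      return false;  // bounded wait instead of waiting forever
    }
    LOG.info("Waiting for state " + expected + ", current: " + app.getState());
    Thread.sleep(100);
  }
  return true;
}
{code}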



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5783) Unit tests to verify the identification of starved applications

2016-10-31 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5783:
---
Attachment: yarn-5783.YARN-4752.3.patch

> Unit tests to verify the identification of starved applications
> ---
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5001) Aggregated Logs root directory is created with wrong group if nonexistent

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623316#comment-15623316
 ] 

Hadoop QA commented on YARN-5001:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
54s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836210/yarn5001.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5ae9fbc2a76c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a9d68d2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13701/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13701/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Aggregated Logs root directory is created with wrong group if nonexistent 
> --
>
> Key: YARN-5001
> URL: https://issues.apache.org/jira/browse/YARN-5001
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation, nodemanager, security
>Affects Versions: 2.7.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: oct16-easy
>   

[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623310#comment-15623310
 ] 

Hadoop QA commented on YARN-5587:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
10s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-yarn-project_hadoop-yarn generated 0 new + 38 
unchanged - 1 fixed = 38 total (was 39) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 32 new + 897 unchanged - 7 fixed = 929 total (was 904) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 4 new + 156 unchanged - 0 fixed = 160 total (was 156) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 11s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 44s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 54s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.client.api.impl.TestAMRMClient |
|   | hadoop.yarn.client.api.impl.TestNMClient |
|   | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| 

[jira] [Commented] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623300#comment-15623300
 ] 

Hadoop QA commented on YARN-5800:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5800 |
| GITHUB PR | https://github.com/apache/hadoop/pull/149 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ec0d39dd5f21 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a9d68d2 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13702/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13702/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> {{yarn.nodemanager.container-executor.class}} property.  Since admins 
> shouldn't typically be modifying this file, this comment isn't useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-3645) ResourceManager can't start successfully if attribute value of "aclSubmitApps" is null in fair-scheduler.xml

2016-10-31 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623286#comment-15623286
 ] 

Konstantinos Karanasos commented on YARN-3645:
--

Thanks for the new patch, [~gliptak].

I see there is one test failing. Can you please check if that is related?
Otherwise, the patch looks good to me.

bq. Elements "aclSubmitApps", "aclAdministerApps", "aclAdministerReservations", 
"aclListReservations", "aclSubmitReservations" do not call trim() in the 
current code. Are these also expected to call trim()?
[~kasha] If those properties should also call trim(), then we can push trim() 
inside the readFieldText() method to simplify the code.

Other than that, and after double-checking that the test failure is not 
related, let's commit the patch.
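
To illustrate the suggestion, a minimal sketch of such a helper, assuming the 
readFieldText name from the discussion above (the body is illustrative, not 
the actual patch):

{code}
import org.w3c.dom.Element;
import org.w3c.dom.Text;

// Null-safe text extraction with trim() pushed inside, so every ACL element
// gets the same handling and an empty element no longer causes an NPE.
private static String readFieldText(Element field) {
  if (field == null || field.getFirstChild() == null) {
    return null;
  }
  String data = ((Text) field.getFirstChild()).getData();
  return data == null ? null : data.trim();
}
{code}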

> ResourceManager can't start successfully if attribute value of "aclSubmitApps" 
> is null in fair-scheduler.xml
> 
>
> Key: YARN-3645
> URL: https://issues.apache.org/jira/browse/YARN-3645
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: zhoulinlin
>Assignee: Gabor Liptak
>  Labels: oct16-easy
> Attachments: YARN-3645.1.patch, YARN-3645.2.patch, YARN-3645.3.patch, 
> YARN-3645.4.patch, YARN-3645.5.patch, YARN-3645.patch
>
>
> The "aclSubmitApps" is configured in fair-scheduler.xml like below:
> 
> 
>  
> The resourcemanager log:
> {noformat}
> 2015-05-14 12:59:48,623 INFO org.apache.hadoop.service.AbstractService: 
> Service ResourceManager failed in state INITED; cause: 
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: Failed 
> to initialize FairScheduler
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: Failed 
> to initialize FairScheduler
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:493)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:920)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:240)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1159)
> Caused by: java.io.IOException: Failed to initialize FairScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1301)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1318)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   ... 7 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.loadQueue(AllocationFileLoaderService.java:458)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:337)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1299)
>   ... 9 more
> 2015-05-14 12:59:48,623 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning 
> to standby state
> 2015-05-14 12:59:48,623 INFO 
> com.zte.zdh.platformplugin.factory.YarnPlatformPluginProxyFactory: plugin 
> transitionToStandbyIn
> 2015-05-14 12:59:48,623 WARN org.apache.hadoop.service.AbstractService: When 
> stopping the service ResourceManager : java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> com.zte.zdh.platformplugin.factory.YarnPlatformPluginProxyFactory.transitionToStandbyIn(YarnPlatformPluginProxyFactory.java:71)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToStandby(ResourceManager.java:997)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1058)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>   at 
> 
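
A minimal fair-scheduler.xml that reproduces the NullPointerException above 
would be an allocation file whose aclSubmitApps element has no text value; the 
queue name here is illustrative, since the original snippet is not preserved 
in this message:

{code}
<allocations>
  <queue name="queue1">
    <aclSubmitApps></aclSubmitApps>
  </queue>
</allocations>
{code}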

[jira] [Updated] (YARN-5800) Delete LinuxContainerExecutor comment from yarn-default.xml

2016-10-31 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5800:
---
Attachment: YARN-5800.001.patch

I think you forgot to attach the patch [~varun_saxena]. :)

> Delete LinuxContainerExecutor comment from yarn-default.xml
> ---
>
> Key: YARN-5800
> URL: https://issues.apache.org/jira/browse/YARN-5800
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Templeton
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-5800.001.patch
>
>
> In {{yarn-default.xml}} there's an extraneous comment line in the 
> {{yarn.nodemanager.container-executor.class}} property.  Since admins 
> shouldn't typically be modifying this file, this comment isn't useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5001) Aggregated Logs root directory is created with wrong group if nonexistent

2016-10-31 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5001:
-
Attachment: yarn5001.003.patch

Uploading another patch to address Jason's latest comments. Thanks for your 
quick review, [~jlowe].

> Aggregated Logs root directory is created with wrong group if nonexistent 
> --
>
> Key: YARN-5001
> URL: https://issues.apache.org/jira/browse/YARN-5001
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation, nodemanager, security
>Affects Versions: 2.7.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: oct16-easy
> Attachments: yarn5001.001.patch, yarn5001.002.patch, 
> yarn5001.003.patch
>
>
> The directory /tmp/logs, where the aggregated logs go, is supposed to be read 
> by the JHS. But if it is not created beforehand, it will be created by the 
> NodeManager with its group set to the HDFS supergroup.  Files created 
> under this directory will then inherit the supergroup as their group. This 
> causes the JHS to fail to read the container log files under that directory if 
> the JHS is not running as a user that belongs to the supergroup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623112#comment-15623112
 ] 

Hadoop QA commented on YARN-5761:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 910 unchanged - 17 fixed = 912 total (was 927) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 
32s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836199/YARN-5761.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0615bb1c36db 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f646fe3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13699/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13699/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13699/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: 

[jira] [Commented] (YARN-5001) Aggregated Logs root directory is created with wrong group if nonexistent

2016-10-31 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623085#comment-15623085
 ] 

Jason Lowe commented on YARN-5001:
--

Thanks for updating the patch!

Looks good with one nit: we don't need two variables to track whether we have a 
primary group. hasPrimaryGroup is simply primaryGroup != null, and it's simpler 
to check that directly than to track it in a boolean. It may be worth logging a 
warning when there is no primary group, since the group permissions may then 
need admin intervention to keep the logs visible to the JHS.
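
A minimal sketch of the suggested shape, with field and method names assumed 
from the discussion rather than taken from the patch:

{code}
// Assumes: org.apache.hadoop.security.UserGroupInformation, java.io.IOException,
// and a FileSystem 'remoteFS' plus Path 'remoteRootLogDir' in scope.
String primaryGroup = null;
try {
  primaryGroup = UserGroupInformation.getLoginUser().getPrimaryGroupName();
} catch (IOException e) {
  // no primary group resolvable for this user
}
if (primaryGroup != null) {                  // no separate boolean needed
  remoteFS.setOwner(remoteRootLogDir, null, primaryGroup);
} else {
  LOG.warn("No primary group found for " + remoteRootLogDir
      + "; group permissions may need admin intervention so the JHS"
      + " can read the aggregated logs.");
}
{code}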


> Aggregated Logs root directory is created with wrong group if nonexistent 
> --
>
> Key: YARN-5001
> URL: https://issues.apache.org/jira/browse/YARN-5001
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation, nodemanager, security
>Affects Versions: 2.7.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: oct16-easy
> Attachments: yarn5001.001.patch, yarn5001.002.patch
>
>
> The directory /tmp/logs, where the aggregated logs go, is supposed to be read 
> by the JHS. But if it is not created beforehand, it will be created by the 
> NodeManager with its group set to the HDFS supergroup.  Files created 
> under this directory will then inherit the supergroup as their group. This 
> causes the JHS to fail to read the container log files under that directory if 
> the JHS is not running as a user that belongs to the supergroup.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.

2016-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623042#comment-15623042
 ] 

Hadoop QA commented on YARN-5746:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 67 unchanged - 0 fixed = 68 total (was 67) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 
11s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836187/YARN-5746.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f5dfe6298158 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f646fe3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13698/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13698/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13698/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> The state of the parentQueue and its childQueues should be synchronized.
> 
>
>   

[jira] [Commented] (YARN-65) Reduce RM app memory footprint once app has completed

2016-10-31 Thread Denis Bolshakov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-65?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623028#comment-15623028
 ] 

Denis Bolshakov commented on YARN-65:
-

Thanks for such a detailed comment. I will investigate more deeply.

Kind regards,
Denis

On 31 Oct 2016 at 21:00, "Jason Lowe (JIRA)" wrote:



> Reduce RM app memory footprint once app has completed
> -
>
> Key: YARN-65
> URL: https://issues.apache.org/jira/browse/YARN-65
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 0.23.3
>Reporter: Jason Lowe
>
> The ResourceManager holds onto a configurable number of completed 
> applications (yarn.resource.max-completed-applications, defaults to 1), 
> and the memory footprint of these completed applications can be significant.  
> For example, the {{submissionContext}} in RMAppImpl contains references to 
> protocolbuffer objects and other items that probably aren't necessary to keep 
> around once the application has completed.  We could significantly reduce the 
> memory footprint of the RM by releasing objects that are no longer necessary 
> once an application completes.
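
A minimal sketch of the idea, with field names assumed rather than taken from 
RMAppImpl:

{code}
// On the transition to a final state, drop references that are only needed
// while the app is running, so the retained completed apps stay small.
private void clearReferencesOnCompletion() {
  // submissionContext holds protobuf-backed objects (AM container spec,
  // local resources, tokens) that nothing reads after completion.
  this.submissionContext = null;
}
{code}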



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5788) AM limit resource in UI and REST not updated after -replaceLabelsOnNode

2016-10-31 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15623022#comment-15623022
 ] 

Varun Saxena commented on YARN-5788:


Thanks [~Naganarasimha] and [~leftnoteasy] for sharing your views.

I also agree that, performance-wise, it should be fine, because 
replaceLabelsOnNode calls should be infrequent and batched.

bq. note that the same happens for NodeAdded event too (where in we could have 
called for specific partition alone).
True that.

I will commit it later today unless there are further comments.


> AM limit resource in UI and REST not updated after -replaceLabelsOnNode
> ---
>
> Key: YARN-5788
> URL: https://issues.apache.org/jira/browse/YARN-5788
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Screenshot from 2016-10-27 22-31-46.png, 
> YARN-5788.0001.patch, YARN-5788.0002.patch
>
>
> Steps to reproduce
> ==
> # Enable node labels
> # Configure capacity scheduler xml with label capacity configuration
> # Add labelx to cluster
> # Replace Node labels on node1
> # Check scheduler and Rest API
> *Actual*
> AM limit in the scheduler UI is still based on the old resource
> *Expected*
> AM limit should be updated based on the new partition resource
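
(Step 4 above corresponds to a command along the following lines; the node 
name is illustrative:)

{code}
yarn rmadmin -replaceLabelsOnNode "node1=labelx"
{code}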



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-10-31 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622995#comment-15622995
 ] 

Sangjin Lee commented on YARN-5265:
---

Thanks [~jrottinghuis] for the patch! I think it's almost there.

(YarnConfiguration.java)
- l.2008-2011: cosmetic: the indentation is off by 1

(TimelineServiceV2.md)
- We might want to say “For example,” before the snippet of the configuration 
to make it clearer it is an example value
- l.253-266: the snippet of the configuration should be escaped so as not to 
break html. It can be escaped by {{```}} before and after it.

We should look at {{TimelineStorageUtils}} later to see if we can refactor 
HBase-related methods into their own. Eventually we should have a clean 
separation between generic timeline service code and the HBase-specific code.
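
To illustrate the escaping, the snippet in TimelineServiceV2.md would be 
wrapped like this; the property name and value here are placeholders, not 
taken from the patch:

{code}
```
<property>
  <name>yarn.timeline-service.hbase.configuration.file</name>
  <value>file:/etc/hbase/conf/hbase-site.xml</value>
</property>
```
{code}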


> Make HBase configuration for the timeline service configurable
> --
>
> Key: YARN-5265
> URL: https://issues.apache.org/jira/browse/YARN-5265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>  Labels: YARN-5355, oct16-medium
> Attachments: ATS v2 cluster deployment v1.png, 
> YARN-5265-YARN-2928.01.patch, YARN-5265-YARN-2928.02.patch, 
> YARN-5265-YARN-2928.03.patch, YARN-5265-YARN-2928.04.patch, 
> YARN-5265-YARN-2928.05.patch, YARN-5265-YARN-5355.06.patch, 
> YARN-5265-YARN-5355.07.patch, YARN-5265-YARN-5355.08.patch, 
> YARN-5265-YARN-5355.09.patch, YARN-5265-YARN-5355.10.patch
>
>
> Currently we create "default" HBase configurations; this works as long as the 
> user places the appropriate configuration on the classpath.
> This works fine for a standalone Hadoop cluster.
> However, if a user wants to monitor an HBase cluster and has a separate ATS 
> HBase cluster, then it can become tricky to create the right classpath for 
> the nodemanagers and still let tasks keep their separate configs.
> It will be much easier to add a yarn configuration that lets cluster admins 
> configure which HBase cluster to write ATS metrics to.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5587) Add support for resource profiles

2016-10-31 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5587:

Attachment: YARN-5587-YARN-3926.007.patch

Uploaded a new patch to fix the findbugs errors. [~asuresh] - question about 
RemoteRequestsTable - is it meant to be public to AMs or can we change APIs for 
it without any issues?

> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>  Labels: oct16-hard
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch, YARN-5587-YARN-3926.005.patch, 
> YARN-5587-YARN-3926.006.patch, YARN-5587-YARN-3926.007.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4218) Metric for resource*time that was preempted

2016-10-31 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622971#comment-15622971
 ] 

Eric Payne commented on YARN-4218:
--

[~lichangleo], as for the trunk patch, please go ahead and fix the 
{{hadoop-yarn-api}} javadoc warnings. I don't think you can fix the 
{{hadoop-yarn-server-resourcemanager}} warnings because it's related to the 
underscore:
bq. RMAppBlock.java:179: warning: '_' used as an identifier

> Metric for resource*time that was preempted
> ---
>
> Key: YARN-4218
> URL: https://issues.apache.org/jira/browse/YARN-4218
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4218.2.patch, YARN-4218.2.patch, YARN-4218.2.patch, 
> YARN-4218.2.patch, YARN-4218.3.patch, YARN-4218.4.patch, YARN-4218.5.patch, 
> YARN-4218.branch-2.2.patch, YARN-4218.branch-2.patch, YARN-4218.patch, 
> YARN-4218.trunk.2.patch, YARN-4218.trunk.3.patch, YARN-4218.trunk.patch, 
> YARN-4218.wip.patch, screenshot-1.png, screenshot-2.png, screenshot-3.png
>
>
> After YARN-415 we have the ability to track the resource*time footprint of a 
> job and preemption metrics shows how many containers were preempted on a job. 
> However we don't have a metric showing the resource*time footprint cost of 
> preemption. In other words, we know how many containers were preempted but we 
> don't have a good measure of how much work was lost as a result of preemption.
> We should add this metric so we can analyze how much work preemption is 
> costing on a grid and better track which jobs were heavily impacted by it. A 
> job that has 100 containers preempted that only lasted a minute each and were 
> very small is going to be less impacted than a job that only lost a single 
> container but that container was huge and had been running for 3 days.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-10-31 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622973#comment-15622973
 ] 

Xuan Gong commented on YARN-5761:
-

Fixed the checkstyle issue.

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch
>
>
> Currently, the scheduler code does both queue management and scheduling work. 
> We'd better separate the queue manager out of the scheduler logic. That would 
> make it much easier and safer to extend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5761) Separate QueueManager from Scheduler

2016-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5761:

Attachment: YARN-5761.2.patch

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch
>
>
> Currently, the scheduler code does both queue management and scheduling work. 
> We'd better separate the queue manager out of the scheduler logic. That would 
> make it much easier and safer to extend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5268) DShell AM fails java.lang.InterruptedException

2016-10-31 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5268:
-
Assignee: (was: Tan, Wangda)

> DShell AM fails java.lang.InterruptedException
> --
>
> Key: YARN-5268
> URL: https://issues.apache.org/jira/browse/YARN-5268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Priority: Critical
>  Labels: oct16-easy
> Attachments: YARN-5268.1.patch
>
>
> Distributed Shell AM failed with the following error
> {Code}
> 16/06/16 11:08:10 INFO impl.NMClientAsyncImpl: NMClient stopped.
> 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Application 
> completed. Signalling finish to RM
> 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Diagnostics., 
> total=16, completed=19, allocated=21, failed=4
> 16/06/16 11:08:10 INFO impl.AMRMClientImpl: Waiting for application to be 
> successfully unregistered.
> 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Application Master 
> failed. exiting
> 16/06/16 11:08:10 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting 
> for queue
> java.lang.InterruptedException
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)
> End of LogType:AppMaster.stderr
> {Code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5268) DShell AM fails java.lang.InterruptedException

2016-10-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622947#comment-15622947
 ] 

Wangda Tan commented on YARN-5268:
--

Thanks for the review, [~varun_saxena]!

This patch is small, but it is a behavior change, and I'm not completely sure 
it is safe. I'm a little short on bandwidth to verify the safety of the patch. 
Unassigning; if anybody is interested in working on this JIRA, please feel 
free to take it!

> DShell AM fails java.lang.InterruptedException
> --
>
> Key: YARN-5268
> URL: https://issues.apache.org/jira/browse/YARN-5268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
>  Labels: oct16-easy
> Attachments: YARN-5268.1.patch
>
>
> Distributed Shell AM failed with the following error
> {Code}
> 16/06/16 11:08:10 INFO impl.NMClientAsyncImpl: NMClient stopped.
> 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Application 
> completed. Signalling finish to RM
> 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Diagnostics., 
> total=16, completed=19, allocated=21, failed=4
> 16/06/16 11:08:10 INFO impl.AMRMClientImpl: Waiting for application to be 
> successfully unregistered.
> 16/06/16 11:08:10 INFO distributedshell.ApplicationMaster: Application Master 
> failed. exiting
> 16/06/16 11:08:10 INFO impl.AMRMClientAsyncImpl: Interrupted while waiting 
> for queue
> java.lang.InterruptedException
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at 
> org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:287)
> End of LogType:AppMaster.stderr
> {Code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-10-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622942#comment-15622942
 ] 

Wangda Tan commented on YARN-5697:
--

Thanks [~Naganarasimha] for the summary; I agree with all your points. Just go 
ahead and review/commit the patch if it does not break compatibility :)



> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.
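
For context, a hedged sketch of CliParser-style handling with Apache Commons 
CLI; the option names are illustrative, and this is not the attached patch:

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

Options opts = new Options();
opts.addOption("refreshQueues", false, "Reload the queues' configuration");
opts.addOption("replaceLabelsOnNode", true, "node-to-labels mappings");

// DefaultParser needs commons-cli 1.3+; older versions used GnuParser.
CommandLine cli = new DefaultParser().parse(opts, args);
if (cli.hasOption("refreshQueues")) {
  // dispatch to the handler instead of hand-walking args[]
}
String mapping = cli.getOptionValue("replaceLabelsOnNode");
{code}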



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5556) Support for deleting queues without requiring a RM restart

2016-10-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622918#comment-15622918
 ] 

Wangda Tan commented on YARN-5556:
--

Sorry for jumping in late; I'm not sure whether deleting queues should build on 
YARN-5746. According to the design doc of YARN-5746, a queue can be deleted 
only once its state becomes STOPPED.

> Support for deleting queues without requiring a RM restart
> --
>
> Key: YARN-5556
> URL: https://issues.apache.org/jira/browse/YARN-5556
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Naganarasimha G R
> Attachments: YARN-5556.v1.001.patch, YARN-5556.v1.002.patch, 
> YARN-5556.v1.003.patch, YARN-5556.v1.004.patch
>
>
> Today, we can add or modify queues without restarting the RM, via a CS 
> refresh. But to delete a queue, we have to restart the ResourceManager. We 
> could support deleting queues without requiring an RM restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3854) Add localization support for docker images

2016-10-31 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun reassigned YARN-3854:
---

Assignee: luhuichun  (was: Zhankun Tang)

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: luhuichun
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here, with different trade-offs/issues: image archives on HDFS + docker load, 
> docker pull during the localization phase, or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5670) Add support for Docker image clean up

2016-10-31 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun reassigned YARN-5670:
---

Assignee: luhuichun

> Add support for Docker image clean up
> -
>
> Key: YARN-5670
> URL: https://issues.apache.org/jira/browse/YARN-5670
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: luhuichun
>
> Regarding Docker image localization, we also need a way to clean up 
> old/stale Docker images to save storage space. We may extend the deletion 
> service to utilize "docker rm" to do this.
> This is related to YARN-3854 and may depend on its implementation. Please 
> refer to YARN-3854 for Docker image localization details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5669) Add support for Docker pull

2016-10-31 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun reassigned YARN-5669:
---

Assignee: luhuichun

> Add support for Docker pull
> ---
>
> Key: YARN-5669
> URL: https://issues.apache.org/jira/browse/YARN-5669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: luhuichun
>
> We need to add support for docker pull to enable Docker image localization. 
> Refer to YARN-3854 for the details.
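A small sketch of a pull step that skips images already present locally; the 
image name is hypothetical:

{code}
# pull only when the image is not already local
docker inspect --type=image registry.example.com/centos:7 >/dev/null 2>&1 \
  || docker pull registry.example.com/centos:7
{code}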



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.

2016-10-31 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5746:

Attachment: YARN-5746.3.patch

> The state of the parentQueue and its childQueues should be synchronized.
> 
>
> Key: YARN-5746
> URL: https://issues.apache.org/jira/browse/YARN-5746
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-easy
> Attachments: YARN-5746.1.patch, YARN-5746.2.patch, YARN-5746.3.patch
>
>
> The states of the parentQueue and its childQueues need to be synchronized. 
> * If the state of the parentQueue becomes STOPPED, the states of its 
> childQueues need to become STOPPED as well. 
> * If we change the state of a queue to RUNNING, we should make sure the 
> states of all its ancestors are RUNNING. Otherwise, we need to fail this 
> operation.
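A minimal Java sketch of the two invariants above; the Queue type and method 
names are illustrative, not the actual CapacityScheduler classes:

{code}
import java.util.List;

class QueueStateSync {
  enum QueueState { RUNNING, STOPPED }

  interface Queue {
    String getName();
    Queue getParent();               // null for the root queue
    List<Queue> getChildQueues();
    QueueState getState();
    void setState(QueueState s);
  }

  // Invariant 1: stopping a parent stops its entire subtree.
  static void stopQueue(Queue q) {
    q.setState(QueueState.STOPPED);
    for (Queue child : q.getChildQueues()) {
      stopQueue(child);
    }
  }

  // Invariant 2: a queue may become RUNNING only if every ancestor is RUNNING.
  static void startQueue(Queue q) {
    for (Queue p = q.getParent(); p != null; p = p.getParent()) {
      if (p.getState() != QueueState.RUNNING) {
        throw new IllegalStateException(
            "Cannot start " + q.getName() + ": ancestor " + p.getName()
            + " is not RUNNING");
      }
    }
    q.setState(QueueState.RUNNING);
  }
}
{code}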



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5534) Allow whitelisted volume mounts

2016-10-31 Thread luhuichun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuichun updated YARN-5534:

Attachment: YARN-5534.001.patch

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: luhuichun
> Attachments: YARN-5534.001.patch
>
>
> 1. Introduction
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a Docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security 
> risk.
> 3. Possible Solutions
> One approach to providing safe mounts is to allow the cluster administrator 
> to configure a set of parent directories as whitelisted mount directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor performs mount checking, only the allowed directories or 
> their sub-directories can be mounted.
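A hedged Java sketch of the whitelist check this proposal implies; the class 
name, and the assumption that the property holds a comma-separated list of 
parent directories, are illustrative:

{code}
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

class MountWhitelist {
  private final List<Path> allowedParents = new ArrayList<>();

  // whitelist is the value of yarn.nodemanager.volume-mounts.white-list,
  // e.g. "/etc/hadoop,/usr/lib/jvm"
  MountWhitelist(String whitelist) {
    for (String dir : whitelist.split(",")) {
      allowedParents.add(Paths.get(dir.trim()).normalize());
    }
  }

  // A requested host path is allowed only if it equals, or lies under,
  // a whitelisted parent. normalize() collapses ".." segments so a
  // request like /etc/hadoop/../passwd cannot escape the whitelist.
  boolean isAllowed(String hostPath) {
    Path requested = Paths.get(hostPath).normalize();
    for (Path parent : allowedParents) {
      if (requested.startsWith(parent)) {
        return true;
      }
    }
    return false;
  }
}
{code}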



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5788) AM limit resource in UI and REST not updated after -replaceLabelsOnNode

2016-10-31 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622891#comment-15622891
 ] 

Wangda Tan commented on YARN-5788:
--

Thanks [~bibinchundatt] for reporting/working on the patch and 
[~varun_saxena]/[~Naganarasimha] for reviews.

The latest patch looks good to me, +1.

I agree that a node label update is not a very frequent operation in known use 
cases, and this patch only refreshes the cluster resource once per batch 
update.

Thoughts?

> AM limit resource in UI and REST not updated after -replaceLabelsOnNode
> ---
>
> Key: YARN-5788
> URL: https://issues.apache.org/jira/browse/YARN-5788
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Screenshot from 2016-10-27 22-31-46.png, 
> YARN-5788.0001.patch, YARN-5788.0002.patch
>
>
> Steps to reproduce
> ==
> # Enable node labels
> # Configure the capacity scheduler xml with label capacity configuration
> # Add labelx to the cluster
> # Replace node labels on node1
> # Check the scheduler UI and REST API
> *Actual*
> AM limit in the scheduler UI is still based on the old resource.
> *Expected*
> AM limit should be updated based on the new partition resource.
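For reference, the reproduction steps as concrete commands; the node, port, 
and label names are illustrative:

{code}
yarn rmadmin -addToClusterNodeLabels "labelx"
yarn rmadmin -replaceLabelsOnNode "node1:8041=labelx"
# check the AM limit via the scheduler REST endpoint
curl http://<rm-host>:8088/ws/v1/cluster/scheduler
{code}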



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-10-31 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-5808:


Assignee: Billie Rinaldi

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> We need to add the GC log options below when starting services-api using 
> the yarn-daemon.sh script:
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}
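One possible way to wire these flags in via the environment before invoking 
yarn-daemon.sh; using YARN_OPTS here is an assumption about where the script 
picks up extra JVM options:

{code}
# assumed hook; the exact variable depends on the start script
export YARN_OPTS="$YARN_OPTS -XX:+PrintGC \
  -Xloggc:$YARN_LOG_DIR/services-api-gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps"
{code}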



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.

2016-10-31 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15622889#comment-15622889
 ] 

Xuan Gong commented on YARN-5746:
-

Thanks for the review, [~ozawa] [~templedf].
bq. I would like to suggest that we create new state, QueueState.NOT_FOUND, and 
return it instead of returning null. What do you think?

I would like to keep null as the return value here. In the design doc of 
YARN-5724, we will create a state machine for the queue, and NOT_FOUND does 
not sound like a valid state to me.

Uploaded a new patch to address all other comments.
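For context, a sketch of the two API shapes being debated; the method name 
getQueueState and the scheduler variable are illustrative:

{code}
// Option kept in the patch: null means "no such queue"
QueueState state = scheduler.getQueueState("does-not-exist");
if (state == null) {
  // caller handles the missing queue explicitly
}

// Suggested alternative: a sentinel value
//   if (scheduler.getQueueState(...) == QueueState.NOT_FOUND) { ... }
// Drawback: NOT_FOUND is not a real lifecycle state, so it would have
// to be carried through the YARN-5724 queue state machine.
{code}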

> The state of the parentQueue and its childQueues should be synchronized.
> 
>
> Key: YARN-5746
> URL: https://issues.apache.org/jira/browse/YARN-5746
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-easy
> Attachments: YARN-5746.1.patch, YARN-5746.2.patch
>
>
> The states of the parentQueue and its childQueues need to be synchronized. 
> * If the state of the parentQueue becomes STOPPED, the states of its 
> childQueues need to become STOPPED as well. 
> * If we change the state of a queue to RUNNING, we should make sure the 
> states of all its ancestors are RUNNING. Otherwise, we need to fail this 
> operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


