[jira] [Commented] (YARN-6373) [YARN-3368] Improvements in cluster-overview page in YARN-UI

2017-10-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192498#comment-16192498
 ] 

Sunil G commented on YARN-6373:
---

[~GergelyNovak]
I tested the latest code in trunk. I see that there is a bug in showing the Queue donut 
chart on the Cluster Overview page. I had 3 queues in the cluster, and I can see that 
the Queue donut chart for resource usage is not shown.
Could you please help check this?

I am extremely sorry for the delay on my end with this verification. Thank you for the 
effort.

> [YARN-3368] Improvements in cluster-overview page in YARN-UI
> 
>
> Key: YARN-6373
> URL: https://issues.apache.org/jira/browse/YARN-6373
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Gergely Novák
> Attachments: YARN-6373.001.patch, YARN-6373.002.patch, 
> YARN-6373.003.patch
>
>
> # Make appId and queueName clickable to navigate to their respective pages.
> # Flow layout for panels in cluster-overview page.






[jira] [Commented] (YARN-7279) Fix typo in helper message of ContainerLauncher

2017-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192490#comment-16192490
 ] 

Hudson commented on YARN-7279:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13030 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13030/])
YARN-7279. Fix typo in helper message of ContainerLauncher. Contributed 
(sunilg: rev 592bf2d550a07ea5c5df3ba0ab2952c34d941b4b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java


> Fix typo in helper message of ContainerLauncher
> ---
>
> Key: YARN-7279
> URL: https://issues.apache.org/jira/browse/YARN-7279
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
> Fix For: 3.0.0, 3.1.0
>
> Attachments: YARN-7279.001.patch
>
>
> YARN-6999 implemented additional output in case of a MapReduce class-not-found 
> error.
> During testing of 3.0.0-beta1-RC0 I found that it was committed with a typo 
> (an unnecessary space). Not a big deal, but currently it can't be 
> copy-pasted...
> I have uploaded the obvious fix.
> (BTW, I like the change in YARN-6999; it is much more user friendly!)






[jira] [Commented] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192485#comment-16192485
 ] 

Naganarasimha G R commented on YARN-7285:
-

Thanks [~jlowe], latest patch LGTM, will commit it shortly!


> ContainerExecutor always launches with priorities due to yarn-default property
> --
>
> Key: YARN-7285
> URL: https://issues.apache.org/jira/browse/YARN-7285
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Minor
> Attachments: YARN-7285.001.patch, YARN-7285.002.patch
>
>
> ContainerExecutor will launch containers with a specified priority if a 
> priority adjustment is specified, otherwise with the OS default priority if 
> it is unspecified.  YARN-3069 added 
> yarn.nodemanager.container-executor.os.sched.priority.adjustment to 
> yarn-default.xml, so it is always specified even if the user did not 
> explicitly set it.
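
In other words, once yarn-default.xml supplies a value for the property, the
"unspecified" branch can never be taken. A minimal sketch of that effect
(illustrative only, assuming a plain Configuration lookup; this is not the
actual ContainerExecutor code):
{code}
// Illustrative sketch: once yarn-default.xml defines the property,
// conf.get(...) never returns null, so the "no adjustment -> OS default
// priority" branch is effectively dead code.
String adjustment = conf.get(
    "yarn.nodemanager.container-executor.os.sched.priority.adjustment");
List<String> command = new ArrayList<>();
if (adjustment != null) {
  // e.g. prefix the container launch with a niceness adjustment
  command.addAll(Arrays.asList("nice", "-n", adjustment));
}
{code}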






[jira] [Updated] (YARN-7279) Fix typo in helper message of ContainerLauncher

2017-10-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7279:
--
Summary: Fix typo in helper message of ContainerLauncher  (was: Small typo 
in the helper message of ContainerLauncher.java)

> Fix typo in helper message of ContainerLauncher
> ---
>
> Key: YARN-7279
> URL: https://issues.apache.org/jira/browse/YARN-7279
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: YARN-7279.001.patch
>
>
> YARN-6999 implemented additional output in case of a MapReduce class-not-found 
> error.
> During testing of 3.0.0-beta1-RC0 I found that it was committed with a typo 
> (an unnecessary space). Not a big deal, but currently it can't be 
> copy-pasted...
> I have uploaded the obvious fix.
> (BTW, I like the change in YARN-6999; it is much more user friendly!)






[jira] [Resolved] (YARN-7287) Fix typo in ContainerLaunch

2017-10-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-7287.
---
Resolution: Duplicate

Marking as a duplicate of YARN-7279.

> Fix typo in ContainerLaunch
> ---
>
> Key: YARN-7287
> URL: https://issues.apache.org/jira/browse/YARN-7287
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Priority: Trivial
>
> Fix typo in ContainerLaunch.
> {code}
> .append("  mapreduce.reduce.e nv\n")
> {code}
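
The fix is just removing the stray space; the corrected helper line would
presumably read (a sketch of the intended output, not quoted from the committed
patch):
{code}
// corrected line: no space inside the property name
.append("  mapreduce.reduce.env\n")
{code}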






[jira] [Commented] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192473#comment-16192473
 ] 

Rohith Sharma K S commented on YARN-7289:
-

Hi [~miklos.szeg...@cloudera.com], for this test increasing the test timeout 
doesn't help. The test case should complete within 60 seconds; otherwise I 
suspect an issue in the code. Can you explain why you feel that increasing the 
timeout will fix the issue?

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch
>
>







[jira] [Commented] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192455#comment-16192455
 ] 

Hadoop QA commented on YARN-7290:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m 
25s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7290 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890474/YARN-7290.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 960219485c4d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e6e614e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17790/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17790/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: 

[jira] [Commented] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192449#comment-16192449
 ] 

Hadoop QA commented on YARN-7258:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 19 unchanged - 0 fixed = 21 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 24s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue 

[jira] [Commented] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192443#comment-16192443
 ] 

Hadoop QA commented on YARN-7258:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 20 unchanged - 0 fixed = 22 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 14s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | hadoop.yarn.client.api.impl.TestNMClient |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7258 |
| JIRA Patch 

[jira] [Updated] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated YARN-7290:
--
Attachment: YARN-7290.002.patch

Adding a new patch to make checkstyle happy. The tests in 
TestOpportunisticContainerAllocatorAMService all pass for me locally despite 
the failure in the last Jenkins run.

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
>Assignee: Steven Rand
> Attachments: YARN-7290.001.patch, YARN-7290.002.patch, 
> YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvingApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvingApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 
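
One way to picture the needed change (a rough sketch only, not the attached
patch; the extra parameter and its name are assumptions) is to thread the
resources of containers already tentatively selected on this node into the same
fair-share check, so several containers from one app are accounted for
cumulatively rather than in isolation:
{code}
// Rough sketch of the idea described above, not the actual patch.
boolean canContainerBePreempted(RMContainer container,
    Resource alreadySelectedOnNode) {
  Resource usageAfterPreemption = Resources.clone(getResourceUsage());
  // Subtract resources of containers already queued for preemption
  synchronized (preemptionVariablesLock) {
    Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
  }
  // Also subtract containers already chosen for this node in
  // FSPreemptionThread#identifyContainersToPreemptOnNode
  Resources.subtractFrom(usageAfterPreemption, alreadySelectedOnNode);
  // Finally subtract this container's own allocation
  Resources.subtractFrom(
      usageAfterPreemption, container.getAllocatedResource());
  return !isUsageBelowShare(usageAfterPreemption, getFairShare());
}
{code}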






[jira] [Commented] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192403#comment-16192403
 ] 

Hadoop QA commented on YARN-7290:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 4 unchanged - 0 fixed = 10 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7290 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890465/YARN-7290.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6146f6e5b4c9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17787/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17787/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| 

[jira] [Commented] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192401#comment-16192401
 ] 

Hadoop QA commented on YARN-7262:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  5s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 276 unchanged - 0 fixed = 285 total (was 276) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 44m 
41s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7262 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890461/YARN-7262.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 98004f956c5a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (YARN-5329) Placement Agent enhancements required to support recurring reservations in ReservationSystem

2017-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192397#comment-16192397
 ] 

Hudson commented on YARN-5329:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13029 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13029/])
YARN-5329. Placement Agent enhancements required to support recurring (subru: 
rev e6e614e380ed1d746973b50f666a9c40d272073e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/IterativePlanner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/StageAllocatorGreedyRLE.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/StageAllocatorGreedy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestInMemoryPlan.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/StageAllocator.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestReservationAgents.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/RLESparseResourceAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestGreedyReservationAgent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/BaseSharingPolicyTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/StageAllocatorLowCostAligned.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestCapacityOverTimePolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestAlignedPlanner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/PeriodicRLESparseResourceAllocation.java


> Placement Agent enhancements required to support recurring reservations in 
> ReservationSystem
> 
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch, 
> YARN-5329.v5.patch, YARN-5329.v6.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Updated] (YARN-5329) Placement Agent enhancements required to support recurring reservations in ReservationSystem

2017-10-04 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5329:
-
Target Version/s:   (was: 2.9.0, 3.1.0)

> Placement Agent enhancements required to support recurring reservations in 
> ReservationSystem
> 
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch, 
> YARN-5329.v5.patch, YARN-5329.v6.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Updated] (YARN-5329) Placement Agent enhancements required to support recurring reservations in ReservationSystem

2017-10-04 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5329:
-
Summary: Placement Agent enhancements required to support recurring 
reservations in ReservationSystem  (was: ReservationAgent enhancements required 
to support recurring reservations in the YARN ReservationSystem)

> Placement Agent enhancements required to support recurring reservations in 
> ReservationSystem
> 
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch, 
> YARN-5329.v5.patch, YARN-5329.v6.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.






[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192381#comment-16192381
 ] 

Hadoop QA commented on YARN-5329:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 46 unchanged - 10 fixed = 47 total (was 56) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-5329 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890462/YARN-5329.v6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 90b913ba5854 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17786/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Updated] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-10-04 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-7258:

Attachment: YARN-7258.003.patch

[~asuresh], reposting the patch, as I missed some commits in the earlier 
one; this version covers some more test cases.

> Add Node and Rack Hints to Opportunistic Scheduler
> --
>
> Key: YARN-7258
> URL: https://issues.apache.org/jira/browse/YARN-7258
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-7258.001.patch, YARN-7258.002.patch, 
> YARN-7258.003.patch
>
>
> Currently, the Opportunistic Scheduler ignores the node and rack information 
> and allocates strictly on the least loaded node (based on queue length) at 
> the time it receives the request. This JIRA is to track the changes needed to 
> allow the OpportunisticContainerAllocator to take the node/rack name as hints.
> The flow would be:
> # If the requested node is found among the top K least-loaded nodes, allocate 
> on that node.
> # Else, allocate on the least loaded node on the same rack from the top K 
> least-loaded nodes.
> # Else, allocate on the least loaded node.
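
A rough sketch of that three-step fallback (illustrative only; {{NodeOption}},
{{host()}} and {{rack()}} are made-up names, not the allocator's actual API):
{code}
// Illustrative sketch of the fallback order; topKLeastLoaded is assumed to be
// sorted with the least loaded node first.
NodeOption pick(List<NodeOption> topKLeastLoaded,
    String nodeHint, String rackHint) {
  for (NodeOption n : topKLeastLoaded) {
    if (n.host().equals(nodeHint)) {
      return n;                      // 1. requested node is among the top K
    }
  }
  for (NodeOption n : topKLeastLoaded) {
    if (n.rack().equals(rackHint)) {
      return n;                      // 2. least loaded node on the same rack
    }
  }
  return topKLeastLoaded.get(0);     // 3. fall back to the least loaded node
}
{code}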






[jira] [Updated] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-10-04 Thread kartheek muthyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kartheek muthyala updated YARN-7258:

Attachment: YARN-7258.002.patch

Thank you [~asuresh] for the comment. I am uploading the next version of the 
patch with the changes suggested.

> Add Node and Rack Hints to Opportunistic Scheduler
> --
>
> Key: YARN-7258
> URL: https://issues.apache.org/jira/browse/YARN-7258
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-7258.001.patch, YARN-7258.002.patch
>
>
> Currently, the Opportunistic Scheduler ignores the node and rack information 
> and allocates strictly on the least loaded node (based on queue length) at 
> the time it receives the request. This JIRA is to track the changes needed to 
> allow the OpportunisticContainerAllocator to take the node/rack name as hints.
> The flow would be:
> # If the requested node is found among the top K least-loaded nodes, allocate 
> on that node.
> # Else, allocate on the least loaded node on the same rack from the top K 
> least-loaded nodes.
> # Else, allocate on the least loaded node.






[jira] [Updated] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated YARN-7290:
--
Attachment: YARN-7290.001.patch

Added a patch which I _think_ fixes both issues. All tests in 
{{TestFairSchedulerPreemption}} pass for me locally, including the new one, but 
the details here are tricky.

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
>Assignee: Steven Rand
> Attachments: YARN-7290.001.patch, YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvingApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvingApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 






[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192326#comment-16192326
 ] 

Hadoop QA commented on YARN-7237:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  3s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 24 new + 185 unchanged - 1 fixed = 209 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
29s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7237 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Steven Rand (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192324#comment-16192324
 ] 

Steven Rand commented on YARN-7290:
---

An additional problem is that we call {{app.trackContainerForPreemption}} in 
{{preemptContainers}}, i.e. only after {{identifyContainersToPreempt}} has 
returned. Therefore, after we've finished one iteration of the loop over 
{{rr.getNumContainers()}}, we will have added some containers to 
{{containersToPreempt}}, but {{resourcesToBePreempted}} will not yet have been 
updated for any app. This allows subsequent calls to 
{{canContainerBePreempted}} in the same for loop to incorrectly return 
{{true}}: we've already decided to preempt some containers, but the apps 
aren't aware of it yet.
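
To illustrate the kind of bookkeeping that seems to be missing, here is a 
minimal sketch (not a proposed patch) of how 
FSPreemptionThread#identifyContainersToPreemptOnNode could keep a running 
total of what it has already selected on the node; the {{scheduler}} handle, 
the {{containersToCheck}} list, and the two-argument 
{{canContainerBePreempted}} overload are assumptions for illustration only:

{code}
// Sketch only: keep a node-local running total so that later checks in the same pass
// see the containers we have already decided to preempt.
Resource alreadyChosenOnNode = Resources.createResource(0, 0);
for (RMContainer container : containersToCheck) {
  FSAppAttempt app = scheduler.getSchedulerApp(container.getApplicationAttemptId());
  // Hypothetical overload that also subtracts alreadyChosenOnNode before the
  // fair-share comparison.
  if (app.canContainerBePreempted(container, alreadyChosenOnNode)) {
    preemptableContainers.add(container);
    Resources.addTo(alreadyChosenOnNode, container.getAllocatedResource());
  }
}
{code}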

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
>Assignee: Steven Rand
> Attachments: YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvedApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvedApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-04 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192308#comment-16192308
 ] 

Vlad Rozov commented on YARN-6457:
--

bq. Also, we're setting all ssl.* properties in ssl-server.xml.
My point is that the behavior can't be changed implicitly; it needs to be 
documented (in JIRA) and changed explicitly if necessary.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5329:
---
Attachment: YARN-5329.v6.patch

Fixing a test case generation issue (it rarely created an invalid input for 
the test; should be good now).

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch, 
> YARN-5329.v5.patch, YARN-5329.v6.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-10-04 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7262:

Target Version/s: 3.1.0  (was: 3.0.0)

> Add a hierarchy into the ZKRMStateStore for delegation token znodes to 
> prevent jute buffer overflow
> ---
>
> Key: YARN-7262
> URL: https://issues.apache.org/jira/browse/YARN-7262
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7262.001.patch, YARN-7262.002.patch
>
>
> We've seen users who are running into a problem where the RM is storing so 
> many delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those 
> znodes is larger than the jute buffer limit. This is fine during normal 
> operation, but becomes a problem on a failover because the RM will try to read 
> in all of the token znodes (i.e. call {{getChildren}} on the parent znode).  
> This is particularly bad because everything appears to be okay, but then if a 
> failover occurs you end up with no active RMs.
> There was a similar problem with the YARN application data that was fixed in 
> YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull 
> subchildren without overflowing the jute buffer (though it's off by default).
> We should add a hierarchy similar to that of YARN-2962, but for the 
> delegation token znodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-04 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192293#comment-16192293
 ] 

Robert Kanter commented on YARN-6457:
-

That didn't seem to be handling it.  Let me try it again and see what I find.

Also, we're setting all {{ssl.*}} properties in ssl-server.xml.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7262) Add a hierarchy into the ZKRMStateStore for delegation token znodes to prevent jute buffer overflow

2017-10-04 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7262:

Attachment: YARN-7262.002.patch

Thanks for the feedback [~templedf].  
I've had a chance to actually use it in a real cluster and everything looks 
good.

{quote}The new property and default should have javadocs{quote}
It is documented in yarn-default.xml, and most of the other properties in 
{{YarnConfiguration}} don't have Javadocs.

The 002 patch:
- I changed my {{null !=}} - that's what I get for copy-pasting existing code.
- Replaced all {{Assert.assertX}} with simply {{assertX}}
- Added messages to some assert statements
- Added tests for split index 2, 3, and 4.
- No longer stores {{token3}}
- {{initInternal}} now considers 0 a valid value.  I also fixed that for the 
app split index config.
- Made the "Unknown child node with name" message more descriptive, moved it to 
the debug level, and updated it to not erroneously complain about the "1", "2", 
"3", and "4" znodes.  I also made similar improvements to the analogous code 
used for app splitting.
- Updated {{loadDelegationTokenFromNode}} to use {{else}} instead of early 
{{return}}
- Introduced a new variable in {{getLeafZnodePath}} instead of reusing 
{{splitIdx}}
- Split the long line in {{RMStateStore}}

> Add a hierarchy into the ZKRMStateStore for delegation token znodes to 
> prevent jute buffer overflow
> ---
>
> Key: YARN-7262
> URL: https://issues.apache.org/jira/browse/YARN-7262
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7262.001.patch, YARN-7262.002.patch
>
>
> We've seen users who are running into a problem where the RM is storing so 
> many delegation tokens in the {{ZKRMStateStore}} that the _listing_ of those 
> znodes is larger than the jute buffer limit. This is fine during normal 
> operation, but becomes a problem on a failover because the RM will try to read 
> in all of the token znodes (i.e. call {{getChildren}} on the parent znode).  
> This is particularly bad because everything appears to be okay, but then if a 
> failover occurs you end up with no active RMs.
> There was a similar problem with the YARN application data that was fixed in 
> YARN-2962 by adding a (configurable) hierarchy of znodes so the RM could pull 
> subchildren without overflowing the jute buffer (though it's off by default).
> We should add a hierarchy similar to that of YARN-2962, but for the 
> delegation token znodes.
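
To make the hierarchy concrete, here is a minimal sketch of how a split index 
could map a token sequence number onto a parent/leaf znode pair, analogous to 
the YARN-2962 app-id split; the naming, padding, and {{tokensRootPath}} are 
assumptions for illustration and not necessarily the patch's actual layout:

{code}
// Sketch only: with splitIndex = 2 the last two digits become the leaf znode, so each
// parent holds at most 100 children instead of one huge flat listing under the root.
int splitIndex = 2;
String seq = String.valueOf(tokenSequenceNumber);              // e.g. "123456"
String parent = seq.length() > splitIndex
    ? seq.substring(0, seq.length() - splitIndex) : "0";       // "1234"
String leaf = seq.length() > splitIndex
    ? seq.substring(seq.length() - splitIndex) : seq;          // "56"
String znodePath = tokensRootPath + "/RMDelegationToken_" + parent + "/" + leaf;
// getChildren() on tokensRootPath now returns only the parent buckets, which keeps the
// response well under the jute buffer limit even with a very large number of tokens.
{code}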



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-04 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192262#comment-16192262
 ] 

Vlad Rozov commented on YARN-6457:
--

If the passed {{conf}} is not {{null}}, shouldn't the following code handle your 
case?
{code}
if (conf != null) {
  sslConf.addResource(conf);
}
{code}

How did HDFS HA + SSL + Hadoop credential store work before YARN-4562 was fixed?

The issue is that prior to this fix and YARN-4562, 
{{"ssl.server.truststore.location"}} and other properties that are specific to 
{{ssl-server.xml}} were ignored if set in {{yarn-site.xml}} and were loaded 
only from {{ssl-server.xml}}. Whether that was intentional behavior or a bug 
needs to be discussed. The behavior should not change simply as a side effect 
of this JIRA fix.
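
For reference, a minimal sketch of the precedence being discussed, reusing the 
snippet quoted above; {{customConf}} stands in for the caller-supplied 
configuration and is an assumption for illustration, and whether the overlay 
should win at all is exactly the open question here:

{code}
// Sketch only: resources added to a Configuration later override earlier ones, so the
// order in which ssl-server.xml and the caller-supplied conf are added decides which
// ssl.server.* values take effect.
Configuration sslConf = new Configuration(false);
sslConf.addResource("ssl-server.xml");   // cluster-wide SSL settings
if (customConf != null) {
  sslConf.addResource(customConf);       // per-app overrides win with this ordering
}
String truststore = sslConf.get("ssl.server.truststore.location");
{code}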


> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand reassigned YARN-7290:
-

Assignee: Steven Rand

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
>Assignee: Steven Rand
> Attachments: YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvedApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvedApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192234#comment-16192234
 ] 

Hadoop QA commented on YARN-6620:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 25s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 81 new + 479 unchanged - 24 fixed = 560 total (was 503) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
27s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 523 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
31s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6620 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-2960) Add documentation for the YARN shared cache

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192226#comment-16192226
 ] 

Hadoop QA commented on YARN-2960:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-2960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890452/YARN-2960-trunk-004.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 99eedfef41fc 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17784/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch, YARN-2960-trunk-002.patch, 
> YARN-2960-trunk-003.patch, YARN-2960-trunk-004.patch
>
>
> Add documentation around the architecture, APIs, and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192223#comment-16192223
 ] 

Hadoop QA commented on YARN-7285:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
48s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
36s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7285 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890450/YARN-7285.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 6053f31faeac 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (YARN-2960) Add documentation for the YARN shared cache

2017-10-04 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192221#comment-16192221
 ] 

Chris Trezzo commented on YARN-2960:


Thanks [~mingma]! I will commit to trunk, branch-3.0 and branch-2.

> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch, YARN-2960-trunk-002.patch, 
> YARN-2960-trunk-003.patch, YARN-2960-trunk-004.patch
>
>
> Add documentation around the architecture, APIs, and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2037) Add work preserving restart support for Unmanaged AMs

2017-10-04 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192218#comment-16192218
 ] 

Botong Huang commented on YARN-2037:


Thanks [~subru] and everyone else for the comments and review!

> Add work preserving restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-2037-branch-2.v1.patch, YARN-2037.v1.patch, 
> YARN-2037.v2.patch, YARN-2037.v3.patch, YARN-2037.v4.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2960) Add documentation for the YARN shared cache

2017-10-04 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192216#comment-16192216
 ] 

Ming Ma commented on YARN-2960:
---

+1

> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch, YARN-2960-trunk-002.patch, 
> YARN-2960-trunk-003.patch, YARN-2960-trunk-004.patch
>
>
> Add documentation around the architecture, APIs, and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-7290:
--

Assignee: (was: Steven Rand)

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
> Attachments: YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvedApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvedApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6541) Optimize PeriodicRLESparseResourceAllocation by auto-expanding its time period

2017-10-04 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6541:
-
Issue Type: Improvement  (was: Sub-task)
Parent: (was: YARN-5326)

> Optimize PeriodicRLESparseResourceAllocation by auto-expanding its time 
> period
> ---
>
> Key: YARN-6541
> URL: https://issues.apache.org/jira/browse/YARN-6541
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Subru Krishnan
>
> YARN-5531 adds {{PeriodicRLESparseResourceAllocation}}, which represents a 
> periodic allocation of resources. It seeds the period directly with the max 
> timePeriod, which results in storing multiple instances if the user-requested 
> periods are small, e.g. 24 instances of an hourly job if the timePeriod is 
> one day. We need the max to prevent an unbounded time period if a user 
> decides to input prime numbers, but we can auto-expand the period instead of 
> statically fixing it at the maximum.
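
A minimal sketch of the auto-expansion described above (assumed behavior, not 
existing code): grow the effective period to the least common multiple of the 
periods seen so far and use the configured maximum only as a cap:

{code}
// Sketch only: expanding to the LCM keeps an hourly job stored once with a one-hour
// period instead of 24 copies for a one-day maximum; the cap guards against prime
// or otherwise awkward user-supplied periods blowing up the LCM.
static long gcd(long a, long b) {
  return b == 0 ? a : gcd(b, a % b);
}

static long expandPeriod(long currentPeriod, long requestedPeriod, long maxPeriod) {
  long lcm = currentPeriod / gcd(currentPeriod, requestedPeriod) * requestedPeriod;
  return Math.min(lcm, maxPeriod);
}
{code}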



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192208#comment-16192208
 ] 

Hadoop QA commented on YARN-5329:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 45 unchanged - 10 fixed = 45 total (was 55) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-5329 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890444/YARN-5329.v5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 538df0543fac 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cae1c73 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17780/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17780/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Updated] (YARN-5328) Plan/ResourceAllocation data structure enhancements required to support recurring reservations in ReservationSystem

2017-10-04 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5328:
-
Fix Version/s: 3.0.0

> Plan/ResourceAllocation data structure enhancements required to support 
> recurring reservations in ReservationSystem
> ---
>
> Key: YARN-5328
> URL: https://issues.apache.org/jira/browse/YARN-5328
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-5328-branch-2-v9.patch, YARN-5328-v1.patch, 
> YARN-5328-v2.patch, YARN-5328-v3.patch, YARN-5328-v4.patch, 
> YARN-5328-v5.patch, YARN-5328-v6.patch, YARN-5328-v7.patch, 
> YARN-5328-v8.patch, YARN-5328-v9-branch-2.patch, YARN-5328-v9.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in InMemoryPlan to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5330) SharingPolicy enhancements required to support recurring reservations in ReservationSystem

2017-10-04 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5330:
-
Fix Version/s: 3.0.0

> SharingPolicy enhancements required to support recurring reservations in 
> ReservationSystem
> --
>
> Key: YARN-5330
> URL: https://issues.apache.org/jira/browse/YARN-5330
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-5330.v0.patch, YARN-5330.v1.patch, 
> YARN-5330.v2.patch, YARN-5330.v3.patch, YARN-5330.v4.patch, YARN-5330.v5.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in SharingPolicy to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated YARN-7290:
--
Attachment: YARN-7290-failing-test.patch

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
> Attachments: YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvedApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvedApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-7290:
--

Assignee: Steven Rand

> canContainerBePreempted can return true when it shouldn't
> -
>
> Key: YARN-7290
> URL: https://issues.apache.org/jira/browse/YARN-7290
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-beta1
>Reporter: Steven Rand
>Assignee: Steven Rand
> Attachments: YARN-7290-failing-test.patch
>
>
> In FSAppAttempt#canContainerBePreempted, we make sure that preempting the 
> given container would not put the app below its fair share:
> {code}
> // Check if the app's allocation will be over its fairshare even
> // after preempting this container
> Resource usageAfterPreemption = Resources.clone(getResourceUsage());
> // Subtract resources of containers already queued for preemption
> synchronized (preemptionVariablesLock) {
>   Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
> }
> // Subtract this container's allocation to compute usage after preemption
> Resources.subtractFrom(
> usageAfterPreemption, container.getAllocatedResource());
> return !isUsageBelowShare(usageAfterPreemption, getFairShare());
> {code}
> However, this only considers one container in isolation, and fails to 
> consider containers for the same app that we already added to 
> {{preemptableContainers}} in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have a 
> case where we preempt multiple containers from the same app, none of which by 
> itself puts the app below fair share, but which cumulatively do so.
> I've attached a patch with a test to show this behavior. The flow is:
> 1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
> all the resources (8g and 8vcores)
> 2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
> containers, each of which is 3g and 3vcores in size. At this point both 
> greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
> 3. For the first container requested by starvedApp, we (correctly) preempt 3 
> containers from greedyApp, each of which is 1g and 1vcore.
> 4. For the second container requested by starvedApp, we again (this time 
> incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below 
> its fair share, but happens anyway because all six times that we call 
> {{return !isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the 
> value of {{usageAfterPreemption}} is 7g and 7vcores (confirmed using 
> debugger).
> So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
> account for containers that we're already planning on preempting in 
> FSPreemptionThread#identifyContainersToPreemptOnNode. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7290) canContainerBePreempted can return true when it shouldn't

2017-10-04 Thread Steven Rand (JIRA)
Steven Rand created YARN-7290:
-

 Summary: canContainerBePreempted can return true when it shouldn't
 Key: YARN-7290
 URL: https://issues.apache.org/jira/browse/YARN-7290
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.0.0-beta1
Reporter: Steven Rand


In FSAppAttempt#canContainerBePreempted, we make sure that preempting the given 
container would not put the app below its fair share:

{code}
// Check if the app's allocation will be over its fairshare even
// after preempting this container
Resource usageAfterPreemption = Resources.clone(getResourceUsage());

// Subtract resources of containers already queued for preemption
synchronized (preemptionVariablesLock) {
  Resources.subtractFrom(usageAfterPreemption, resourcesToBePreempted);
}

// Subtract this container's allocation to compute usage after preemption
Resources.subtractFrom(
usageAfterPreemption, container.getAllocatedResource());
return !isUsageBelowShare(usageAfterPreemption, getFairShare());
{code}

However, this only considers one container in isolation, and fails to consider 
containers for the same app that we already added to {{preemptableContainers}} 
in FSPreemptionThread#identifyContainersToPreemptOnNode. Therefore we can have 
a case where we preempt multiple containers from the same app, none of which by 
itself puts the app below fair share, but which cumulatively do so.

I've attached a patch with a test to show this behavior. The flow is:

1. Initially greedyApp runs in {{root.preemptable.child-1}} and is allocated 
all the resources (8g and 8vcores)
2. Then starvingApp runs in {{root.preemptable.child-2}} and requests 2 
containers, each of which is 3g and 3vcores in size. At this point both 
greedyApp and starvingApp have a fair share of 4g (with DRF not in use).
3. For the first container requested by starvedApp, we (correctly) preempt 3 
containers from greedyApp, each of which is 1g and 1vcore.
4. For the second container requested by starvedApp, we again (this time 
incorrectly) preempt 3 containers from greedyApp. This puts greedyApp below its 
fair share, but happens anyway because all six times that we call {{return 
!isUsageBelowShare(usageAfterPreemption, getFairShare());}}, the value of 
{{usageAfterPreemption}} is 7g and 7vcores (confirmed using debugger).

So in addition to accounting for {{resourcesToBePreempted}}, we also need to 
account for containers that we're already planning on preempting in 
FSPreemptionThread#identifyContainersToPreemptOnNode. 
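
For illustration only, here is a minimal sketch of the cumulative accounting 
described above. The type and method names ({{MemVcores}}, {{canPreemptOneMore}}) 
are made up for the example; this is not the actual FSAppAttempt/FSPreemptionThread 
code.
{code:java}
import java.util.List;

public class PreemptionAccountingSketch {

  /** Hypothetical minimal stand-in for YARN's Resource (memory MB, vcores). */
  static final class MemVcores {
    long memory;
    int vcores;
    MemVcores(long memory, int vcores) { this.memory = memory; this.vcores = vcores; }
    void subtract(MemVcores other) { memory -= other.memory; vcores -= other.vcores; }
    boolean below(MemVcores share) { return memory < share.memory || vcores < share.vcores; }
  }

  /**
   * Decide whether one more container may be preempted from an app. Besides the
   * resources already queued for preemption globally, also subtract the containers
   * selected earlier in the current identification pass, which is the accounting
   * the description says is missing today.
   */
  static boolean canPreemptOneMore(MemVcores currentUsage, MemVcores fairShare,
      MemVcores resourcesToBePreempted, List<MemVcores> selectedThisPass,
      MemVcores candidate) {
    MemVcores usageAfter = new MemVcores(currentUsage.memory, currentUsage.vcores);
    usageAfter.subtract(resourcesToBePreempted);   // containers already queued for preemption
    for (MemVcores c : selectedThisPass) {         // containers picked earlier in this pass
      usageAfter.subtract(c);
    }
    usageAfter.subtract(candidate);                // the container under consideration
    return !usageAfter.below(fairShare);           // never push the app below its fair share
  }
}
{code}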



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2960) Add documentation for the YARN shared cache

2017-10-04 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-2960:
---
Attachment: YARN-2960-trunk-004.patch

Attached v4 to add italics around parameters in the setup instructions.

> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch, YARN-2960-trunk-002.patch, 
> YARN-2960-trunk-003.patch, YARN-2960-trunk-004.patch
>
>
> Add documentation around the architecture, api's and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7285:
-
Attachment: YARN-7285.002.patch

Updated the patch to remove the commented-out property value.

> ContainerExecutor always launches with priorities due to yarn-default property
> --
>
> Key: YARN-7285
> URL: https://issues.apache.org/jira/browse/YARN-7285
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Minor
> Attachments: YARN-7285.001.patch, YARN-7285.002.patch
>
>
> ContainerExecutor will launch containers with a specified priority if a 
> priority adjustment is specified, otherwise with the OS default priority if 
> it is unspecified.  YARN-3069 added 
> yarn.nodemanager.container-executor.os.sched.priority.adjustment to 
> yarn-default.xml, so it is always specified even if the user did not 
> explicitly set it.
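
As an aside, a minimal sketch of the intended "only adjust priority when explicitly 
configured" behavior. The property name comes from this issue; the helper method and 
the plain Map standing in for the NM configuration are assumptions for the example, 
not the actual ContainerExecutor code.
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SchedPrioritySketch {

  static final String ADJUSTMENT_KEY =
      "yarn.nodemanager.container-executor.os.sched.priority.adjustment";

  /**
   * Prepend "nice -n <adjustment>" only when the admin explicitly configured an
   * adjustment. If yarn-default.xml ships a value for the key, the check below
   * can no longer distinguish "unset" from "default", which is the reported bug.
   */
  static List<String> buildLaunchCommand(Map<String, String> conf, List<String> baseCommand) {
    List<String> command = new ArrayList<>();
    String adjustment = conf.get(ADJUSTMENT_KEY);   // null only if truly unset
    if (adjustment != null) {
      command.add("nice");
      command.add("-n");
      command.add(adjustment);
    }
    command.addAll(baseCommand);
    return command;
  }
}
{code}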



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-04 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192159#comment-16192159
 ] 

Robert Kanter commented on YARN-6457:
-

{quote}was not loadDefaults set to false prior to the patch as well?{quote}
It was.  However, it only called that code before when the passed-in 
Configuration was {{null}}.  Now it always does it.
{code:java}
if (sslConf == null) {  
   sslConf = new Configuration(false);
}
{code}
vs
{code:java}
Configuration sslConf = new Configuration(false);
{code}
The passed-in config in the code path I'm interested in is not {{null}}, so it 
actually did not create the new Configuration in the original version.
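
For reference, a small self-contained illustration of what {{loadDefaults=false}} 
means here (the resource and property names below are just examples): with 
{{new Configuration(false)}} the core-default.xml/core-site.xml defaults are never 
loaded, so only explicitly added resources such as ssl-server.xml contribute values.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class LoadDefaultsSketch {
  public static void main(String[] args) {
    // loadDefaults=false: core-default.xml / core-site.xml are NOT loaded,
    // so only resources added explicitly contribute properties.
    Configuration sslConf = new Configuration(false);
    sslConf.addResource("ssl-server.xml");

    // Resolves only against ssl-server.xml (and anything else addResource'd),
    // never against the cluster-wide defaults.
    System.out.println(sslConf.get("ssl.server.keystore.location"));
  }
}
{code}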

In any case, I tried using HDFS HA + SSL + Hadoop Credstore after reverting 
YARN-6457 (so the original code was used), and everything works fine.  So this 
JIRA definitely affects this use case.

{quote}In case you plan to change loadDefaults, please see my prior comments 
regarding "ssl.server.truststore.location"{quote}
Could you please clarify?  I looked back at the earlier comments and I'm still 
not understanding the issue.

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Attachment: YARN-6620.013.patch

Attached ver.13 patch, addressed javadoc warnings.

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch, 
> YARN-6620.006-WIP.patch, YARN-6620.007.patch, YARN-6620.008.patch, 
> YARN-6620.009.patch, YARN-6620.010.patch, YARN-6620.011.patch, 
> YARN-6620.012.patch, YARN-6620.013.patch
>
>
> This JIRA plans to add support for:
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups (Java side)
> 3) NM restart and recovery of allocated GPU devices
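
For context, a minimal sketch of how the cgroups v1 "devices" controller can be used 
for this kind of isolation. The cgroup path and the device major/minor numbers 
(195:1 is a typical NVIDIA GPU character device) are assumptions for the example; 
this is not the NodeManager implementation.
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class GpuCgroupSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical per-container devices cgroup created by the NM.
    Path containerCgroup =
        Paths.get("/sys/fs/cgroup/devices/hadoop-yarn/container_e01_0001_01_000002");

    // Deny read/write/mknod on one GPU character device for this container;
    // devices NOT denied here remain visible to the container.
    Files.write(containerCgroup.resolve("devices.deny"),
        "c 195:1 rwm\n".getBytes(StandardCharsets.UTF_8),
        StandardOpenOption.WRITE);
  }
}
{code}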



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192146#comment-16192146
 ] 

Hadoop QA commented on YARN-7202:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
33s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} root: The patch generated 0 new + 7 unchanged - 3 
fixed = 7 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Updated] (YARN-7237) Cleanup usages of ResourceProfiles

2017-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7237:
-
Attachment: YARN-7237.005.patch

Last patch accidentally included some unrelated file changes, uploaded ver.5 
patch.

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch, 
> YARN-7237.003.patch, YARN-7237.004.patch, YARN-7237.005.patch
>
>
> While doing tests, there are a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json when value >= 0, 
> which differs from the javadocs of {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior should be to overwrite only when value > 0. The 
> reason is that by default a resource value will be set to 0. For example, 
> assume we have a profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a 
> capability override (capability = new Resource(8)). The final result should 
> be (mem=8, vcore=5, res_1=7), instead of (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager now loads the minimum/maximum profile from the 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to read from 
> resource-types.xml and yarn-site.xml. These values will be added to profiles 
> so the client can get these configs.
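
To make the proposed rule concrete, a small sketch of the "override only when 
value > 0" merge, using plain maps in place of YARN's Resource/ProfileCapability 
types (the class, method, and resource names are illustrative only):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class ProfileOverrideSketch {

  /** Start from the profile and overwrite only resource values the override sets to > 0. */
  static Map<String, Long> merge(Map<String, Long> profile, Map<String, Long> override) {
    Map<String, Long> result = new HashMap<>(profile);
    override.forEach((name, value) -> {
      if (value != null && value > 0) {   // value <= 0 means "keep the profile's value"
        result.put(name, value);
      }
    });
    return result;
  }

  public static void main(String[] args) {
    Map<String, Long> profileA = Map.of("memory-mb", 3L, "vcores", 5L, "res_1", 7L);
    Map<String, Long> override = Map.of("memory-mb", 8L, "vcores", 0L, "res_1", 0L);
    // Expected per the description above: {memory-mb=8, vcores=5, res_1=7}
    System.out.println(merge(profileA, override));
  }
}
{code}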



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192120#comment-16192120
 ] 

Carlo Curino commented on YARN-5329:


Fixed checkstyle/whitespace. Unit test failure is unrelated.

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch, YARN-5329.v5.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5329:
---
Attachment: YARN-5329.v5.patch

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch, YARN-5329.v5.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7237) Cleanup usages of ResourceProfiles

2017-10-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192101#comment-16192101
 ] 

Wangda Tan commented on YARN-7237:
--

[~templedf], could you check the latest patch?

> Cleanup usages of ResourceProfiles
> --
>
> Key: YARN-7237
> URL: https://issues.apache.org/jira/browse/YARN-7237
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7237.001.patch, YARN-7237.002.patch, 
> YARN-7237.003.patch, YARN-7237.004.patch
>
>
> While doing tests, there are a couple of issues:
> 1) When using {{ProfileCapability#getProfileCapabilityOverride}}, it 
> overwrites whatever is specified in resource-profiles.json when value >= 0, 
> which differs from the javadocs of {{ProfileCapability}}:
> bq. For example, if you have a resource profile "small" that maps to <4096M, 
> 2 cores, 1 gpu> and you set the capability override to <8192M, 0 cores, 0 
> gpu>, then the actual resource allocation on the ResourceManager will be 
> <8192M, 2 cores, 1 gpu>
> To me, the correct behavior should be to overwrite only when value > 0. The 
> reason is that by default a resource value will be set to 0. For example, 
> assume we have a profile {{"a" = (mem=3, vcore=5, res_1=7)}} and create a 
> capability override (capability = new Resource(8)). The final result should 
> be (mem=8, vcore=5, res_1=7), instead of (mem=8, vcore=0, res_1=0).
> 2) ResourceProfileManager now loads the minimum/maximum profile from the 
> config file (resource-profiles.json). To me this is not correct, because the 
> minimum/maximum allocation for each resource type is already specified in 
> {{resource-types.xml}}. We should always use 
> {{ResourceUtils#getResourceTypesMinimum/MaximumAllocation}} to read from 
> resource-types.xml and yarn-site.xml. These values will be added to profiles 
> so the client can get these configs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192086#comment-16192086
 ] 

Hadoop QA commented on YARN-5329:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 45 unchanged - 10 fixed = 50 total (was 55) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-5329 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890430/YARN-5329.v4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 112427895c5a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17776/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/17776/artifact/patchprocess/whitespace-eol.txt

[jira] [Commented] (YARN-7044) TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails

2017-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192057#comment-16192057
 ] 

Hudson commented on YARN-7044:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13025 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13025/])
YARN-7044. (aajisaka: rev 2df1b2ac0509ba10fff606ada7e9b3562c12dd16)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java


> TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails
> -
>
> Key: YARN-7044
> URL: https://issues.apache.org/jira/browse/YARN-7044
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, test
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Akira Ajisaka
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7044.001.patch
>
>
> {code}
> Failed
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable
> Failing for the past 2 builds (Since Failed#16961 )
> Took 30 sec.
> Error Message
> test timed out after 30000 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 30000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable(TestContainerAllocation.java:330)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7122) TestReservationSystemInvariants and TestSLSRunner fail

2017-10-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved YARN-7122.
-
Resolution: Duplicate

> TestReservationSystemInvariants and TestSLSRunner fail
> --
>
> Key: YARN-7122
> URL: https://issues.apache.org/jira/browse/YARN-7122
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7122.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 12.528 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> testSimulatorRunning[Testing with: SYNTH, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler,
>  (nodeFile null)](org.apache.hadoop.yarn.sls.TestReservationSystemInvariants) 
>  Time elapsed: 8.845 sec  <<< FAILURE!
> java.lang.AssertionError: TestSLSRunner catched exception from child thread 
> (TaskRunner.Task): [java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException]
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.sls.BaseSLSRunnerTest.runSLS(BaseSLSRunnerTest.java:127)
>   at 
> org.apache.hadoop.yarn.sls.TestReservationSystemInvariants.testSimulatorRunning(TestReservationSystemInvariants.java:69)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7122) TestReservationSystemInvariants and TestSLSRunner fail

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192046#comment-16192046
 ] 

Hadoop QA commented on YARN-7122:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7122 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7122 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890438/YARN-7122.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17779/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestReservationSystemInvariants and TestSLSRunner fail
> --
>
> Key: YARN-7122
> URL: https://issues.apache.org/jira/browse/YARN-7122
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7122.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 12.528 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> testSimulatorRunning[Testing with: SYNTH, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler,
>  (nodeFile null)](org.apache.hadoop.yarn.sls.TestReservationSystemInvariants) 
>  Time elapsed: 8.845 sec  <<< FAILURE!
> java.lang.AssertionError: TestSLSRunner catched exception from child thread 
> (TaskRunner.Task): [java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException]
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.sls.BaseSLSRunnerTest.runSLS(BaseSLSRunnerTest.java:127)
>   at 
> org.apache.hadoop.yarn.sls.TestReservationSystemInvariants.testSimulatorRunning(TestReservationSystemInvariants.java:69)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7044) TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails

2017-10-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-7044:

Summary: 
TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails  
(was: TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails 
on trunk)

> TestContainerAllocation#testAMContainerAllocationWhenDNSUnavailable fails
> -
>
> Key: YARN-7044
> URL: https://issues.apache.org/jira/browse/YARN-7044
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, test
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Akira Ajisaka
> Attachments: YARN-7044.001.patch
>
>
> {code}
> Failed
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable
> Failing for the past 2 builds (Since Failed#16961 )
> Took 30 sec.
> Error Message
> test timed out after 30000 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 30000 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable(TestContainerAllocation.java:330)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7122) TestReservationSystemInvariants and TestSLSRunner fail

2017-10-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned YARN-7122:
---

Assignee: Akira Ajisaka
Target Version/s: 2.9.0, 2.8.2, 3.0.0

> TestReservationSystemInvariants and TestSLSRunner fail
> --
>
> Key: YARN-7122
> URL: https://issues.apache.org/jira/browse/YARN-7122
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7122.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 12.528 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> testSimulatorRunning[Testing with: SYNTH, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler,
>  (nodeFile null)](org.apache.hadoop.yarn.sls.TestReservationSystemInvariants) 
>  Time elapsed: 8.845 sec  <<< FAILURE!
> java.lang.AssertionError: TestSLSRunner catched exception from child thread 
> (TaskRunner.Task): [java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException]
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.sls.BaseSLSRunnerTest.runSLS(BaseSLSRunnerTest.java:127)
>   at 
> org.apache.hadoop.yarn.sls.TestReservationSystemInvariants.testSimulatorRunning(TestReservationSystemInvariants.java:69)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7122) TestReservationSystemInvariants and TestSLSRunner fail

2017-10-04 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-7122:

Attachment: YARN-7122.001.patch

After YARN-6640, AM throws InvalidApplicationMasterRequestException when 
request.responseId > lastResponseId. In AMSimulator, the response id of the 
request is wrongly initialized to 1, and this passed unintentionally before 
YARN-6640.

001 patch (a sketch of the intended bookkeeping follows below):
* Initialize the response id to 0
* Avoid incrementing the response id, to avoid overflow
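
A minimal sketch of one plausible way to satisfy the two bullets, assuming the id 
is taken from the last AllocateResponse rather than a local counter; the class and 
method names are illustrative, not the actual AMSimulator code.
{code:java}
public class AmResponseIdSketch {
  // The first allocate request must carry responseId = 0; initializing it to 1
  // trips the "responseId > lastResponseId" check added by YARN-6640.
  private int lastResponseId = 0;

  /** Id to put on the next allocate request: echo the last one seen, never increment locally. */
  int nextRequestResponseId() {
    return lastResponseId;
  }

  /** Remember the id the RM sent back, so the next request can never run ahead of it. */
  void onAllocateResponse(int responseIdFromRm) {
    lastResponseId = responseIdFromRm;
  }
}
{code}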

> TestReservationSystemInvariants and TestSLSRunner fail
> --
>
> Key: YARN-7122
> URL: https://issues.apache.org/jira/browse/YARN-7122
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
> Attachments: YARN-7122.001.patch
>
>
> {noformat}
> Running org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 12.528 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.sls.TestReservationSystemInvariants
> testSimulatorRunning[Testing with: SYNTH, 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler,
>  (nodeFile null)](org.apache.hadoop.yarn.sls.TestReservationSystemInvariants) 
>  Time elapsed: 8.845 sec  <<< FAILURE!
> java.lang.AssertionError: TestSLSRunner catched exception from child thread 
> (TaskRunner.Task): [java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException, 
> java.lang.reflect.UndeclaredThrowableException]
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.yarn.sls.BaseSLSRunnerTest.runSLS(BaseSLSRunnerTest.java:127)
>   at 
> org.apache.hadoop.yarn.sls.TestReservationSystemInvariants.testSimulatorRunning(TestReservationSystemInvariants.java:69)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2960) Add documentation for the YARN shared cache

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192010#comment-16192010
 ] 

Hadoop QA commented on YARN-2960:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-2960 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890426/YARN-2960-trunk-003.patch
 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 3cccb3181b23 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/1/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch, YARN-2960-trunk-002.patch, 
> YARN-2960-trunk-003.patch
>
>
> Add documentation around the architecture, api's and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-04 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192006#comment-16192006
 ] 

Vlad Rozov edited comment on YARN-6457 at 10/4/17 8:47 PM:
---

[~rkanter] I don't see how this JIRA affects your use case; was not 
{{loadDefaults}} set to {{false}} prior to the patch as well? In case you plan 
to change {{loadDefaults}}, please see my prior comments regarding 
{{"ssl.server.truststore.location"}}.


was (Author: vrozov):
[~rkanter] I don't see how this JIRA affects your use case, was not 
{{loadDefaults}} was set to {{false}} prior to the patch as well? In case you 
plan to change {{loadDefaults}}, please see my prior comments regarding 
{{"ssl.server.truststore.location"}}

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-10-04 Thread Vlad Rozov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16192006#comment-16192006
 ] 

Vlad Rozov commented on YARN-6457:
--

[~rkanter] I don't see how this JIRA affects your use case, was not 
{{loadDefaults}} was set to {{false}} prior to the patch as well? In case you 
plan to change {{loadDefaults}}, please see my prior comments regarding 
{{"ssl.server.truststore.location"}}

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191993#comment-16191993
 ] 

Miklos Szegedi edited comment on YARN-7289 at 10/4/17 8:35 PM:
---

The other issues are unrelated. This is just a timeout change.
The corresponding bugs are YARN-5652, YARN-6747


was (Author: miklos.szeg...@cloudera.com):
The other issues are unrelated. This is just a timeout change.
The first one is YARN-5652

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191993#comment-16191993
 ] 

Miklos Szegedi edited comment on YARN-7289 at 10/4/17 8:33 PM:
---

The other issues are unrelated. This is just a timeout change.
The first one is YARN-5652


was (Author: miklos.szeg...@cloudera.com):
The other issues should be unrelated. This is just a timeout change.

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191993#comment-16191993
 ] 

Miklos Szegedi commented on YARN-7289:
--

The other issues should be unrelated. This is just a timeout change.

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6747) TestFSAppStarvation.testPreemptionEnable fails intermittently

2017-10-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191988#comment-16191988
 ] 

Miklos Szegedi edited comment on YARN-6747 at 10/4/17 8:29 PM:
---

Both unit test issues are timeouts. In case of the former:
{code}
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
testAMContainerAllocationWhenDNSUnavailable(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation)
  Time elapsed: 29.72 sec  <<< ERROR!
java.lang.Exception: test timed out after 30000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable(TestContainerAllocation.java:331)
{code}



was (Author: miklos.szeg...@cloudera.com):
Both unit test issues are timeouts:
{code}
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
testAMContainerAllocationWhenDNSUnavailable(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation)
  Time elapsed: 29.72 sec  <<< ERROR!
java.lang.Exception: test timed out after 30000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable(TestContainerAllocation.java:331)
{code}


> TestFSAppStarvation.testPreemptionEnable fails intermittently
> -
>
> Key: YARN-6747
> URL: https://issues.apache.org/jira/browse/YARN-6747
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sunil G
>Assignee: Miklos Szegedi
> Attachments: YARN-6747.000.patch
>
>
> *Error Message*
> Apps re-added even before starvation delay passed expected:<4> but was:<3>
> *Stacktrace*
> java.lang.AssertionError: Apps re-added even before starvation delay passed 
> expected:<4> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation.testPreemptionEnabled(TestFSAppStarvation.java:117)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6747) TestFSAppStarvation.testPreemptionEnable fails intermittently

2017-10-04 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191988#comment-16191988
 ] 

Miklos Szegedi commented on YARN-6747:
--

Both unit test issues are timeouts:
{code}
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
testAMContainerAllocationWhenDNSUnavailable(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation)
  Time elapsed: 29.72 sec  <<< ERROR!
java.lang.Exception: test timed out after 30000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable(TestContainerAllocation.java:331)
{code}


> TestFSAppStarvation.testPreemptionEnable fails intermittently
> -
>
> Key: YARN-6747
> URL: https://issues.apache.org/jira/browse/YARN-6747
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sunil G
>Assignee: Miklos Szegedi
> Attachments: YARN-6747.000.patch
>
>
> *Error Message*
> Apps re-added even before starvation delay passed expected:<4> but was:<3>
> *Stacktrace*
> java.lang.AssertionError: Apps re-added even before starvation delay passed 
> expected:<4> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation.testPreemptionEnabled(TestFSAppStarvation.java:117)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7202) End-to-end UT for api-server

2017-10-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: (was: YARN-7202.yarn-native-services.007.patch)

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7202) End-to-end UT for api-server

2017-10-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.007.patch

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5329:
---
Attachment: YARN-5329.v4.patch

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch, YARN-5329.v4.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2017-10-04 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191960#comment-16191960
 ] 

Carlo Curino commented on YARN-5329:


I root-caused the issue that was emerging in {{TestRMWebserviceReservation}}: it was an off-by-one error in 
{{PeriodicRLEResourceAllocation.getRangeOverlapping()}}. I have also 
added/tightened a couple of tests elsewhere to cover more directly the issues 
that emerged. All tests pass locally; let's see what YETUS has to say.
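
For readers not familiar with that class, the kind of bug described here is the classic inclusive-versus-exclusive bound mix-up in an interval-overlap check. Below is a purely illustrative sketch (hypothetical names, not the actual PeriodicRLEResourceAllocation code):

{code}
/**
 * Purely illustrative: the off-by-one trap when checking whether a
 * half-open allocation interval [start, end) overlaps a query interval.
 */
public class RangeOverlapExample {

  /** Buggy variant: treats the exclusive end as inclusive, reporting spurious overlaps. */
  static boolean overlapsBuggy(long allocStart, long allocEnd, long queryStart, long queryEnd) {
    return allocStart <= queryEnd && queryStart <= allocEnd;
  }

  /** Fixed variant: half-open intervals overlap only when each starts before the other ends. */
  static boolean overlapsFixed(long allocStart, long allocEnd, long queryStart, long queryEnd) {
    return allocStart < queryEnd && queryStart < allocEnd;
  }

  public static void main(String[] args) {
    // Adjacent intervals [0, 10) and [10, 20) must not overlap.
    System.out.println(overlapsBuggy(0, 10, 10, 20));  // true  -> off by one
    System.out.println(overlapsFixed(0, 10, 10, 20));  // false -> correct
  }
}
{code}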

> ReservationAgent enhancements required to support recurring reservations in 
> the YARN ReservationSystem
> --
>
> Key: YARN-5329
> URL: https://issues.apache.org/jira/browse/YARN-5329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Carlo Curino
>Priority: Blocker
> Attachments: YARN-5329.v0.patch, YARN-5329.v1.patch, 
> YARN-5329.v2.patch, YARN-5329.v3.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes required 
> in ReservationAgent to accomplish it. Please refer to the design doc in the 
> parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7202) End-to-end UT for api-server

2017-10-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.007.patch

- Remove the last unused import.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191946#comment-16191946
 ] 

Hadoop QA commented on YARN-6620:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 81 new + 480 unchanged - 24 fixed = 561 total (was 504) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
26s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 523 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 103 unchanged - 0 fixed = 104 total (was 103) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
48s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
27s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Updated] (YARN-2960) Add documentation for the YARN shared cache

2017-10-04 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-2960:
---
Attachment: YARN-2960-trunk-003.patch

Thanks [~mingma]! Attached is a v3 patch. I put all of the configs in a 
markdown table. I did leave the config and setup sub-sections separate within 
the administration section, and I added a comment in the setup sub-section to 
reference the configs in the following section. I mainly wanted to keep the 
setup steps to the minimum required, whereas the config section is a reference 
for all of the configuration parameters that are available. Let me know if 
there is anything else. Thanks again!

> Add documentation for the YARN shared cache
> ---
>
> Key: YARN-2960
> URL: https://issues.apache.org/jira/browse/YARN-2960
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-2960-trunk-001.patch, YARN-2960-trunk-002.patch, 
> YARN-2960-trunk-003.patch
>
>
> Add documentation around the architecture, api's and administration of the 
> YARN shared cache.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7245) In Cap Sched UI, Max AM Resource column in Active Users Info section should be per-user

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191940#comment-16191940
 ] 

Hadoop QA commented on YARN-7245:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 47m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890408/YARN-7245.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 823c93088b4e 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17773/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17773/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191892#comment-16191892
 ] 

Hadoop QA commented on YARN-7289:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7289 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890403/YARN-7289.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5cc0fe79c55d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17772/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17772/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-7216) Missing ability to list configuration vs status

2017-10-04 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191886#comment-16191886
 ] 

Eric Yang commented on YARN-7216:
-

The current patch depends on YARN-7202 changes.

> Missing ability to list configuration vs status
> ---
>
> Key: YARN-7216
> URL: https://issues.apache.org/jira/browse/YARN-7216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7216.yarn-native-services.001.patch
>
>
> API Server has /ws/v1/services/{service_name}.  This REST end point returns a 
> Services object which contains both configuration and status.  When status or 
> macro-based parameters change in the Services object, it can confuse UI code 
> that is making configuration changes.  The suggestion is to preserve a copy of 
> the configuration object independent of the status object.  This gives the UI 
> the ability to change and update the service configuration.
> Similar to Ambari, it might provide better information if we have the 
> following separated REST end points:
> {code}
>  /ws/v1/services/[service_name]/spec
>  /ws/v1/services/[service_name]/status
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7216) Missing ability to list configuration vs status

2017-10-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7216:

Attachment: YARN-7216.yarn-native-services.001.patch

- Implemented SOLR as storage for Yarnfile.
- Added REST API to list configuration from SOLR.

To enable the ability to store the Yarnfile in SOLR, add the following 
properties to yarn-site.xml:

{code}
<property>
  <name>yarn.api-service.solr.storage.enabled</name>
  <value>true</value>
  <description>
    Flag to enable YARN Services for storing service configuration
    on solr.
  </description>
</property>

<property>
  <name>yarn.api-service.solr.url</name>
  <value>http://localhost:8983/solr/yarn</value>
  <description>
    URL to Solr server for storing YARN Service config.
  </description>
</property>
{code}

> Missing ability to list configuration vs status
> ---
>
> Key: YARN-7216
> URL: https://issues.apache.org/jira/browse/YARN-7216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7216.yarn-native-services.001.patch
>
>
> API Server has /ws/v1/services/{service_name}.  This REST end point returns a 
> Services object which contains both configuration and status.  When status or 
> macro-based parameters change in the Services object, it can confuse UI code 
> that is making configuration changes.  The suggestion is to preserve a copy of 
> the configuration object independent of the status object.  This gives the UI 
> the ability to change and update the service configuration.
> Similar to Ambari, it might provide better information if we have the 
> following separated REST end points:
> {code}
>  /ws/v1/services/[service_name]/spec
>  /ws/v1/services/[service_name]/status
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191855#comment-16191855
 ] 

Naganarasimha G R commented on YARN-7285:
-

[~jlowe], I think we can remove it as it's a simple configuration and hence can 
be left unstated!

> ContainerExecutor always launches with priorities due to yarn-default property
> --
>
> Key: YARN-7285
> URL: https://issues.apache.org/jira/browse/YARN-7285
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Minor
> Attachments: YARN-7285.001.patch
>
>
> ContainerExecutor will launch containers with a specified priority if a 
> priority adjustment is specified, otherwise with the OS default priority if 
> it is unspecified.  YARN-3069 added 
> yarn.nodemanager.container-executor.os.sched.priority.adjustment to 
> yarn-default.xml, so it is always specified even if the user did not 
> explicitly set it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7245) In Cap Sched UI, Max AM Resource column in Active Users Info section should be per-user

2017-10-04 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7245:
-
Attachment: Max AM Resource Per User -- Fixed.png

I attached {{YARN-7245.001.patch}} to address this.

I also attached a screenshot to show that the value in the {{Max AM Resource}} 
column matches the value in the {{Max Application Master Resources Per User}} field.

> In Cap Sched UI, Max AM Resource column in Active Users Info section should 
> be per-user
> ---
>
> Key: YARN-7245
> URL: https://issues.apache.org/jira/browse/YARN-7245
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: CapSched UI Showing Inaccurate Per User Max AM 
> Resource.png, Max AM Resource Per User -- Fixed.png, YARN-7245.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7286) Add support for docker to have no capabilities

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191837#comment-16191837
 ] 

Hadoop QA commented on YARN-7286:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  7s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
15s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7286 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890402/YARN-7286.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux db2778fd7cf3 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17771/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17771/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 

[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191835#comment-16191835
 ] 

Hadoop QA commented on YARN-7202:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
38s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
18s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 1 new + 7 unchanged - 
3 fixed = 8 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-6747) TestFSAppStarvation.testPreemptionEnable fails intermittently

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191805#comment-16191805
 ] 

Hadoop QA commented on YARN-6747:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6747 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890391/YARN-6747.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3d94c98886f9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17769/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17769/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Updated] (YARN-7224) Support GPU isolation for docker container

2017-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7224:
-
Attachment: YARN-7224.001.patch

Attached ver.1 patch on top of YARN-6620. Please feel free to share your 
thoughts! 

> Support GPU isolation for docker container
> --
>
> Key: YARN-7224
> URL: https://issues.apache.org/jira/browse/YARN-7224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7224.001.patch
>
>
> YARN-6620 added support of GPU isolation in NM side, which only supports 
> non-docker containers. We need to add support so that docker containers 
> launched by YARN can utilize GPUs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7224) Support GPU isolation for docker container

2017-10-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191803#comment-16191803
 ] 

Wangda Tan commented on YARN-7224:
--

YARN-6620 added support of GPU isolation in NM side, which only supports 
non-docker containers. We need to add support so that docker containers 
launched by YARN can utilize GPUs.

Currently, YARN launches docker containers inside cgroups. After 
YARN-6852/YARN-6620, we have finished the cgroup-based GPU isolation logic. This 
patch is to address the issues that arise when docker containers are being used.

There're several issues:
1. GPU driver and nvidia libraries: If GPU drivers and NV libraries are 
pre-packaged inside the docker image, they could conflict with the driver and 
nvidia libraries installed on the host OS. An alternative solution is to detect 
the host OS's installed drivers and devices and mount them when launching the 
docker container. Please refer to \[1\] for more details. 

2. Image detection: 
From \[2\], the challenge is: 
bq. Mounting user-level driver libraries and device files clobbers the 
environment of the container, it should be done only when the container is 
running a GPU application. The challenge here is to determine if a given image 
will be using the GPU or not. We should also prevent launching containers based 
on a Docker image that is incompatible with the host NVIDIA driver version, you 
can find more details on this wiki page.

3. GPU isolation:
We have already done this in YARN-6852/YARN-6620. 

*Proposal:*
My plan is to use nvidia-docker-plugin \[3\] to address #1; this is the same 
solution used by K8S \[4\]. #2 could be addressed in a separate JIRA.

We won't ship nvidia-docker-plugin with our releases, and we require the cluster 
admin to preinstall nvidia-docker-plugin to use GPU+docker support on YARN. 
"nvidia-docker" is a wrapper around the docker binary which can address #3 as 
well; however, "nvidia-docker" doesn't provide the same semantics as docker, and 
it needs additional environment setup such as PATH/LD_LIBRARY_PATH to use it. To 
avoid introducing additional issues, we plan to use the nvidia-docker-plugin + 
docker binary approach.

\[1\] https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-driver
\[2\] https://github.com/NVIDIA/nvidia-docker/wiki/Image-inspection
\[3\] https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin
\[4\] https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/
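
To make the plugin approach concrete, here is a rough, hypothetical sketch of how an NM-side component might ask a locally running nvidia-docker-plugin for the extra docker CLI arguments (volumes/devices) before launching a container. The endpoint path and port are assumptions taken from the plugin documentation linked above; this class is illustrative only and not part of any patch:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Hypothetical sketch: query a locally running nvidia-docker-plugin for the
 *  extra docker CLI arguments (volumes/devices). Endpoint and port are assumptions. */
public class NvidiaDockerPluginClient {

  static String fetchDockerCliArgs() throws Exception {
    URL url = new URL("http://localhost:3476/docker/cli"); // assumed plugin default address
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    StringBuilder sb = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        sb.append(line).append(' ');
      }
    }
    // Typically something like: --volume-driver=nvidia-docker
    //   --volume=nvidia_driver_<version>:/usr/local/nvidia:ro --device=/dev/nvidiactl ...
    return sb.toString().trim();
  }

  public static void main(String[] args) throws Exception {
    System.out.println("docker run " + fetchDockerCliArgs() + " <image> <command>");
  }
}
{code}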

> Support GPU isolation for docker container
> --
>
> Key: YARN-7224
> URL: https://issues.apache.org/jira/browse/YARN-7224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> YARN-6620 added support of GPU isolation in NM side, which only supports 
> non-docker containers. We need to add support so that docker containers 
> launched by YARN can utilize GPUs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7245) In Cap Sched UI, Max AM Resource column in Active Users Info section should be per-user

2017-10-04 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7245:
-
Attachment: YARN-7245.001.patch

> In Cap Sched UI, Max AM Resource column in Active Users Info section should 
> be per-user
> ---
>
> Key: YARN-7245
> URL: https://issues.apache.org/jira/browse/YARN-7245
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: CapSched UI Showing Inaccurate Per User Max AM 
> Resource.png, YARN-7245.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-10-04 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191763#comment-16191763
 ] 

Eric Badger commented on YARN-7246:
---

{noformat}
+static char* USER_DOCKER_BINARY_PATH = "/bin/docker";
{noformat}
A little nitpicky, but this var in the test should also be a const char*. Sorry 
for not seeing this on the previous review. Other than that, the patch looks 
good to me. 

> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch, 
> YARN-7246-branch-2.8.2.002.patch, YARN-7246-branch-2.8.2.003.patch, 
> YARN-7246-branch-2.8.2.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191762#comment-16191762
 ] 

Jason Lowe commented on YARN-7285:
--

I thought it best to leave an example value there so it's clear what format the 
property value is expected to be.  I can remove it if you feel it's better left 
unstated.
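
For context, a minimal hypothetical sketch of the behavior under discussion (not the actual ContainerExecutor code): the nice adjustment should only wrap the launch command when the admin explicitly configured the property, which is why shipping a value in yarn-default.xml effectively makes it unconditional.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;

/** Hypothetical sketch, not the actual ContainerExecutor code. */
public class SchedPriorityExample {
  static final String PRIORITY_KEY =
      "yarn.nodemanager.container-executor.os.sched.priority.adjustment";

  /** Prepend a nice adjustment only when the admin explicitly set the property. */
  static List<String> wrapWithNice(Configuration conf, List<String> command) {
    List<String> wrapped = new ArrayList<>();
    String adj = conf.get(PRIORITY_KEY); // null when the property is not set anywhere
    if (adj != null) {
      wrapped.addAll(Arrays.asList("nice", "-n", adj.trim()));
    }
    wrapped.addAll(command);
    return wrapped;
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false); // no default resources loaded here
    System.out.println(wrapWithNice(conf, Arrays.asList("bash", "launch_container.sh")));
    conf.set(PRIORITY_KEY, "10");
    System.out.println(wrapWithNice(conf, Arrays.asList("bash", "launch_container.sh")));
  }
}
{code}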

> ContainerExecutor always launches with priorities due to yarn-default property
> --
>
> Key: YARN-7285
> URL: https://issues.apache.org/jira/browse/YARN-7285
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Minor
> Attachments: YARN-7285.001.patch
>
>
> ContainerExecutor will launch containers with a specified priority if a 
> priority adjustment is specified, otherwise with the OS default priority if 
> it is unspecified.  YARN-3069 added 
> yarn.nodemanager.container-executor.os.sched.priority.adjustment to 
> yarn-default.xml, so it is always specified even if the user did not 
> explicitly set it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7224) Support GPU isolation for docker container

2017-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7224:
-
Description: YARN-6620 added support of GPU isolation in NM side, which 
only supports non-docker containers. We need to add support so that docker 
containers launched by YARN can utilize GPUs.

> Support GPU isolation for docker container
> --
>
> Key: YARN-7224
> URL: https://issues.apache.org/jira/browse/YARN-7224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> YARN-6620 added support of GPU isolation in NM side, which only supports 
> non-docker containers. We need to add support so that docker containers 
> launched by YARN can utilize GPUs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-10-04 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6620:
-
Attachment: YARN-6620.012.patch

Attached ver.012 patch. [~sunilg], could you please help check the updated patch?

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch, 
> YARN-6620.006-WIP.patch, YARN-6620.007.patch, YARN-6620.008.patch, 
> YARN-6620.009.patch, YARN-6620.010.patch, YARN-6620.011.patch, 
> YARN-6620.012.patch
>
>
> This JIRA plans to add support of: 
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups. (Java side).
> 3) NM restart and recovery of allocated GPU devices



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7289:
-
Attachment: YARN-7289.000.patch

> TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
> ---
>
> Key: YARN-7289
> URL: https://issues.apache.org/jira/browse/YARN-7289
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7289.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6620) [YARN-6223] NM Java side code changes to support isolate GPU devices by using CGroups

2017-10-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191735#comment-16191735
 ] 

Wangda Tan commented on YARN-6620:
--

Thanks [~sunilg], all good points.

bq. ResourceInformation.GPU_URI. I think this need not have to be hard coded. 
Other than memory-mb and vcores, all other resource names could be pulled from 
resource-types.xml. So is it fine to have the new resource names pulled from 
ResourceUtils itself?
I personally tend to keep all first-class supported resource type names 
hardcoded. Part of the reason is that, on the NM side, many components need to 
answer the question "what resource type name is associated with this resource 
plugin?". If we don't have a hardcoded resource type for GPU, we have to define 
a "gpu_resource_type_name=..." in yarn-site.xml, which I want to avoid and which 
I don't see any benefit in. I don't expect any admin to use the "yarn.io" 
namespace to define non-first-class supported resource types.

bq. In NM#serviceInit, ResourcePluginManager is created always. So do we need 
to have a null check in other places?
We have lots of UTs that use a mocked NMContext which has a null RPM. I found 
many unit test failures without this; if you really think we should update 
this, I can update the UTs as well.

bq. In GPU#preStart(Container container), if the requested container doesn't have 
any GPU demand, do we need to proceed further?
Yes, we still need to proceed. We need to blacklist all GPUs if a container 
doesn't have a GPU request. 

bq. In case, in future, an affinity constraint is coming for a given GPU device 
for a container, I guess we need a little more change to the GpuAllocation class. 
Could we have some dummy APIs defined now so that too much redesign is not 
needed later?
I would prefer to delay this. One of the reasons is that an additional affinity 
constraint might be handled by the RM instead of the NM. For example, an app 
requests 4 GPUs on the same host, and each host has 8 GPUs. Only the RM has the 
global picture of which host has the 4 closest GPUs. So far we don't have a 
solution for this problem yet. Once we want to support such use cases, there are 
many other places that need to be updated beyond GpuAllocation.

bq. In GPU#bootstrap, we are returning null. Is it correct?
Yes, we have already done the cgroups mounting, so we don't need to do any other 
ops for bootstrap.

bq. In ResourcePluginManager, we could avoid synchronized and keep the map as 
ConcurrentHashMap?
I personally prefer to continue this way: RPM, like (almost all) other NM 
components, doesn't have performance issues now or in the foreseeable future. 
Using a synchronized lock can enforce atomic semantics across all fields rather 
than the map only. Unless we see performance issues in these components, I 
prefer the simplest and most straightforward synchronized lock.
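
As a toy illustration of that trade-off (hypothetical names, not the actual ResourcePluginManager), a single synchronized lock keeps multiple fields consistent with each other, which a ConcurrentHashMap by itself would not guarantee:

{code}
import java.util.HashMap;
import java.util.Map;

/** Hypothetical illustration of the trade-off discussed above. */
public class PluginRegistry {
  private final Map<String, Object> plugins = new HashMap<>();
  private boolean initialized = false;

  // A single synchronized lock keeps the two fields consistent with each other;
  // a ConcurrentHashMap alone would only make individual map operations atomic.
  public synchronized void register(String resourceName, Object plugin) {
    if (initialized) {
      throw new IllegalStateException("cannot register plugins after initialization");
    }
    plugins.put(resourceName, plugin);
  }

  public synchronized void markInitialized() {
    initialized = true;
  }

  public synchronized Object getPlugin(String resourceName) {
    return plugins.get(resourceName);
  }
}
{code}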

bq. In GpuDiscover#initialize, along with file existence, we could also check 
the file permissions, owner, etc. to ensure that it is being accessed correctly.
We don't throw any exception during initialize, so it might not be useful to 
check this. I would prefer to let the script executor do this check when 
running commands.

bq. GpuDeviceInformationParser has too much XML dependency. Hadoop common has 
some XML parser, correct? Could we use that?
At the beginning I directly used an XML parser, similar to how FairScheduler 
parses config files. Devaraj suggested using JAXB, and I think it makes sense 
because we can reuse all these objects once we want to support querying GPU 
information from the web UI.

Addressed 3/4/9.

> [YARN-6223] NM Java side code changes to support isolate GPU devices by using 
> CGroups
> -
>
> Key: YARN-6620
> URL: https://issues.apache.org/jira/browse/YARN-6620
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-6620.001.patch, YARN-6620.002.patch, 
> YARN-6620.003.patch, YARN-6620.004.patch, YARN-6620.005.patch, 
> YARN-6620.006-WIP.patch, YARN-6620.007.patch, YARN-6620.008.patch, 
> YARN-6620.009.patch, YARN-6620.010.patch, YARN-6620.011.patch
>
>
> This JIRA plans to add support of: 
> 1) GPU configuration for NodeManagers
> 2) Isolation in CGroups. (Java side).
> 3) NM restart and recovery of allocated GPU devices



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7286) Add support for docker to have no capabilities

2017-10-04 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7286:
--
Attachment: YARN-7286.002.patch

Added a unit test

> Add support for docker to have no capabilities
> --
>
> Key: YARN-7286
> URL: https://issues.apache.org/jira/browse/YARN-7286
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-7286.001.patch, YARN-7286.002.patch
>
>
> Support for controlling capabilities was introduced in YARN-4258. However, it 
> does not allow for the capabilities list to be NULL, since {{getStrings()}} 
> will treat an empty value the same as it treats an unset property. So, a NULL 
> list will actually give the default capabilities list.
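
As a hedged sketch only (the property key and helper below are illustrative, 
and not the actual runtime code or the fix in the attached patches), one way to 
distinguish an unset property from an explicitly empty one, so that an empty 
capabilities list becomes expressible:

{code:java}
// Sketch with assumed names; not the actual DockerLinuxContainerRuntime code.
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

public class EmptyCapabilitiesSketch {

  static String[] readCapabilities(Configuration conf, String key, String... defaults) {
    String raw = conf.get(key);
    if (raw == null) {
      return defaults;                   // property absent -> fall back to defaults
    }
    if (raw.trim().isEmpty()) {
      return new String[0];              // explicitly empty -> no capabilities at all
    }
    return conf.getTrimmedStrings(key);  // normal comma-separated list
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    String key = "docker.capabilities";  // illustrative key, not the real property name
    conf.set(key, "");
    System.out.println(Arrays.toString(
        readCapabilities(conf, key, "CHOWN", "DAC_OVERRIDE")));
    // prints []: the empty setting is honoured instead of falling back to defaults
  }
}
{code}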



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7289) TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out

2017-10-04 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-7289:


 Summary: 
TestApplicationLifetimeMonitor.testApplicationLifetimeMonitor times out
 Key: YARN-7289
 URL: https://issues.apache.org/jira/browse/YARN-7289
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-10-04 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191708#comment-16191708
 ] 

Shane Kumpf commented on YARN-7246:
---

[~ebadger] - thank you for the reviews! I've attached a new patch to address 
your comments. The test failures appear to be unrelated. Let me know if you 
have additional suggestions.

> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch, 
> YARN-7246-branch-2.8.2.002.patch, YARN-7246-branch-2.8.2.003.patch, 
> YARN-7246-branch-2.8.2.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-10-04 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191701#comment-16191701
 ] 

Arun Suresh commented on YARN-7258:
---

Thanks for the patch [~kartheek]
Some comments:
* It looks generally good. We might need to add some more tests to 
differentiate between requests with numcontainers > 1 and those with 
numcontainers == 1.
* Also, it looks like we might hit a ConcurrentModificationException when we 
remove the scheduler keys from the outstanding opportunistic requests (a small 
illustration of this pitfall follows below).
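
A minimal illustration of that pitfall, with hypothetical names rather than the 
actual allocator code: removing map entries while iterating over the map can 
throw ConcurrentModificationException, while removing through the iterator is 
safe.

{code:java}
// Sketch only; names are hypothetical, not the actual scheduler-key structures.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class SchedulerKeyRemovalSketch {
  public static void main(String[] args) {
    Map<String, Integer> outstanding = new HashMap<>();
    outstanding.put("key-1", 2);
    outstanding.put("key-2", 0);

    // Unsafe: structural modification during iteration can throw
    // ConcurrentModificationException.
    // for (String key : outstanding.keySet()) {
    //   if (outstanding.get(key) == 0) {
    //     outstanding.remove(key);
    //   }
    // }

    // Safe: remove via the iterator instead.
    Iterator<Map.Entry<String, Integer>> it = outstanding.entrySet().iterator();
    while (it.hasNext()) {
      if (it.next().getValue() == 0) {
        it.remove();
      }
    }
    System.out.println(outstanding);   // {key-1=2}
  }
}
{code}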

> Add Node and Rack Hints to Opportunistic Scheduler
> --
>
> Key: YARN-7258
> URL: https://issues.apache.org/jira/browse/YARN-7258
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Attachments: YARN-7258.001.patch
>
>
> Currently, the Opportunistic Scheduler ignores the node and rack information 
> and allocates strictly on the least loaded node (based on queue length) at 
> the time it receives the request. This JIRA is to track the changes needed to 
> allow the OpportunisticContainerAllocator to take the node/rack name as hints.
> The flow would be:
> # If the requested node is found in the top K least loaded nodes, allocate on 
> that node.
> # Else, allocate on the least loaded node on the same rack from the top K 
> least loaded nodes.
> # Else, allocate on the least loaded node.
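
A rough sketch of the proposed three-step flow above; the method and variable 
names are illustrative and not taken from the actual 
OpportunisticContainerAllocator.

{code:java}
// Sketch only; illustrative names, not the real allocator.
import java.util.Arrays;
import java.util.List;

public class OpportunisticPlacementSketch {

  static String pickNode(List<String> leastLoadedNodes,   // top K, least loaded first
                         String requestedNode,
                         String requestedRack) {
    // 1. Requested node is among the top K least loaded nodes -> use it.
    if (requestedNode != null && leastLoadedNodes.contains(requestedNode)) {
      return requestedNode;
    }
    // 2. Otherwise, least loaded node from the same rack within the top K.
    if (requestedRack != null) {
      for (String node : leastLoadedNodes) {
        if (requestedRack.equals(rackOf(node))) {
          return node;
        }
      }
    }
    // 3. Otherwise, fall back to the globally least loaded node.
    return leastLoadedNodes.isEmpty() ? null : leastLoadedNodes.get(0);
  }

  // Placeholder rack resolver; a real allocator would consult topology information.
  private static String rackOf(String node) {
    return "/default-rack";
  }

  public static void main(String[] args) {
    System.out.println(pickNode(
        Arrays.asList("node-3", "node-7"), "node-7", "/default-rack"));
    // prints node-7: the requested node is within the top K least loaded nodes
  }
}
{code}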



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7286) Add support for docker to have no capabilities

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191616#comment-16191616
 ] 

Hadoop QA commented on YARN-7286:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
6s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7286 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890385/YARN-7286.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b9ce8ddda929 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17768/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17768/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for docker to have no capabilities
> 

[jira] [Commented] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191591#comment-16191591
 ] 

Naganarasimha G R commented on YARN-7285:
-

Hi [~jlowe], overall the patch LGTM. Just one question: do we need to comment 
out the value, or shall we remove it altogether?

> ContainerExecutor always launches with priorities due to yarn-default property
> --
>
> Key: YARN-7285
> URL: https://issues.apache.org/jira/browse/YARN-7285
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Minor
> Attachments: YARN-7285.001.patch
>
>
> ContainerExecutor will launch containers with a specified priority if a 
> priority adjustment is specified, otherwise with the OS default priority if 
> it is unspecified.  YARN-3069 added 
> yarn.nodemanager.container-executor.os.sched.priority.adjustment to 
> yarn-default.xml, so it is always specified even if the user did not 
> explicitly set it.
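
A hedged sketch of the mechanism being described (assumed method names and 
value, not the actual ContainerExecutor code): once yarn-default.xml supplies a 
value for the adjustment key, the "unspecified" branch can never be taken and 
every container launch gets a priority adjustment.

{code:java}
// Sketch only; assumed names and mechanism, not the actual ContainerExecutor code.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

public class LaunchPrioritySketch {
  static final String ADJUSTMENT_KEY =
      "yarn.nodemanager.container-executor.os.sched.priority.adjustment";

  static List<String> buildCommand(Configuration conf, List<String> containerCmd) {
    List<String> cmd = new ArrayList<>();
    String adjustment = conf.get(ADJUSTMENT_KEY);
    if (adjustment != null) {
      // Priority configured (explicitly, or implicitly via yarn-default.xml).
      cmd.add("nice");
      cmd.add("-n");
      cmd.add(adjustment);
    }
    // Otherwise the container would inherit the OS default priority.
    cmd.addAll(containerCmd);
    return cmd;
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set(ADJUSTMENT_KEY, "0");   // yarn-default.xml always supplying a value (assumed "0")
    System.out.println(buildCommand(conf, Arrays.asList("bash", "launch_container.sh")));
    // prints [nice, -n, 0, bash, launch_container.sh]: the adjustment is always applied
  }
}
{code}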



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6747) TestFSAppStarvation.testPreemptionEnable fails intermittently

2017-10-04 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6747:
-
Attachment: YARN-6747.000.patch

> TestFSAppStarvation.testPreemptionEnable fails intermittently
> -
>
> Key: YARN-6747
> URL: https://issues.apache.org/jira/browse/YARN-6747
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sunil G
>Assignee: Miklos Szegedi
> Attachments: YARN-6747.000.patch
>
>
> *Error Message*
> Apps re-added even before starvation delay passed expected:<4> but was:<3>
> *Stacktrace*
> {code}
> java.lang.AssertionError: Apps re-added even before starvation delay passed 
> expected:<4> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation.testPreemptionEnabled(TestFSAppStarvation.java:117)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7285) ContainerExecutor always launches with priorities due to yarn-default property

2017-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191568#comment-16191568
 ] 

Hadoop QA commented on YARN-7285:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 43s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 39s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7285 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890219/YARN-7285.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux e8d95145fd34 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 20e9ce3 |
| Default Java | 1.8.0_144 |
| 

[jira] [Updated] (YARN-7202) End-to-end UT for api-server

2017-10-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.006.patch

Fixed checkstyle errors.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


