[jira] [Updated] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-28 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6871:
--
Attachment: YARN-6871-branch-2.v2.patch

> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.009.patch, 
> YARN-6871-branch-2.v1.patch, YARN-6871-branch-2.v2.patch, 
> YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the trimmed result faster and spend fewer compute 
> cycles creating the report, improving both RM and client performance.
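
For illustration, a caller could then drop the heavy {{ResourceRequest}} 
payload with a request like the one below (a sketch: the {{resourceRequests}} 
value comes from YARN-6280; host, port, and app id are placeholders):
{code}
GET http://<rm-host>:8088/ws/v1/cluster/apps/<app-id>?deSelects=resourceRequests
{code}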



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7272) Enable timeline collector fault tolerance

2017-09-28 Thread Vrushali C (JIRA)
Vrushali C created YARN-7272:


 Summary: Enable timeline collector fault tolerance
 Key: YARN-7272
 URL: https://issues.apache.org/jira/browse/YARN-7272
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C



If an NM goes down, taking with it the timeline collector aux service for a 
running yarn app, we would like that yarn app to re-establish its connection 
with a new timeline collector. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7271) Add a yarn application cost calculation framework in TimelineService v2

2017-09-28 Thread Vrushali C (JIRA)
Vrushali C created YARN-7271:


 Summary: Add a yarn application cost calculation framework in 
TimelineService v2
 Key: YARN-7271
 URL: https://issues.apache.org/jira/browse/YARN-7271
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C



Timeline Service v2 captures information about a yarn application. From this 
info, we would like to calculate the "cost" of a yarn application. This would 
be rolled up to the flow level as well (and to the user and queue levels 
eventually).

We need a way to accept machine cost (TCO per day) and enable this calculation. 
This will help in chargeback for yarn apps. 
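
As a sketch of the arithmetic (the names and numbers below are illustrative, 
not from any patch): charge an app for the fraction of a machine-day its 
containers held, priced at the machine's TCO per day.
{code}
public class AppCostSketch {
  public static void main(String[] args) {
    // e.g. the app held 8 GB of memory for one full day
    long memorySeconds = 86_400L * 8 * 1024;
    long machineMemoryMb = 256 * 1024;   // a 256 GB machine
    double tcoPerDay = 12.0;             // machine TCO in $/day
    // cost = TCO/day * (memory-seconds used / memory-seconds available per day)
    double cost = tcoPerDay * memorySeconds / (86_400.0 * machineMemoryMb);
    System.out.printf("app cost: $%.2f%n", cost);  // prints: app cost: $0.38
  }
}
{code}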



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185263#comment-16185263
 ] 

Hadoop QA commented on YARN-2162:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 346 unchanged - 6 fixed = 346 total (was 352) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
17s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | 

[jira] [Commented] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185261#comment-16185261
 ] 

Hadoop QA commented on YARN-7260:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | YARN-7260 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889289/YARN-7260-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux d9ca52df805e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / cba1891 |
| Default Java | 1.7.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17696/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17696/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL Tests run: 1, Failures: 0, 
> Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
> org.apache.hadoop.yarn.api.records.TestURL Running 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields Tests run: 4, 
> Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in 
> 

[jira] [Commented] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185253#comment-16185253
 ] 

Rohith Sharma K S commented on YARN-7260:
-

+1 for the patch! Jenkins didn't run because the findbugs download and tar 
extraction failed. I have manually triggered it to run again.

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL Tests run: 1, Failures: 0, 
> Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
> org.apache.hadoop.yarn.api.records.TestURL Running 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields Tests run: 4, 
> Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields 
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)
>  Time elapsed: 0.296 sec <<< FAILURE! java.lang.AssertionError: 
> yarn-default.xml has 1 properties missing in class 
> org.apache.hadoop.yarn.conf.YarnConfiguration at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
>  
> {code}
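
The natural fix is to document the property in yarn-default.xml, along the 
lines of the snippet below (the description text and default value are 
assumptions for illustration, not taken from the patch):
{code}
<property>
  <description>Maximum number of entries kept in the Router's
  request-interceptor pipeline cache.</description>
  <name>yarn.router.pipeline.cache-max-size</name>
  <value>25</value>
</property>
{code}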



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185250#comment-16185250
 ] 

Arun Suresh commented on YARN-7248:
---

Thanks for the review and commit [~jlowe].

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.
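
A hedged sketch of the kind of compatibility shim this calls for (method and 
parameter names are illustrative, not the actual patch): downgrade SCHEDULED to 
a state an older client understands before returning the status.
{code}
import org.apache.hadoop.yarn.api.records.ContainerState;

public class ContainerStateCompat {
  // Pre-YARN-4597 clients saw localizing containers reported as RUNNING,
  // so report RUNNING to them instead of the new SCHEDULED state.
  static ContainerState toClientState(ContainerState actual,
      boolean clientKnowsScheduled) {
    if (!clientKnowsScheduled && actual == ContainerState.SCHEDULED) {
      return ContainerState.RUNNING;
    }
    return actual;
  }
}
{code}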



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185213#comment-16185213
 ] 

Hadoop QA commented on YARN-7269:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:
 The patch generated 12 new + 23 unchanged - 0 fixed = 35 total (was 23) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7269 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889623/YARN-7269.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 89980e2d99f5 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d3b1c63 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17695/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17695/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
|
| Console output | 

[jira] [Updated] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7269:
-
Attachment: YARN-7269.001.patch

Attached ver.1 patch.

[~yufeigu] / [~rkanter], could you help review the patch? 

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
> Attachments: YARN-7269.001.patch
>
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives the following exception:
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-09-28 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185150#comment-16185150
 ] 

Yufei Gu commented on YARN-2162:


Thanks, [~templedf]. Uploaded v7 to address your comments. I'll run the 
performance test soon. The unit test failures are unrelated to this patch; 
they are related to YARN-7207.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch, YARN-2162.004.patch, YARN-2162.005.patch, 
> YARN-2162.006.patch, YARN-2162.007.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed as 
> absolute numbers: X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties as a percentage of cluster capacity. 
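
For illustration, the difference in fair-scheduler.xml might look like the 
sketch below (the absolute form follows the existing syntax; the percentage 
syntax shown is an assumption about the final form, not a confirmed one):
{code}
<!-- absolute form: must be re-tuned whenever the cluster is resized -->
<queue name="analytics">
  <maxResources>40960 mb, 20 vcores</maxResources>
</queue>

<!-- percentage form: scales automatically with cluster capacity -->
<queue name="analytics">
  <maxResources>50.0% memory, 50.0% cpu</maxResources>
</queue>
{code}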



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-09-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-2162:
---
Attachment: YARN-2162.007.patch

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch, YARN-2162.004.patch, YARN-2162.005.patch, 
> YARN-2162.006.patch, YARN-2162.007.patch
>
>
> minResources and maxResources in fair scheduler configs are expressed as 
> absolute numbers: X mb, Y vcores. 
> As a result, when we expand or shrink our hadoop cluster, we need to 
> recalculate and change minResources/maxResources accordingly, which is pretty 
> inconvenient.
> We can circumvent this problem if we can optionally configure these 
> properties as a percentage of cluster capacity. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185140#comment-16185140
 ] 

Wangda Tan commented on YARN-7269:
--

Thanks [~ssath...@hortonworks.com] for helping report and reproduce the 
issue. 

After YARN-6225 this still doesn't work. The root cause is inside 
{{AmIpFilter#findRedirectUrl}}, which calls
{code}
YarnConfiguration conf = new YarnConfiguration();
{code} 

This basically requires that yarn-site.xml exist on the {{$CLASSPATH}} of the 
mapreduce AM, which may break rolling upgrade.

[~yufeigu] / [~rkanter], is this true for your internal deployment? 

To me, the correct way is to set the necessary parameters on the 
{{FilterContainer}} via {{AmFilterInitializer}}, which is loaded from the 
JobConf:
{code}
MRAppMaster:  

JobConf conf = new JobConf(new YarnConfiguration());
conf.addResource(new Path(MRJobConfig.JOB_CONF_FILE));
MRWebAppUtil.initialize(conf);
{code}

Thanks [~vinodkv] / [~djp] for helping troubleshoot the issue.
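
To make that direction concrete, a hedged sketch (init-parameter names and 
surrounding code are assumptions for illustration, not the actual patch): have 
the initializer hand the proxy addresses to the filter via {{FilterContainer}} 
init params, so the filter never needs to construct a {{YarnConfiguration}} of 
its own.
{code}
// Inside a hypothetical AmFilterInitializer#initFilter(container, conf):
Map<String, String> params = new HashMap<>();
// proxyHosts/proxyUriBases would be derived from the JobConf-backed conf
params.put("PROXY_HOSTS", String.join(",", proxyHosts));
params.put("PROXY_URI_BASES", String.join(",", proxyUriBases));
container.addFilter("AM_PROXY_FILTER", AmIpFilter.class.getName(), params);
{code}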

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives the following exception:
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-09-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7270:
---
Description: 
Class {{Resource}} has three subclasses (FixedValueResource, 
LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
long-to-int casting nicely; the other two do not. This bug was introduced by 
the resource types feature and causes several unit test failures. For example:
{code}
Error Message

expected:<> but was:<>
Stacktrace

java.lang.AssertionError: expected:<> but 
was:<>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
{code}

  was:
Class {{Resource}} has three subclasses (FixedValueResource, 
LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
long-to-int casting nicely; the other two do not. This causes several unit test 
failures, like this one:
{code}
Error Message

expected:<> but was:<>
Stacktrace

java.lang.AssertionError: expected:<> but 
was:<>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
{code}


> Resource#getVirtualCores() does unsafe casting from long to int.
> 
>
> Key: YARN-7270
> URL: https://issues.apache.org/jira/browse/YARN-7270
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Class {{Resource}} has three subclasses (FixedValueResource, 
> LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
> long-to-int casting nicely; the other two do not. This bug was introduced by 
> the resource types feature and causes several unit test failures. For example:
> {code}
> Error Message
> expected:<> but was:<>
> Stacktrace
> java.lang.AssertionError: expected:<> but 
> was:<>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
> {code}
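
One illustrative fix shape (a sketch, not necessarily the patch's 
implementation) is a saturating narrow instead of a blind {{(int)}} cast:
{code}
// Clamp to Integer.MAX_VALUE rather than silently truncating/overflowing.
static int castToIntSafely(long value) {
  if (value > Integer.MAX_VALUE) {
    return Integer.MAX_VALUE;
  }
  return (int) value;
}
{code}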



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-09-28 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-7270:
--

 Summary: Resource#getVirtualCores() does unsafe casting from long 
to int.
 Key: YARN-7270
 URL: https://issues.apache.org/jira/browse/YARN-7270
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.1.0
Reporter: Yufei Gu
Assignee: Yufei Gu


Class {{Resource}} has three subclasses (FixedValueResource, 
LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
long-to-int casting nicely; the other two do not. This causes several unit test 
failures, like this one:
{code}
Error Message

expected:<> but was:<>
Stacktrace

java.lang.AssertionError: expected:<> but 
was:<>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185134#comment-16185134
 ] 

Hudson commented on YARN-6623:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12994 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12994/])
YARN-6623. Add support to turn off launching privileged containers in (wangda: 
rev d3b1c6319546706c41a2011ead6c3fe208883200)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerStopCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRmCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerRunCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerLoadCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerInspectCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.h
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/resources/test/test-configurations/docker-container-executor.cfg
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerRmCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerLoadCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/string-utils.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
* (edit) 

[jira] [Assigned] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-7269:


Assignee: Wangda Tan  (was: Tan, Wangda)

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Critical
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives the following exception:
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185131#comment-16185131
 ] 

Wangda Tan commented on YARN-6623:
--

[~andrew.wang], I just saw your email about the branch-3.0.0-beta1 cut. Is it 
still open for new commits? Can we merge this into branch-3.0.0-beta1? 

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
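
For illustration, the flag in container-executor.cfg might look like the 
snippet below (key names are assumptions based on the docker section layout, 
not quoted from the patch):
{code}
[docker]
  module.enabled=true
  # refuse to launch privileged containers regardless of NM settings
  docker.privileged-containers.enabled=false
{code}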



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-7269:


Assignee: Tan, Wangda  (was: Wangda Tan)

> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications
> ---
>
> Key: YARN-7269
> URL: https://issues.apache.org/jira/browse/YARN-7269
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Critical
>
> Tracking URL in the app state does not get redirected to ApplicationMaster 
> for Running applications. It gives the following exception:
> {code}
>  org.mortbay.log: /ws/v1/mapreduce/info
> javax.servlet.ServletException: Could not determine the proxy server for 
> redirection
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
>   at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>   at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>   at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>   at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>   at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>   at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>   at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>   at org.mortbay.jetty.Server.handle(Server.java:326)
>   at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>   at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>   at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>   at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>   at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>   at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>   at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185129#comment-16185129
 ] 

Wangda Tan commented on YARN-6623:
--

Discussed with [~eyang] / [~vvasudev] / [~shaneku...@gmail.com]; [~eyang] 
mentioned that we can move the whitelist usability issue to a separate JIRA to 
unblock this one.

So I just pushed to trunk/branch-3.0; it has some conflicts with branch-2.

Thanks to [~vvasudev] for the hard work and thanks 
[~miklos.szeg...@cloudera.com] / [~ebadger] / [~shaneku...@gmail.com] / 
[~eyang] / [~templedf] / [~djp] / [~andrew.wang] for reviews and suggestions!

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7269) Tracking URL in the app state does not get redirected to ApplicationMaster for Running applications

2017-09-28 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-7269:


 Summary: Tracking URL in the app state does not get redirected to 
ApplicationMaster for Running applications
 Key: YARN-7269
 URL: https://issues.apache.org/jira/browse/YARN-7269
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Tan, Wangda
Priority: Critical


Tracking URL in the app state does not get redirected to ApplicationMaster for 
Running applications. It gives the following exception:
{code}
 org.mortbay.log: /ws/v1/mapreduce/info
javax.servlet.ServletException: Could not determine the proxy server for 
redirection
at 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:199)
at 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:141)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7259) add rolling policy to LogAggregationIndexedFileController

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185127#comment-16185127
 ] 

Hadoop QA commented on YARN-7259:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 6 new + 
8 unchanged - 0 fixed = 14 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
44s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7259 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889609/YARN-7259.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b841727fe6d7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c114da5 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17693/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17693/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17693/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6623:
-
Fix Version/s: 3.0.0-beta1

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
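A hedged sketch of what such a flag could look like in container-executor.cfg; the [docker] section layout and the docker.privileged-containers.enabled key follow the direction discussed on this JIRA, but treat the exact key names as assumptions rather than the committed syntax:

{code}
# container-executor.cfg sketch (key names are assumptions)
yarn.nodemanager.linux-container-executor.group=hadoop
[docker]
  module.enabled=true
  # Refuse privileged containers at the container-executor level,
  # regardless of what the NodeManager requests:
  docker.privileged-containers.enabled=false
{code}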



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6623:
-
Target Version/s: 2.9.0, 3.0.0  (was: 2.9.0, 2.8.2, 3.0.0)

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-28 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184892#comment-16184892
 ] 

Ray Chiang edited comment on YARN-7260 at 9/28/17 11:13 PM:


Just as an FYI, when you run the test, maven does point you at the 
surefire-reports directory (as does the dev-support/verify-xml.sh script).  You 
can look at the resulting 
org.apache.hadoop.yarn.conf.TestYarnConfigurationFields-output.txt file for the 
full output of the unit tests.


was (Author: rchiang):
Just as an FYI, when you run the test, maven does point you at the 
surefire-reports directory.  You can look at the resulting 
org.apache.hadoop.yarn.conf.TestYarnConfigurationFields-output.txt file for the 
full output of the unit tests.

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL Tests run: 1, Failures: 0, 
> Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
> org.apache.hadoop.yarn.api.records.TestURL Running 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields Tests run: 4, 
> Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields 
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)
>  Time elapsed: 0.296 sec <<< FAILURE! java.lang.AssertionError: 
> yarn-default.xml has 1 properties missing in class 
> org.apache.hadoop.yarn.conf.YarnConfiguration at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7268) testCompareXmlAgainstConfigurationClass fails due to 1 missing property from yarn-default

2017-09-28 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185080#comment-16185080
 ] 

Ray Chiang commented on YARN-7268:
--

What branch are you running in?  I can't duplicate this error, and in both 
branch-2 and trunk, I see this line in TestYarnConfigurationFields.java:

xmlPrefixToSkipCompare.add(
"yarn.log-aggregation.file-controller.TFile.class");


> testCompareXmlAgainstConfigurationClass fails due to 1 missing property from 
> yarn-default
> -
>
> Key: YARN-7268
> URL: https://issues.apache.org/jira/browse/YARN-7268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>
> {code}
> Error Message
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
> Stacktrace
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:414)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Standard Output
> File yarn-default.xml (253 properties)
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   yarn.log-aggregation.file-controller.TFile.class
> ={code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7250) Update Shared cache client api to use URLs

2017-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185066#comment-16185066
 ] 

Hudson commented on YARN-7250:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12993 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12993/])
YARN-7250. Update Shared cache client api to use URLs. (ctrezzo: rev 
c114da5e64d14b1d9e614081c4171ea0391cb1aa)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/SharedCacheClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestSharedCacheClientImpl.java


> Update Shared cache client api to use URLs
> --
>
> Key: YARN-7250
> URL: https://issues.apache.org/jira/browse/YARN-7250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7250-trunk-001.patch
>
>
> We should make the SharedCacheClient use api more consistent with other YARN 
> api methods. We can do this by doing two things:
> # Update the SharedCacheClient#use api so that it returns a URL instead of a 
> Path. Currently yarn developers have to convert the path to a URL when 
> creating a LocalResources. It would be much smoother if they could just use a 
> URL passed to them by the shared cache client.
> # Remove the portion of the client that deals with fragments as this is not 
> consistent with the rest of YARN. This functionality is bleeding in from the 
> MapReduce layer, which uses fragments to keep track of destination file 
> names. YARN's API does not use fragments. Instead, the ContainerLaunchContext 
> expects a {{Map<String, LocalResource>}} localResources, where the strings are 
> the destination file names. We should let the YARN application handle 
> destination file names however it wants instead of pushing this into the 
> shared cache api. Additionally, fragments are a clunky way to handle this. (A 
> sketch of the resulting flow follows.)
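A minimal sketch of the post-patch flow described in item 1, assuming {{SharedCacheClient#use}} now returns a {{URL}}; the wrapper method and the checksum/size/timestamp parameters are hypothetical scaffolding, not the committed API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;
import org.apache.hadoop.yarn.client.api.SharedCacheClient;

public class SharedCacheUseSketch {
  // Hypothetical helper: resolve a resource from the shared cache and build
  // a LocalResource from the returned URL, with no Path->URL conversion.
  static LocalResource fromSharedCache(Configuration conf, ApplicationId appId,
      String checksum, long size, long timestamp) {
    SharedCacheClient client = SharedCacheClient.createSharedCacheClient();
    client.init(conf);
    client.start();
    try {
      URL url = client.use(appId, checksum); // post-patch: returns a URL
      if (url == null) {
        return null; // cache miss: fall back to normal localization
      }
      return LocalResource.newInstance(url, LocalResourceType.FILE,
          LocalResourceVisibility.PUBLIC, size, timestamp);
    } catch (Exception e) {
      throw new RuntimeException("shared cache lookup failed", e);
    } finally {
      client.stop();
    }
  }
}
{code}

The destination file name then stays with the application: it is simply the key under which the returned LocalResource is put into the ContainerLaunchContext's localResources map, with no fragment parsing involved.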



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185063#comment-16185063
 ] 

Hadoop QA commented on YARN-6409:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 232 unchanged - 0 fixed = 233 total (was 232) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  1s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12868160/YARN-6409.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f8fdb0b59768 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca669f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17691/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17691/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  

[jira] [Commented] (YARN-6626) Embed REST API service into RM

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185054#comment-16185054
 ] 

Hadoop QA commented on YARN-6626:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
52s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
yarn-native-services has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 243 unchanged - 3 fixed = 243 total (was 246) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 22s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Updated] (YARN-7259) add rolling policy to LogAggregationIndexedFileController

2017-09-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7259:

Attachment: YARN-7259.2.patch

> add rolling policy to LogAggregationIndexedFileController
> -
>
> Key: YARN-7259
> URL: https://issues.apache.org/jira/browse/YARN-7259
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-7259.1.patch, YARN-7259.2.patch
>
>
> We will roll over the log files based on their size. This only happens when 
> partial log aggregation is enabled.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7268) testCompareXmlAgainstConfigurationClass fails due to 1 missing property from yarn-default

2017-09-28 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185039#comment-16185039
 ] 

Vinod Kumar Vavilapalli commented on YARN-7268:
---

If this configuration {{yarn.log-aggregation.file-controller.TFile.class}} is 
not expected to be configurable by admins, it should be hardcoded and not be a 
config at all!

> testCompareXmlAgainstConfigurationClass fails due to 1 missing property from 
> yarn-default
> -
>
> Key: YARN-7268
> URL: https://issues.apache.org/jira/browse/YARN-7268
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Yesha Vora
>
> {code}
> Error Message
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
> Stacktrace
> java.lang.AssertionError: yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:414)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Standard Output
> File yarn-default.xml (253 properties)
> yarn-default.xml has 1 properties missing in  class 
> org.apache.hadoop.yarn.conf.YarnConfiguration
>   yarn.log-aggregation.file-controller.TFile.class
> ={code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4435) Add RM Delegation Token DtFetcher Implementation for DtUtil

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185013#comment-16185013
 ] 

Hadoop QA commented on YARN-4435:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client 
generated 2 new + 150 unchanged - 0 fixed = 152 total (was 150) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
8s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-4435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12840442/YARN-4435-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 5635788e89c9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca669f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17688/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
| javadoc | 

[jira] [Commented] (YARN-7084) TestSchedulingMonitor#testRMStarts fails sporadically

2017-09-28 Thread Min Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16185012#comment-16185012
 ] 

Min Shen commented on YARN-7084:


[~jlowe]

I think your change in the patch makes more sense.
The intent of the original unit test was to verify that the monitor policy 
gets invoked after the service starts.
Verifying with a timeout is a better approach for this purpose.

> TestSchedulingMonitor#testRMStarts fails sporadically
> -
>
> Key: YARN-7084
> URL: https://issues.apache.org/jira/browse/YARN-7084
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7084.001.patch
>
>
> TestSchedulingMonitor has been failing sporadically in precommit builds.  
> Failures look like this:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.802 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor
> testRMStarts(org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor)
>   Time elapsed: 1.728 sec  <<< FAILURE!
> org.mockito.exceptions.verification.WantedButNotInvoked: 
> Wanted but not invoked:
> schedulingEditPolicy.editSchedule();
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58)
> However, there were other interactions with this mock:
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.(SchedulingMonitor.java:50)
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:62)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184996#comment-16184996
 ] 

Jian He commented on YARN-7202:
---

oh, ok, I didn't know we could still re-open it. 

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7268) testCompareXmlAgainstConfigurationClass fails due to 1 missing property from yarn-default

2017-09-28 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-7268:


 Summary: testCompareXmlAgainstConfigurationClass fails due to 1 
missing property from yarn-default
 Key: YARN-7268
 URL: https://issues.apache.org/jira/browse/YARN-7268
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Yesha Vora


{code}
Error Message

yarn-default.xml has 1 properties missing in  class 
org.apache.hadoop.yarn.conf.YarnConfiguration
Stacktrace

java.lang.AssertionError: yarn-default.xml has 1 properties missing in  class 
org.apache.hadoop.yarn.conf.YarnConfiguration
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:414)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
Standard Output

File yarn-default.xml (253 properties)

yarn-default.xml has 1 properties missing in  class 
org.apache.hadoop.yarn.conf.YarnConfiguration

  yarn.log-aggregation.file-controller.TFile.class

={code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6385) Fix warnings caused by TestFileSystemApplicationHistoryStore

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184974#comment-16184974
 ] 

Hadoop QA commented on YARN-6385:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice:
 The patch generated 0 new + 5 unchanged - 2 fixed = 5 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-6385 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860278/YARN-6385.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9e45751943cf 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca669f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17686/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17686/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-5995) Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition performance

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184924#comment-16184924
 ] 

Hadoop QA commented on YARN-5995:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-5995 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12861538/YARN-5995.0004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17689/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance
> ---
>
> Key: YARN-5995
> URL: https://issues.apache.org/jira/browse/YARN-5995
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: metrics, resourcemanager
>Affects Versions: 2.7.1
> Environment: CentOS7.2 Hadoop-2.7.1 
>Reporter: zhangyubiao
>Assignee: zhangyubiao
>  Labels: patch
> Attachments: YARN-5995.0001.patch, YARN-5995.0002.patch, 
> YARN-5995.0003.patch, YARN-5995.0004.patch, YARN-5995.patch
>
>
> Add RMStateStore metrics to monitor all RMStateStoreEventTypeTransition 
> performance



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184966#comment-16184966
 ] 

Hadoop QA commented on YARN-7226:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 130 unchanged - 3 fixed = 130 total (was 133) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | YARN-7226 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889586/YARN-7226.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f6d2adafebb7 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ca669f9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17685/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17685/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Whitelisted variables do not support delayed variable expansion

[jira] [Commented] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2017-09-28 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184953#comment-16184953
 ] 

Subru Krishnan commented on YARN-6871:
--

[~sunilg], we don't necessarily need it in beta, but it will be good to have 
in GA. Thanks.

> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
> Attachments: YARN-6871.002.patch, YARN-6871.003.patch, 
> YARN-6871.004.patch, YARN-6871.005.patch, YARN-6871.006.patch, 
> YARN-6871.007.patch, YARN-6871.008.patch, YARN-6871.009.patch, 
> YARN-6871-branch-2.v1.patch, YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> YARN RM will return the new result faster and use fewer compute cycles to 
> create the report, improving both the RM's and the client's performance.
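For context, a sketch of how a deSelect param is used against the RM REST API; {{deSelects=resourceRequests}} is the value that predates this JIRA, and this patch adds further values whose exact names are defined by the patch itself (not guessed here):

{code}
# Strip the heavyweight ResourceRequest data from the report
# (resourceRequests predates this JIRA; the patch adds more values).
GET http://<rm-http-address>:8088/ws/v1/cluster/apps?deSelects=resourceRequests
{code}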



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3232) Some application states are not necessarily exposed to users

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184950#comment-16184950
 ] 

Hadoop QA commented on YARN-3232:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-3232 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3232 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823005/YARN-3232.v2.01.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17692/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Some application states are not necessarily exposed to users
> 
>
> Key: YARN-3232
> URL: https://issues.apache.org/jira/browse/YARN-3232
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Jian He
>Assignee: Varun Saxena
> Attachments: YARN-3232.002.patch, YARN-3232.01.patch, 
> YARN-3232.02.patch, YARN-3232.v2.01.patch
>
>
> application NEW_SAVING and SUBMITTED states are not necessarily exposed to 
> users as they mostly internal to the system, transient and not user-facing. 
> We may deprecate these two states and remove them from the web UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7267) Container should also return container system status to Application Master

2017-09-28 Thread Yan (JIRA)
Yan created YARN-7267:
-

 Summary: Container should also return container system status to 
Application Master
 Key: YARN-7267
 URL: https://issues.apache.org/jira/browse/YARN-7267
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: api
Reporter: Yan
Priority: Minor


Currently the container exit status is in the form of ContainerExitStatus, 
where only 9 failure status codes are available. However, there could be many 
system error codes resulting from launching a container shell command, or a 
set of such commands. It's desirable that this system code be propagated to 
the AM, in addition to the 9 status codes that appear to result from YARN 
actions.

One use case is a necessary application resource being absent on a node when 
launching a container command. A "file not found" status would provide enough 
info for the AM to take corrective action before a retry attempt. 
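To illustrate the gap, a hedged Java sketch of what an AM can distinguish today; the ContainerExitStatus constants live in org.apache.hadoop.yarn.api.records, while the two handler methods are hypothetical application callbacks:

{code}
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

// Sketch: AM-side handling of completed containers.
public class ExitStatusSketch {
  void onCompleted(ContainerStatus status) {
    switch (status.getExitStatus()) {
      case ContainerExitStatus.DISKS_FAILED:  // YARN-originated, well defined
      case ContainerExitStatus.ABORTED:
      case ContainerExitStatus.PREEMPTED:
        retryElsewhere(status);
        break;
      default:
        // Raw process exit codes (e.g. a shell's "file not found") arrive
        // here undifferentiated; that is the gap this JIRA describes.
        handleOpaqueFailure(status);
    }
  }
  void retryElsewhere(ContainerStatus s) { /* application-specific */ }
  void handleOpaqueFailure(ContainerStatus s) { /* application-specific */ }
}
{code}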



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5683) Support specifying storage type for per-application local dirs

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184925#comment-16184925
 ] 

Hadoop QA commented on YARN-5683:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-5683 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5683 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832871/YARN-5683-3.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17690/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support specifying storage type for per-application local dirs
> --
>
> Key: YARN-5683
> URL: https://issues.apache.org/jira/browse/YARN-5683
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>  Labels: oct16-hard
> Attachments: flow_diagram_for_MapReduce-2.png, 
> flow_diagram_for_MapReduce.png, YARN-5683-1.patch, YARN-5683-2.patch, 
> YARN-5683-3.patch
>
>
> h3.  Introduction
> * Applications of various frameworks (Flink, Spark, MapReduce, etc.) that use 
> local storage (for checkpoints, shuffle, etc.) might require high IO 
> performance. It's useful to allocate local directories on high-performance 
> storage media for these applications on heterogeneous clusters.
> * YARN does not distinguish different storage types and hence applications 
> cannot selectively use storage media with different performance 
> characteristics. Adding awareness of storage media can allow YARN to make 
> better decisions about the placement of local directories.
> h3.  Approach
> * NodeManager will distinguish storage types for local directories.
> ** The yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs configuration 
> should allow the cluster administrator to optionally specify the storage type 
> for each local directory. Example: 
> [SSD]/disk1/nm-local-dir,/disk2/nm-local-dir,/disk3/nm-local-dir (equivalent to 
> [SSD]/disk1/nm-local-dir,[DISK]/disk2/nm-local-dir,[DISK]/disk3/nm-local-dir). 
> A parsing sketch appears below.
> ** StorageType defines the DISK/SSD storage types and takes DISK as the default 
> storage type.
> ** StorageLocation separates the storage type from the directory path; it is 
> used by LocalDirAllocator to be aware of the storage types of local dirs, and 
> the default storage type is DISK.
> ** The getLocalPathForWrite method of LocalDirAllocator will prefer a local 
> directory of the specified storage type, and will fall back to ignoring the 
> storage type if the requirement cannot be satisfied.
> ** Support for container-related local/log directories in ContainerLaunch. 
> All application frameworks can set the environment variables 
> (LOCAL_STORAGE_TYPE and LOG_STORAGE_TYPE) to specify the desired storage 
> type of local/log directories, and can choose not to launch the container on 
> fallback via these environment variables (ENSURE_LOCAL_STORAGE_TYPE and 
> ENSURE_LOG_STORAGE_TYPE).
> * Allow specifying the storage type for various frameworks (take MapReduce as 
> an example).
> ** Add new configurations that allow the application administrator to 
> optionally specify the storage type of local/log directories and the fallback 
> strategy (MapReduce configurations: mapreduce.job.local-storage-type, 
> mapreduce.job.log-storage-type, mapreduce.job.ensure-local-storage-type and 
> mapreduce.job.ensure-log-storage-type).
> ** Support for container work directories. Set the environment variables 
> (LOCAL_STORAGE_TYPE and LOG_STORAGE_TYPE) according to the configurations 
> above for ContainerLaunchContext and ApplicationSubmissionContext. (MapReduce 
> should update YARNRunner and TaskAttemptImpl.)
> ** Add a storage type prefix to the request path to support other local 
> directories of frameworks (such as shuffle directories for MapReduce). 
> (MapReduce should update YarnOutputFiles, MROutputFiles and YarnChild to 
> support output/work directories.)
> ** Flow diagram for MapReduce framework
> !flow_diagram_for_MapReduce-2.png!
> h3.  Further Discussion
> * Scheduling: the storage type requirement for local/log directories may not 
> be satisfiable on some nodes of a heterogeneous cluster. To achieve a global 
> optimum, the scheduler should be aware of and manage disk resources. 
> ** Approach-1: Based on node attributes (YARN-3409), the scheduler can allocate 
> containers 

[jira] [Updated] (YARN-6426) Compress ZK YARN keys to scale up (especially AppStateData

2017-09-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6426:
--
Target Version/s: 2.9.0, 2.8.3, 3.0.0  (was: 2.9.0, 3.0.0-beta1, 2.8.3)

> Compress ZK YARN keys to scale up (especially AppStateData
> --
>
> Key: YARN-6426
> URL: https://issues.apache.org/jira/browse/YARN-6426
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Roni Burd
>Assignee: Roni Burd
>  Labels: patch
> Attachments: zkcompression.patch
>
>
> ZK today stores the protobuf files uncompressed. This is usually not an issue, 
> but if a customer job has thousands of files, AppStateData will store the 
> user context as a string with multiple URLs, and it is easy to reach 1MB or 
> more. 
> This can put unnecessary strain on ZK and make the process slow. 
> The proposal is to simply compress the protobufs before sending them to ZK.
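
A minimal sketch of the proposed write path, assuming plain java.util.zip GZIP 
framing around the serialized bytes (the actual patch may use a different 
codec):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch only: compress serialized state (e.g. AppStateData protobuf bytes)
// before handing it to the ZK store, shrinking large znodes.
public class ZkStateCompressionSketch {

  static byte[] compress(byte[] serialized) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
      gzip.write(serialized);
    }
    return bos.toByteArray(); // store this in ZK instead of the raw bytes
  }

  static byte[] decompress(byte[] stored) throws IOException {
    try (GZIPInputStream gzip =
        new GZIPInputStream(new ByteArrayInputStream(stored))) {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      byte[] buf = new byte[4096];
      int n;
      while ((n = gzip.read(buf)) != -1) {
        bos.write(buf, 0, n);
      }
      return bos.toByteArray();
    }
  }
}
{code}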



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7084) TestSchedulingMonitor#testRMStarts fails sporadically

2017-09-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-7084:
--
Target Version/s: 2.9.0, 2.8.3, 2.7.5, 3.0.0  (was: 2.9.0, 3.0.0-beta1, 
2.8.3, 2.7.5)

> TestSchedulingMonitor#testRMStarts fails sporadically
> -
>
> Key: YARN-7084
> URL: https://issues.apache.org/jira/browse/YARN-7084
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7084.001.patch
>
>
> TestSchedulingMonitor has been failing sporadically in precommit builds.  
> Failures look like this:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.802 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor
> testRMStarts(org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor)
>   Time elapsed: 1.728 sec  <<< FAILURE!
> org.mockito.exceptions.verification.WantedButNotInvoked: 
> Wanted but not invoked:
> schedulingEditPolicy.editSchedule();
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58)
> However, there were other interactions with this mock:
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.(SchedulingMonitor.java:50)
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:62)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58)
> {noformat}
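
One common remedy for this kind of race, sketched below under the assumption 
that the failure is simply the monitor thread not having run yet (this is not 
necessarily what the attached patch does), is Mockito's timeout() verification 
mode, which polls instead of asserting immediately:

{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;

// Sketch only: wait up to 10s for the asynchronous invocation instead of
// failing immediately when the background thread has not run yet.
public class FlakyVerifySketch {
  interface SchedulingEditPolicy { void editSchedule(); }

  public static void main(String[] args) {
    SchedulingEditPolicy policy = mock(SchedulingEditPolicy.class);
    new Thread(policy::editSchedule).start(); // stands in for the monitor thread
    verify(policy, timeout(10000)).editSchedule();
  }
}
{code}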



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-7226:
--
Target Version/s: 2.9.0, 2.8.3, 3.0.0  (was: 2.9.0, 3.0.0-beta1, 2.8.3)

> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.
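
For context, delayed expansion means rewriting the {{PWD}}-style markers into 
shell syntax at launch time so the container's shell expands them. A minimal 
sketch under that assumption (the real NM must also handle Windows %VAR% 
syntax, which this omits):

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: turn "{{PWD}}/my-private-hadoop/" into "$PWD/my-private-hadoop/"
// so the shell expands it inside the container's launch script.
public class DelayedExpansionSketch {
  private static final Pattern MARKER = Pattern.compile("\\{\\{(\\w+)\\}\\}");

  static String expandForUnixShell(String value) {
    Matcher m = MARKER.matcher(value);
    return m.replaceAll("\\$$1"); // {{PWD}} -> $PWD
  }

  public static void main(String[] args) {
    // Prints $PWD/my-private-hadoop/
    System.out.println(expandForUnixShell("{{PWD}}/my-private-hadoop/"));
  }
}
{code}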



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184914#comment-16184914
 ] 

Eric Yang commented on YARN-7202:
-

[~jianhe] I am able to re-open HADOOP-9122.  I can move the Powermock-related 
changes to that JIRA for the community to review.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6296) ReservationId.compareTo ignores id when clustertimestamp is the same

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184903#comment-16184903
 ] 

Hadoop QA commented on YARN-6296:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6296 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6296 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12876614/YARN-6296.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17687/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ReservationId.compareTo ignores id when clustertimestamp is the same
> 
>
> Key: YARN-6296
> URL: https://issues.apache.org/jira/browse/YARN-6296
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>  Labels: newbie
> Attachments: YARN-6296.001.patch
>
>
> Per YARN-6252
> {code:java}
>   public int compareTo(ReservationId other) {
> if (this.getClusterTimestamp() - other.getClusterTimestamp() == 0) {
>   return getId() > getId() ? 1 : getId() < getId() ? -1 : 0;
> } else {
> }
>   }
> {code}
> compares id with itself. It should be
> {code}
> return this.getId() > other.getId() ? 1 : (this.getId() < other.getId() ? -1 
> : 0);
> {code}
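
For completeness, a self-contained model of the corrected method (the 
timestamp branch is an assumption here, since the snippet above elides the 
else case):

{code:java}
// Sketch only: models the corrected compareTo; not the actual patch.
public class ReservationIdSketch implements Comparable<ReservationIdSketch> {
  private final long clusterTimestamp;
  private final long id;

  ReservationIdSketch(long clusterTimestamp, long id) {
    this.clusterTimestamp = clusterTimestamp;
    this.id = id;
  }

  long getClusterTimestamp() { return clusterTimestamp; }
  long getId() { return id; }

  @Override
  public int compareTo(ReservationIdSketch other) {
    if (this.getClusterTimestamp() == other.getClusterTimestamp()) {
      // Compare this id against the *other* id, not against itself.
      return this.getId() > other.getId() ? 1
          : (this.getId() < other.getId() ? -1 : 0);
    }
    // Assumed branch: order by cluster timestamp when they differ.
    return this.getClusterTimestamp() > other.getClusterTimestamp() ? 1 : -1;
  }
}
{code}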



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7260) yarn.router.pipeline.cache-max-size is missing in yarn-default.xml

2017-09-28 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184892#comment-16184892
 ] 

Ray Chiang commented on YARN-7260:
--

Just as an FYI, when you run the test, maven does point you at the 
surefire-reports directory.  You can look at the resulting 
org.apache.hadoop.yarn.conf.TestYarnConfigurationFields-output.txt file for the 
full output of the unit tests.

> yarn.router.pipeline.cache-max-size is missing in yarn-default.xml
> --
>
> Key: YARN-7260
> URL: https://issues.apache.org/jira/browse/YARN-7260
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Rohith Sharma K S
>Assignee: Jason Lowe
> Attachments: YARN-7260-branch-2.001.patch
>
>
> In branch-2 TestYarnConfigurationFields fails
> {code}
> Running org.apache.hadoop.yarn.api.records.TestURL Tests run: 1, Failures: 0, 
> Errors: 0, Skipped: 0, Time elapsed: 0.278 sec - in 
> org.apache.hadoop.yarn.api.records.TestURL Running 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields Tests run: 4, 
> Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec <<< FAILURE! - in 
> org.apache.hadoop.yarn.conf.TestYarnConfigurationFields 
> testCompareXmlAgainstConfigurationClass(org.apache.hadoop.yarn.conf.TestYarnConfigurationFields)
>  Time elapsed: 0.296 sec <<< FAILURE! java.lang.AssertionError: 
> yarn-default.xml has 1 properties missing in class 
> org.apache.hadoop.yarn.conf.YarnConfiguration at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.hadoop.conf.TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass(TestConfigurationFieldsBase.java:588)
>  
> {code}
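
The usual fix for this failure is to add the missing entry to yarn-default.xml; 
a sketch of the shape of such an entry follows (the value and description are 
placeholders, not the committed defaults):

{code:xml}
<!-- Sketch only: placeholder value and description. -->
<property>
  <name>yarn.router.pipeline.cache-max-size</name>
  <value>25</value>
  <description>
    Placeholder: the maximum number of Router interceptor pipelines kept in
    the cache.
  </description>
</property>
{code}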



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184881#comment-16184881
 ] 

Jian He commented on YARN-7202:
---

Once the issue is closed, it cannot be reopened. If the issue is only resolved, 
it can be re-opened.
Maybe you can make a comment there, and open a new JIRA in 'hadoop-common' to 
express the intent?

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6962) Add support for updateContainers when allocating using FederationInterceptor

2017-09-28 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184874#comment-16184874
 ] 

Botong Huang commented on YARN-6962:


Thanks [~subru]!

> Add support for updateContainers when allocating using FederationInterceptor
> 
>
> Key: YARN-6962
> URL: https://issues.apache.org/jira/browse/YARN-6962
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6962.v1.patch, YARN-6962.v2.patch
>
>
> Container update is introduced in YARN-5221. Federation Interceptor needs to 
> support it when splitting (merging) the allocate request (response).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-28 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184870#comment-16184870
 ] 

Eric Yang commented on YARN-7202:
-

[~jianhe] Sure, I will add javadoc to indicate the purpose of the test case.  I 
tried to reopen HADOOP-7537, but JIRA won't let me.  Can you reopen the issue?  
People didn't find a working combination of Mockito, Powermock and commons-io 
in the past, but the outstanding concerns have been addressed in open source 
over the past couple of years.  Hence, I would like to propose moving forward 
with the tested version of PowerMockito.  As we can see from the tests, 
Powermock did not introduce unwanted behavior to Hadoop tests.

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7226) Whitelisted variables do not support delayed variable expansion

2017-09-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7226:
-
Attachment: YARN-7226.005.patch

Thanks for the review, Sidharta!

I added the interface to LinuxContainerRuntime since that was the lowest place 
that actually needed it, but I went ahead and moved it to ContainerRuntime in 
case other non-linux runtimes would need this.

Testing this was a bit tricky since we can't modify the system environment 
variables, so I needed to add a hook so tests can modify the list of NM 
environment variables.  I added a test to verify that, when using the docker 
runtime, NM whitelist variables that are not overridden by the container are 
not added to the environment.


> Whitelisted variables do not support delayed variable expansion
> ---
>
> Key: YARN-7226
> URL: https://issues.apache.org/jira/browse/YARN-7226
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha4
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-7226.001.patch, YARN-7226.002.patch, 
> YARN-7226.003.patch, YARN-7226.004.patch, YARN-7226.005.patch
>
>
> The nodemanager supports a configurable list of environment variables, via 
> yarn.nodemanager.env-whitelist, that will be propagated to the container's 
> environment unless those variables were specified in the container launch 
> context.  Unfortunately the handling of these whitelisted variables prevents 
> using delayed variable expansion.  For example, if a user shipped their own 
> version of hadoop with their job via the distributed cache and specified:
> {noformat}
> HADOOP_COMMON_HOME={{PWD}}/my-private-hadoop/
> {noformat}
>  as part of their job, the variable will be set as the *literal* string:
> {noformat}
> $PWD/my-private-hadoop/
> {noformat}
> rather than having $PWD expand to the container's current directory as it 
> does for any other, non-whitelisted variable being set to the same value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7009) TestNMClient.testNMClientNoCleanupOnStop is flaky by design

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184850#comment-16184850
 ] 

Hadoop QA commented on YARN-7009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} root: The patch generated 0 new + 278 unchanged - 2 
fixed = 278 total (was 280) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  4s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
19s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
10s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 53s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.TestReservationSystemInvariants |
|   | hadoop.yarn.sls.TestSLSRunner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | 

[jira] [Updated] (YARN-6626) Embed REST API service into RM

2017-09-28 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-6626:

Attachment: YARN-6626.yarn-native-services.011.patch

Rebased the patch to the current HEAD of the yarn-native-services branch.

> Embed REST API service into RM
> --
>
> Key: YARN-6626
> URL: https://issues.apache.org/jira/browse/YARN-6626
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-6626.yarn-native-services.001.patch, 
> YARN-6626.yarn-native-services.002.patch, 
> YARN-6626.yarn-native-services.003.patch, 
> YARN-6626.yarn-native-services.004.patch, 
> YARN-6626.yarn-native-services.005.patch, 
> YARN-6626.yarn-native-services.006.patch, 
> YARN-6626.yarn-native-services.007.patch, 
> YARN-6626.yarn-native-services.008.patch, 
> YARN-6626.yarn-native-services.009.patch, 
> YARN-6626.yarn-native-services.010.patch, 
> YARN-6626.yarn-native-services.011.patch
>
>
> As of now the deployment model of the Native Services REST API service is 
> standalone. There are several cross-cutting solutions that can be inherited 
> for free (kerberos, HA, ACLs, trusted proxy support, etc.) by the REST API 
> service if it is embedded into the RM process. In fact we can expose the REST 
> API via the same port as RM UI (8088 default). The URI path 
> /services/v1/applications will distinguish the REST API calls from other RM 
> APIs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6962) Add support for updateContainers when allocating using FederationInterceptor

2017-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184810#comment-16184810
 ] 

Hudson commented on YARN-6962:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12992 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12992/])
YARN-6962. Add support for updateContainers when allocating using (subru: rev 
ca669f9f8bc7abe5b7d4648c589aa1756bd336d1)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java


> Add support for updateContainers when allocating using FederationInterceptor
> 
>
> Key: YARN-6962
> URL: https://issues.apache.org/jira/browse/YARN-6962
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6962.v1.patch, YARN-6962.v2.patch
>
>
> Container update is introduced in YARN-5221. Federation Interceptor needs to 
> support it when splitting (merging) the allocate request (response).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-09-28 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184806#comment-16184806
 ] 

Varun Saxena commented on YARN-7190:


I haven't cleaned up findbugs-annotations and another unnecessary jar above. 
The result above was merely to show a possible segregation scheme. These jars 
are brought in as HBase dependencies.

I think breaking it up into server and client jars should be done across 
modules, not just YARN. I found it weird to create a folder like 
{{share/hadoop/yarn/server}} and place only the YARN timelineservice-specific 
jars there and not the RM/NM ones. And if we place the RM/NM jars there, it 
would break compatibility. So I went with the scheme above.

One suggestion from Vrushali during our weekly call was that we can place the 
timelineservice jar in a share/hadoop/yarn/timelineservice folder and the 
timelineservice HBase implementation jars in a 
share/hadoop/yarn/timelineservice-hbase folder. However, the latter may only 
make sense if we can exclude these from the NM/RM/reader classpath based on 
the timeline writer/reader implementation. Not sure if this would be feasible.

The first suggestion sounds good to me. So now, I plan to create 
{{share/hadoop/yarn/timelineservice}} and 
{{share/hadoop/yarn/timelineservice/lib}} folders to place the timeline 
service specific jars and their dependencies.
The old jars would remain as they are in their respective locations.

Thoughts?

> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in Hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is to be used in 2.x, the HBase-related jars should come into 
> only the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6962) Add support for updateContainers when allocating using FederationInterceptor

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6962:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Add support for updateContainers when allocating using FederationInterceptor
> 
>
> Key: YARN-6962
> URL: https://issues.apache.org/jira/browse/YARN-6962
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6962.v1.patch, YARN-6962.v2.patch
>
>
> Container update is introduced in YARN-5221. Federation Interceptor needs to 
> support it when splitting (merging) the allocate request (response).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7265) Hadoop Server Log Correlation

2017-09-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184738#comment-16184738
 ] 

Jason Lowe commented on YARN-7265:
--

Is this really appropriate to be built directly into Hadoop?  A sophisticated 
log analytics and log correlation system seems like something that would be 
better built on top of Hadoop as a separate project.  Hive, Pig, Oozie, HBase, 
Ambari, etc. are all important parts of the Hadoop stack, but there are good 
reasons why they are not built into Hadoop.  This seems like another thing that 
would be very useful as part of the Hadoop stack but not necessary to build in 
directly.


> Hadoop Server Log Correlation  
> ---
>
> Key: YARN-7265
> URL: https://issues.apache.org/jira/browse/YARN-7265
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: log-aggregation
>Reporter: Tanping Wang
>
> Hadoop has many server logs: yarn task logs, node manager logs, HDFS logs, 
> etc.  There are also many different ways to expose the logs, build 
> relationships horizontally to correlate the logs, or search the logs by 
> keyword. There is a need for a default and yet convenient logging analytics 
> mechanism in Hadoop itself that at least covers all the server logs of 
> Hadoop.  This log analytics system can correlate the Hadoop server logs by 
> grouping them by various dimensions, including application ID, task ID, job 
> ID or node ID.  The raw logs with correlation can be easily accessed by the 
> application developer or cluster administrator via a web page for managing 
> and debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6059) Update paused container state in the NM state store

2017-09-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6059:
-
Fix Version/s: (was: 3.1.0)
   3.0.0

> Update paused container state in the NM state store
> ---
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-5216-YARN-6059.001.patch, 
> YARN-6059-YARN-5972.001.patch, YARN-6059-YARN-5972.002.patch, 
> YARN-6059-YARN-5972.003.patch, YARN-6059-YARN-5972.004.patch, 
> YARN-6059-YARN-5972.005.patch, YARN-6059-YARN-5972.006.patch, 
> YARN-6059-YARN-5972.007.patch, YARN-6059-YARN-5972.008.patch, 
> YARN-6059-YARN-5972.009.patch, YARN-6059-YARN-5972.010.patch, 
> YARN-6059-YARN-5972.011.patch, YARN-6059-YARN-5972.012.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7265) Hadoop Server Log Correlation

2017-09-28 Thread Tanping Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanping Wang updated YARN-7265:
---
Description: Hadoop has many server logs: yarn task logs, node manager 
logs, HDFS logs, etc.  There are also many different ways to expose the logs, 
build relationships horizontally to correlate the logs, or search the logs by 
keyword. There is a need for a default and yet convenient logging analytics 
mechanism in Hadoop itself that at least covers all the server logs of Hadoop.  
This log analytics system can correlate the Hadoop server logs by grouping 
them by various dimensions, including application ID, task ID, job ID or node 
ID.  The raw logs with correlation can be easily accessed by the application 
developer or cluster administrator via a web page for managing and debugging.  
(was: Hadoop has many server logs: yarn task logs, node manager logs, HDFS 
logs, etc.  There are also many different ways to expose the logs, build 
relationships horizontally to correlate the logs, or search the logs by 
keyword. There is a need for a default and yet convenient logging analytics 
mechanism in Hadoop itself that at least covers all the server logs of Hadoop.  
This log analytics system can correlate the Hadoop server logs by grouping 
them by various dimensions, including task number, job number or node number.  
The raw logs with correlation can be easily accessed by the application 
developer or cluster administrator via a web page for managing and debugging.)

> Hadoop Server Log Correlation  
> ---
>
> Key: YARN-7265
> URL: https://issues.apache.org/jira/browse/YARN-7265
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: log-aggregation
>Reporter: Tanping Wang
>
> Hadoop has many server logs: yarn task logs, node manager logs, HDFS logs, 
> etc.  There are also many different ways to expose the logs, build 
> relationships horizontally to correlate the logs, or search the logs by 
> keyword. There is a need for a default and yet convenient logging analytics 
> mechanism in Hadoop itself that at least covers all the server logs of 
> Hadoop.  This log analytics system can correlate the Hadoop server logs by 
> grouping them by various dimensions, including application ID, task ID, job 
> ID or node ID.  The raw logs with correlation can be easily accessed by the 
> application developer or cluster administrator via a web page for managing 
> and debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7256) Giving Yarn Application the Option to Black Out Certain Nodes On the Fly

2017-09-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184723#comment-16184723
 ] 

Jason Lowe commented on YARN-7256:
--

Closing this as a duplicate of YARN-750 which allows applications to blacklist 
hosts as part of the allocation request.  Please reopen if this is actually 
requesting something different than what is implemented there.
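
For reference, a minimal sketch of the YARN-750 facility mentioned above, 
using AMRMClient's blacklist update from application code (the hostname is 
made up):

{code:java}
import java.util.Arrays;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

// Sketch only: after repeated task failures on a node, an AM can blacklist
// it so the scheduler stops placing this app's containers there.
public class BlacklistSketch {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> amrm = AMRMClient.createAMRMClient();
    amrm.init(new Configuration());
    amrm.start();
    // Normally done after registerApplicationMaster, once the failure count
    // for "bad-node.example.com" (a made-up host) crosses a threshold.
    amrm.updateBlacklist(
        Arrays.asList("bad-node.example.com"),  // additions
        Collections.<String>emptyList());       // removals
    amrm.stop();
  }
}
{code}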

> Giving Yarn Application the Option to Black Out Certain Nodes On the Fly
> 
>
> Key: YARN-7256
> URL: https://issues.apache.org/jira/browse/YARN-7256
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Tanping Wang
>Priority: Minor
>
> The application should have the option to block out certain "bad" nodes at 
> run time.  A use case is that if the tasks of an application fail on a 
> particular node multiple times, the application should be able to add the 
> "bad" node to the "blacklist" for this application.  As a result, no other 
> tasks of this application will be launched on the same node at run time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7256) Giving Yarn Application the Option to Black Out Certain Nodes On the Fly

2017-09-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved YARN-7256.
--
Resolution: Duplicate

> Giving Yarn Application the Option to Black Out Certain Nodes On the Fly
> 
>
> Key: YARN-7256
> URL: https://issues.apache.org/jira/browse/YARN-7256
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: applications
>Reporter: Tanping Wang
>Priority: Minor
>
> The application should have the option to block out certain "bad" nodes at 
> run time.  A use case is that if the tasks of an application fail on a 
> particular node multiple times, the application should be able to add the 
> "bad" node to the "blacklist" for this application.  As a result, no other 
> tasks of this application will be launched on the same node at run time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2017-09-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5216:
-
Fix Version/s: (was: 3.1.0)
   3.0.0

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-scheduling
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
>  Labels: oct16-hard
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN5216.001.patch, yarn5216.002.patch, 
> YARN-5216-YARN-5972.001.patch, YARN-5216-YARN-5972.002.patch, 
> YARN-5216-YARN-5972.003.patch, YARN-5216-YARN-5972.004.patch, 
> YARN-5216-YARN-5972.005.patch, YARN-5216-YARN-5972.006.patch, 
> YARN-5216-YARN-5972.007.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7240) Add more states and transitions to stabilize the NM Container state machine

2017-09-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7240:
-
Fix Version/s: (was: 3.1.0)
   3.0.0

> Add more states and transitions to stabilize the NM Container state machine
> ---
>
> Key: YARN-7240
> URL: https://issues.apache.org/jira/browse/YARN-7240
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7240.001.patch, YARN-7240.002.patch, 
> YARN-7240.002.patch
>
>
> There seem to be a few intermediate states that can be added to improve the 
> stability of the NM container state machine.
> For. eg:
> * The REINITIALIZING should probably be split into REINITIALIZING and 
> REINITIALIZING_AWAITING_KILL. 
> * Container updates are currently handled in the ContainerScheduler, but it 
> would probably be better to have it plumbed through the container state 
> machine as a new state, say UPDATING and a new container event.
> The plan is to add some extra tests too to try and test every transition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5292) NM Container lifecycle and state transitions to support for PAUSED container state.

2017-09-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5292:
-
Fix Version/s: (was: 3.1.0)
   3.0.0

> NM Container lifecycle and state transitions to support for PAUSED container 
> state.
> ---
>
> Key: YARN-5292
> URL: https://issues.apache.org/jira/browse/YARN-5292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-5292.001.patch, YARN-5292.002.patch, 
> YARN-5292.003.patch, YARN-5292.004.patch, YARN-5292.005.patch, 
> YARN-5292.006.patch
>
>
> This JIRA addresses the NM Container and state machine and lifecycle changes 
> needed  to support pausing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184711#comment-16184711
 ] 

Hudson commented on YARN-7248:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12991 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12991/])
YARN-7248. NM returns new SCHEDULED container status to older clients. (jlowe: 
rev 85d81ae58ec4361a944c84753a900460a0888b9b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerState.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerSubState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerStatusPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestEventFlow.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerStatus.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerShutdown.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java


> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.
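
The general compatibility technique, sketched below as a self-contained model 
(down-mapping states the old enum lacks; this is the idea, not the patch 
itself):

{code:java}
// Sketch only: when replying to a client that predates SCHEDULED, report a
// value from the old enum instead of one it cannot handle.
public class StateDownMapSketch {
  enum OldState { NEW, RUNNING, COMPLETE }
  enum NewState { NEW, SCHEDULED, RUNNING, COMPLETE }

  static OldState forOldClient(NewState s) {
    switch (s) {
      case SCHEDULED:
        // Localizing/queued containers looked like NEW before YARN-4597.
        return OldState.NEW;
      case RUNNING:
        return OldState.RUNNING;
      case COMPLETE:
        return OldState.COMPLETE;
      default:
        return OldState.NEW;
    }
  }

  public static void main(String[] args) {
    System.out.println(forOldClient(NewState.SCHEDULED)); // NEW
  }
}
{code}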



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6457) Allow custom SSL configuration to be supplied in WebApps

2017-09-28 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184700#comment-16184700
 ] 

Robert Kanter commented on YARN-6457:
-

[~vrozov], [~sanjaypujare] we were doing some testing and found that this 
change breaks a setup with HDFS HA + SSL + Hadoop Credstore.  In that setup, 
the RM will fail to startup with a stack trace like this:
{noformat}
Error starting ResourceManager
java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:444)
at 
org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:132)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:341)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:285)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:163)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3258)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3307)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3275)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:467)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.initFileSystem(JavaKeyStoreProvider.java:89)
at 
org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:49)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:41)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:100)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:73)
at 
org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2157)
at 
org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2095)
at 
org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:431)
at 
org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:409)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:312)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:401)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1119)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1229)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1425)
Caused by: java.net.UnknownHostException: ns1
... 28 more
{noformat}
I did some digging, and the problem is that with HDFS HA, we have a logical 
name (i.e. "ns1") instead of an actual hostname.  So when the Credstore (i.e. 
{{Configuration.getPassword}}) tries to read the password, it needs to resolve 
the logical name into a hostname; however, that information is now missing 
because of this JIRA.  If I change it so that we do {{new Configuration()}} 
instead of {{new Configuration(false)}}, we'll load hdfs-site (and others), 
and that fixes the problem.  

Why do we need to set {{loadDefaults}} to {{false}}?
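
For context, a minimal sketch of the behavioral difference in question (which 
resources get loaded beyond core-site depends on what other classes have 
registered as default resources):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: contrasts the two constructors discussed above.
public class ConfLoadDefaultsSketch {
  public static void main(String[] args) {
    // Loads the registered default resources (core-default.xml/core-site.xml,
    // plus e.g. hdfs-site.xml once HDFS classes have registered it), so an HA
    // logical name like "ns1" can be resolved.
    Configuration full = new Configuration();

    // Skips all default resources: only values set programmatically or via
    // addResource() are visible, so "ns1" stays an unknown host.
    Configuration bare = new Configuration(false);

    System.out.println(full.size() > bare.size()); // typically true
  }
}
{code}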

> Allow custom SSL configuration to be supplied in WebApps
> 
>
> Key: YARN-6457
> URL: https://issues.apache.org/jira/browse/YARN-6457
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp, yarn
>Reporter: Sanjay M Pujare
>Assignee: Sanjay M Pujare
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6457.00.patch, YARN-6457.01.patch
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> Currently a custom SSL store cannot be passed on to WebApps which forces the 
> embedded web-server to use the default keystore set up in ssl-server.xml for 
> the whole Hadoop cluster. There are cases where the Hadoop app needs to use 
> its own/custom keystore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7096) Federation: routing ClientRM invocations transparently to multiple RMs (part 2 - getApps)

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7096:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Federation: routing ClientRM invocations transparently to multiple RMs (part 
> 2 - getApps)
> -
>
> Key: YARN-7096
> URL: https://issues.apache.org/jira/browse/YARN-7096
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 2.9.0, 3.0.0-beta1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7097) Federation: routing ClientRM invocations transparently to multiple RMs (part 5 - getNode/getNodes/getMetrics)

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7097:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Federation: routing ClientRM invocations transparently to multiple RMs (part 
> 5 - getNode/getNodes/getMetrics)
> -
>
> Key: YARN-7097
> URL: https://issues.apache.org/jira/browse/YARN-7097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 2.9.0, 3.0.0-beta1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184686#comment-16184686
 ] 

Jason Lowe commented on YARN-7248:
--

Thanks for updating the patch!

+1 lgtm.  Committing this.


> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc.  However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7096) Federation: routing ClientRM invocations transparently to multiple RMs (part 2 - getApps)

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7096:
-
Fix Version/s: 2.9.0

> Federation: routing ClientRM invocations transparently to multiple RMs (part 
> 2 - getApps)
> -
>
> Key: YARN-7096
> URL: https://issues.apache.org/jira/browse/YARN-7096
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 2.9.0, 3.0.0-beta1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6691:
-
Parent Issue: YARN-2915  (was: YARN-5597)

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6691-branch-2.v1.patch, 
> YARN-6691-YARN-2915.v1.patch, YARN-6691-YARN-2915.v2.patch, 
> YARN-6691-YARN-2915.v3.patch, YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduced a new YARN service, i.e. the Router. This jira proposes 
> to update the YARN daemon startup/shutdown scripts to include the Router 
> service.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7097) Federation: routing ClientRM invocations transparently to multiple RMs (part 5 - getNode/getNodes/getMetrics)

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-7097:
-
Fix Version/s: 2.9.0

> Federation: routing ClientRM invocations transparently to multiple RMs (part 
> 5 - getNode/getNodes/getMetrics)
> -
>
> Key: YARN-7097
> URL: https://issues.apache.org/jira/browse/YARN-7097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 2.9.0, 3.0.0-beta1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-09-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6691:
-
Fix Version/s: 2.9.0

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6691-branch-2.v1.patch, 
> YARN-6691-YARN-2915.v1.patch, YARN-6691-YARN-2915.v2.patch, 
> YARN-6691-YARN-2915.v3.patch, YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduced a new YARN service, i.e. the Router. This jira proposes 
> to update the YARN daemon startup/shutdown scripts to include the Router 
> service.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184648#comment-16184648
 ] 

Jian He commented on YARN-7202:
---

Could you also add the comments from your first comment on this jira to each 
test, explaining what each test does?

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) End-to-end UT for api-server

2017-09-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184640#comment-16184640
 ] 

Jian He commented on YARN-7202:
---

bq. If both the mocked version of ServiceClient and ApiServer are in agreement 
on the behavior, the ambiguity problem must originate from ServiceClient. This 
verification confirms that the unit test for ApiServer is done correctly, and 
the problem resides in ServiceClient.
It won't be easy to keep the behavior consistent across ServiceClientForTest 
and ServiceClient, because most of the time people will not look at the UTs 
unless a UT fails.

Anyway, I agree we can do the more end-to-end test separately.
Regarding the UT patch: for the changes in updateService, I saw you tried to 
serialize all the operations, which introduced quite a few if/else conditions. 
To keep it simple, for start and stop we can just return directly, since a 
request is expected to do only a start or a stop.
And the "// flex a single component app" scenario code will be removed later.

TestApiServer introduced PowerMock. Until now, Hadoop has not been using 
PowerMock; it uses Mockito. Some historical discussion on this happened in 
HADOOP-7537 and HADOOP-9122. To keep it consistent with the rest of the code 
base and less controversial, how about changing it to use Mockito? Or does any 
part of the UT have to rely on PowerMock?
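
For reference, a minimal Mockito-only sketch of that style (all names below 
are hypothetical stand-ins, not taken from the patch):

{noformat}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class TestApiServerSketch {
  // Stand-in interface for illustration; a real test would mock the actual
  // ServiceClient class directly.
  interface ServiceClient {
    int actionStop(String serviceName);
  }

  @Test
  public void stopDelegatesToServiceClient() {
    ServiceClient client = mock(ServiceClient.class);
    // ... wire 'client' into the ApiServer under test and invoke stop ...
    client.actionStop("app-1");          // stands in for the ApiServer call
    verify(client).actionStop("app-1");  // assert the delegation happened
  }
}
{noformat}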

> End-to-end UT for api-server
> 
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6691) Update YARN daemon startup/shutdown scripts to include Router service

2017-09-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184631#comment-16184631
 ] 

Arun Suresh commented on YARN-6691:
---

committed to branch-2. Thanks [~giovanni.fumarola]

> Update YARN daemon startup/shutdown scripts to include Router service
> -
>
> Key: YARN-6691
> URL: https://issues.apache.org/jira/browse/YARN-6691
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6691-branch-2.v1.patch, 
> YARN-6691-YARN-2915.v1.patch, YARN-6691-YARN-2915.v2.patch, 
> YARN-6691-YARN-2915.v3.patch, YARN-6691-YARN-2915.v4.patch
>
>
> YARN-5410 introduced a new YARN service, i.e. the Router. This jira proposes 
> to update the YARN daemon startup/shutdown scripts to include the Router 
> service.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184623#comment-16184623
 ] 

Hadoop QA commented on YARN-2497:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 6 new + 1001 unchanged - 29 fixed = 1007 total (was 1030) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit 

[jira] [Commented] (YARN-2037) Add restart support for Unmanaged AMs

2017-09-28 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184620#comment-16184620
 ] 

Subru Krishnan commented on YARN-2037:
--

[~botong], I was preparing to commit the patch when I realized something: we 
are adding work-preserving restart for UAMs, so we should update the Javadocs 
accordingly:
* In {{ApplicationMasterProtocol}}, to call out that for UnmanagedAM HA we now 
allow re-registration, based on the *ApplicationSubmissionContext#keepContainers* 
flag.
* In the getter/setter for *keepContainers* in {{ApplicationSubmissionContext}}, 
to say that UnmanagedAMs can now set it for work-preserving restart through 
re-registration.

Thanks.
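
For illustration, a minimal sketch of the usage the updated Javadocs would 
describe, built on the existing ApplicationSubmissionContext API (the 
submission wiring around it is left out):

{noformat}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.util.Records;

public class UamRestartSketch {
  // Build a submission context for an unmanaged AM that opts in to
  // work-preserving restart via the keepContainers flag.
  public static ApplicationSubmissionContext buildContext() {
    ApplicationSubmissionContext ctx =
        Records.newRecord(ApplicationSubmissionContext.class);
    ctx.setUnmanagedAM(true);
    // With this set, the RM keeps the app's containers across attempts, so
    // a re-registering UAM can resume them (the behavior this jira adds).
    ctx.setKeepContainersAcrossApplicationAttempts(true);
    return ctx;
  }
}
{noformat}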

> Add restart support for Unmanaged AMs
> -
>
> Key: YARN-2037
> URL: https://issues.apache.org/jira/browse/YARN-2037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Botong Huang
> Attachments: YARN-2037.v1.patch
>
>
> It would be nice to allow Unmanaged AMs also to restart in a work-preserving 
> way. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184618#comment-16184618
 ] 

Arun Suresh commented on YARN-7248:
---

[~jlowe], do you think the latest patch is fine? The test failures are 
unrelated.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned 
> to clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184617#comment-16184617
 ] 

Arun Suresh commented on YARN-7248:
---

Ok, pushed the required patches to branch-3.0, given that the beta1 branch is 
now cut. Thanks Andrew.


> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned 
> to clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7266) Timeline Server event handler threads locked

2017-09-28 Thread Venkata Puneet Ravuri (JIRA)
Venkata Puneet Ravuri created YARN-7266:
---

 Summary: Timeline Server event handler threads locked
 Key: YARN-7266
 URL: https://issues.apache.org/jira/browse/YARN-7266
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 2.7.3
Reporter: Venkata Puneet Ravuri


Event handlers for the Timeline Server seem to take a lock while parsing the 
HTTP headers of a request. This causes all other threads to wait and slows 
down the overall performance of the Timeline Server. We have ResourceManager 
metrics enabled to be sent to the Timeline Server. Because of the high load on 
the ResourceManager, the metrics to be sent are getting backlogged, which in 
turn increases the heap footprint of the ResourceManager (due to the pending 
metrics).

This is the complete stack trace of a blocked thread on the Timeline Server:
"2079644967@qtp-1658980982-4560" #4632 daemon prio=5 os_prio=0 
tid=0x7f6ba490a000 nid=0x5eb waiting for monitor entry [0x7f6b9142c000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector.prepare(AccessorInjector.java:82)
- waiting to lock <0x0005c0621860> (a java.lang.Class for 
com.sun.xml.bind.v2.runtime.reflect.opt.AccessorInjector)
at 
com.sun.xml.bind.v2.runtime.reflect.opt.OptimizedAccessorFactory.get(OptimizedAccessorFactory.java:168)
at 
com.sun.xml.bind.v2.runtime.reflect.Accessor$FieldReflection.optimize(Accessor.java:282)
at 
com.sun.xml.bind.v2.runtime.property.SingleElementNodeProperty.<init>(SingleElementNodeProperty.java:94)
at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at 
com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
at 
com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.<init>(ClassBeanInfoImpl.java:183)
at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:551)
at 
com.sun.xml.bind.v2.runtime.property.ArrayElementProperty.<init>(ArrayElementProperty.java:112)
at 
com.sun.xml.bind.v2.runtime.property.ArrayElementNodeProperty.<init>(ArrayElementNodeProperty.java:62)
at sun.reflect.GeneratedConstructorAccessor19.newInstance(Unknown 
Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown 
Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at 
com.sun.xml.bind.v2.runtime.property.PropertyFactory.create(PropertyFactory.java:128)
at 
com.sun.xml.bind.v2.runtime.ClassBeanInfoImpl.<init>(ClassBeanInfoImpl.java:183)
at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.getOrCreate(JAXBContextImpl.java:532)
at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:347)
at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
at 
com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
at javax.xml.bind.ContextFinder.newInstance(Unknown Source)
at javax.xml.bind.ContextFinder.find(Unknown Source)
at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
at javax.xml.bind.JAXBContext.newInstance(Unknown Source)
at 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:412)
at 
com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.createExternalGrammar(WadlGeneratorJAXBGrammarGenerator.java:352)
at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:115)
at 
com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104)
at 
com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120)
at 
com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
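
The (truncated) trace above shows the contention sits in JAXB context creation 
during WADL generation. A common mitigation for this pattern, offered here 
only as an assumption and not as the fix recorded on this jira, is to build 
the JAXBContext once and reuse it, since a JAXBContext is thread-safe but 
expensive to construct:

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;

public final class JaxbContextCache {
  // JAXBContext is thread-safe once built, so cache one per class instead of
  // rebuilding it (and taking the AccessorInjector lock) on every request.
  private static final Map<Class<?>, JAXBContext> CACHE =
      new ConcurrentHashMap<>();

  private JaxbContextCache() {
  }

  public static JAXBContext forClass(Class<?> type) throws JAXBException {
    try {
      return CACHE.computeIfAbsent(type, t -> {
        try {
          return JAXBContext.newInstance(t);
        } catch (JAXBException e) {
          // computeIfAbsent cannot throw checked exceptions, so tunnel it.
          throw new IllegalStateException(e);
        }
      });
    } catch (IllegalStateException e) {
      throw (JAXBException) e.getCause();
    }
  }
}
{noformat}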

[jira] [Assigned] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-09-28 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-7258:
-

Assignee: Arun Suresh  (was: kartheek muthyala)

> Add Node and Rack Hints to Opportunistic Scheduler
> --
>
> Key: YARN-7258
> URL: https://issues.apache.org/jira/browse/YARN-7258
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently, the Opportunistic Scheduler ignores the node and rack information 
> and allocates strictly on the least loaded node (based on queue length) at 
> the time it receives the request. This JIRA is to track the changes needed to 
> allow the OpportunisticContainerAllocator to take the node/rack name as a 
> hint. The flow would be (see the sketch below):
> # If the requested node is found in the top K least loaded nodes, allocate 
> on that node.
> # Else, allocate on the least loaded node on the same rack among the top K 
> least loaded nodes.
> # Else, allocate on the least loaded node.
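
A minimal sketch of that flow, using hypothetical types rather than the real 
allocator's classes:

{noformat}
import java.util.List;

final class NodeInfo {
  final String host;
  final String rack;

  NodeInfo(String host, String rack) {
    this.host = host;
    this.rack = rack;
  }
}

final class HintedAllocatorSketch {
  // topK is assumed to be sorted by ascending queue length, i.e. least
  // loaded node first.
  static NodeInfo pick(String nodeHint, String rackHint, List<NodeInfo> topK) {
    // 1. Exact node match within the top K least loaded nodes.
    for (NodeInfo n : topK) {
      if (n.host.equals(nodeHint)) {
        return n;
      }
    }
    // 2. Least loaded node on the requested rack within the top K set.
    for (NodeInfo n : topK) {
      if (n.rack.equals(rackHint)) {
        return n;
      }
    }
    // 3. Fall back to the globally least loaded node.
    return topK.isEmpty() ? null : topK.get(0);
  }
}
{noformat}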



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7258) Add Node and Rack Hints to Opportunistic Scheduler

2017-09-28 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-7258:
-

Assignee: kartheek muthyala  (was: Arun Suresh)

> Add Node and Rack Hints to Opportunistic Scheduler
> --
>
> Key: YARN-7258
> URL: https://issues.apache.org/jira/browse/YARN-7258
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: kartheek muthyala
>
> Currently, the Opportunistic Scheduler ignores the node and rack information 
> and allocates strictly on the least loaded node (based on queue length) at 
> the time it receives the request. This JIRA is to track the changes needed to 
> allow the OpportunisticContainerAllocator to take the node/rack name as a 
> hint. The flow would be:
> # If the requested node is found in the top K least loaded nodes, allocate 
> on that node.
> # Else, allocate on the least loaded node on the same rack among the top K 
> least loaded nodes.
> # Else, allocate on the least loaded node.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4727) Unable to override the $HADOOP_CONF_DIR env variable for container

2017-09-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4727:
--
Fix Version/s: (was: 3.0.0)
   3.0.0-beta1

> Unable to override the $HADOOP_CONF_DIR env variable for container
> --
>
> Key: YARN-4727
> URL: https://issues.apache.org/jira/browse/YARN-4727
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.4.1, 2.5.2, 2.7.2, 2.6.4, 2.8.1
>Reporter: Terence Yim
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0
>
> Attachments: YARN-4727.001.patch, YARN-4727.002.patch
>
>
> Given the default config of "yarn.nodemanager.env-whitelist", an application 
> should be able to set the env variable $HADOOP_CONF_DIR to a value other than 
> the one in the NodeManager system environment. However, I believe that due to 
> a bug in the 
> {{org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch}}
>  class, this is not possible.
> From the {{sanitizeEnv()}} method in the ContainerLaunch class 
> (https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L977)
> {noformat}
> putEnvIfNotNull(environment, 
> Environment.HADOOP_CONF_DIR.name(), 
> System.getenv(Environment.HADOOP_CONF_DIR.name())
> );
> if (!Shell.WINDOWS) {
>   environment.put("JVM_PID", "$$");
> }
> String[] whitelist = conf.get(YarnConfiguration.NM_ENV_WHITELIST, 
> YarnConfiguration.DEFAULT_NM_ENV_WHITELIST).split(",");
> 
> for(String whitelistEnvVariable : whitelist) {
>   putEnvIfAbsent(environment, whitelistEnvVariable.trim());
> }
> ...
> private static void putEnvIfAbsent(
> Map<String, String> environment, String variable) {
>   if (environment.get(variable) == null) {
> putEnvIfNotNull(environment, variable, System.getenv(variable));
>   }
> }
> {noformat}
> So there are two issues here.
> 1. The environment is already set with the system environment of the NM in 
> the {{putEnvIfNotNull}} call, hence the {{putEnvIfAbsent}} call will never 
> set it to a new value.
> 2. Inside the {{putEnvIfAbsent}} call, it uses the system environment of the 
> NM, whereas it should be using the one from the {{launchContext}} instead 
> (see the sketch below).
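
A minimal sketch of a fix for issue 2, offered as an assumption for 
illustration rather than the committed patch: consult the launch context's 
environment first and fall back to the NM's own only when the app set nothing.

{noformat}
import java.util.Map;

final class EnvWhitelistSketch {
  // Prefer the value the application supplied in its launch context; only
  // fall back to the NodeManager's own environment as a last resort.
  static void putEnvIfAbsent(Map<String, String> environment,
      Map<String, String> launchContextEnv, String variable) {
    if (environment.get(variable) == null) {
      String value = launchContextEnv.get(variable);
      if (value == null) {
        value = System.getenv(variable);
      }
      if (value != null) {
        environment.put(variable, value);
      }
    }
  }
}
{noformat}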



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184530#comment-16184530
 ] 

Andrew Wang commented on YARN-7248:
---

Gotcha, let's wait until post-beta1 then. Thanks Arun.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned 
> to clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-09-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184523#comment-16184523
 ] 

Jian He commented on YARN-7198:
---

And one comment we haven't addressed is persistent storage for containers. 
That feature is in the plan, and TBD.

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-09-28 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184519#comment-16184519
 ] 

Jian He commented on YARN-7198:
---

Hi [~aw], we added a more concrete httpd example in the documentation 
(Examples.md), along with other fixes in the doc.
If you could help check the documentation and also this patch, that would be 
great. We want to restart the vote after this patch is committed.

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184509#comment-16184509
 ] 

Hadoop QA commented on YARN-7246:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
54s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.8.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 28s{color} | 
{color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
42s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d0a0f24 |
| JIRA Issue | YARN-7246 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889531/YARN-7246-branch-2.8.2.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a660350d36de 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8.2 / 76e053a |
| Default Java | 1.7.0_151 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/17682/artifact/patchprocess/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17682/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17682/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184502#comment-16184502
 ] 

Hadoop QA commented on YARN-7198:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
39s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
53s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
20s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
26s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 47s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| 

[jira] [Commented] (YARN-6622) Document Docker work as experimental

2017-09-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184489#comment-16184489
 ] 

Andrew Wang commented on YARN-6622:
---

This was missed from branch-3.0; I cherry-picked it.

> Document Docker work as experimental
> 
>
> Key: YARN-6622
> URL: https://issues.apache.org/jira/browse/YARN-6622
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: documentation
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6622.001.patch
>
>
> We should update the Docker support documentation to call out the Docker 
> work as experimental.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7246) Fix the default docker binary path

2017-09-28 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7246:
--
Attachment: YARN-7246-branch-2.8.2.001.patch

> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-09-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184469#comment-16184469
 ] 

Shane Kumpf commented on YARN-7246:
---

Sorry for the back and forth on the patch uploads; another Jenkins run is going 
now with the correct patch. Please ignore the one above.

> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184465#comment-16184465
 ] 

Arun Suresh commented on YARN-7248:
---

I don't mind creating a branch-3.0 patch, but unfortunately, this will not 
cleanly apply to branch-3.0 without first getting in YARN-5292 (YARN-5216 and 
YARN-6059 might also be required) and YARN-7240. I was waiting for Andrew to 
get beta1 out before pushing those in.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned 
> to clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-09-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184457#comment-16184457
 ] 

Hadoop QA commented on YARN-7246:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
36s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 46s{color} | 
{color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 38s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.TestNodeStatusUpdaterForLabels |
|   | hadoop.yarn.server.nodemanager.TestNodeStatusUpdater |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerShutdown |
|   | org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerResync |
|   | org.apache.hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c2d96dd |
| JIRA Issue | YARN-7246 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12889510/YARN-7246-branch-2.8.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux e08620c60ede 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / fb5de93 |
| Default Java | 1.7.0_151 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/17679/artifact/patchprocess/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17679/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17679/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17679/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane 

[jira] [Commented] (YARN-7207) Cache the local host name when getting application list in RM

2017-09-28 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184437#comment-16184437
 ] 

Robert Kanter commented on YARN-7207:
-

I agree that it's a bad idea and that ideally the user should fix that kind of 
misconfiguration. However, users don't always do what's best for them :)
Even if this is covering up a host misconfiguration, if we have a simple fix 
that makes YARN not susceptible to the issue, I think that's a good thing. 
YARN-7263 should help warn the user of this misconfiguration, though really 
whatever monitoring software they're hopefully running should alert them 
about it.

In any case, it does seem unnecessary to constantly redo this lookup when the 
value shouldn't change. Thousands of fewer calls sounds like a good thing 
to me.



> Cache the local host name when getting application list in RM
> -
>
> Key: YARN-7207
> URL: https://issues.apache.org/jira/browse/YARN-7207
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7207.001.patch, YARN-7207.002.patch
>
>
> {{getLocalHostName()}} is invoked when generating the report for each 
> application, which means it is called 1000 times per {{getApplications()}} 
> call if there are 1000 apps in the RM. Some users have hit a performance 
> issue because {{getLocalHostName()}} is slow under some network environments.
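
A minimal sketch of the caching idea, as a hypothetical helper rather than the 
patch itself:

{noformat}
import java.net.InetAddress;
import java.net.UnknownHostException;

final class CachedHostName {
  private static volatile String cached;

  // Resolve the local host name once and reuse it, instead of resolving it
  // again for every application report.
  static String get() throws UnknownHostException {
    String name = cached;
    if (name == null) {
      name = InetAddress.getLocalHost().getCanonicalHostName();
      cached = name; // benign race: worst case the lookup runs twice
    }
    return name;
  }
}
{noformat}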



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-7248:
--
Target Version/s: 2.9.0, 3.0.0, 3.1.0  (was: 2.9.0, 3.1.0)

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned 
> to clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-6623:
--
Target Version/s: 2.9.0, 2.8.2, 3.0.0  (was: 2.9.0, 3.0.0-beta1, 2.8.2)

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.
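
For illustration, such a flag could look like the following entry in 
container-executor.cfg; the key name here is an assumption, pending the final 
patch:

{noformat}
[docker]
  module.enabled=true
  # Assumed key name, for illustration: refuse privileged containers at the
  # container-executor level, regardless of what the NM requests.
  docker.privileged-containers.enabled=false
{noformat}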



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2017-09-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184433#comment-16184433
 ] 

Andrew Wang commented on YARN-6623:
---

Thanks Varun. Considering this is targeted at 2.8.2, if we're okay taking that 
incompatibility for a maintenance release, I'm okay taking it between beta1 and 
GA. I'll update the target version, though if beta1 drags on, we can try and 
squeeze this in.

> Add support to turn off launching privileged containers in the 
> container-executor
> -
>
> Key: YARN-6623
> URL: https://issues.apache.org/jira/browse/YARN-6623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>Priority: Blocker
> Attachments: YARN-6623.001.patch, YARN-6623.002.patch, 
> YARN-6623.003.patch, YARN-6623.004.patch, YARN-6623.005.patch, 
> YARN-6623.006.patch, YARN-6623.007.patch, YARN-6623.008.patch, 
> YARN-6623.009.patch, YARN-6623.010.patch, YARN-6623.011.patch, 
> YARN-6623.012.patch, YARN-6623.013.patch
>
>
> Currently, launching privileged containers is controlled by the NM. We should 
> add a flag to the container-executor.cfg allowing admins to disable launching 
> privileged containers at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2017-09-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184425#comment-16184425
 ] 

Andrew Wang commented on YARN-7248:
---

Thanks for the ping. Please put it in branch-3.0. Given that the issue seems 
intermittent, I don't want to hold beta1. It's not the worst thing if we can't 
do a rolling upgrade from beta1->GA, as long as it fixes the 2.x->GA rolling 
upgrade.

> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned 
> to clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or to make incorrect assumptions, e.g. that any state != NEW and != 
> RUNNING must be COMPLETED, which was true in the older version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2497) Changes for fair scheduler to support allocate resource respect labels

2017-09-28 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-2497:
---
Attachment: YARN-2497.008.patch

See if this works better for you.

> Changes for fair scheduler to support allocate resource respect labels
> --
>
> Key: YARN-2497
> URL: https://issues.apache.org/jira/browse/YARN-2497
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Wangda Tan
>Assignee: Daniel Templeton
> Attachments: YARN-2497.001.patch, YARN-2497.002.patch, 
> YARN-2497.003.patch, YARN-2497.004.patch, YARN-2497.005.patch, 
> YARN-2497.006.patch, YARN-2497.007.patch, YARN-2497.008.patch, 
> YARN-2499.WIP01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


