[jira] [Commented] (YARN-4111) Killed application diagnostics message should be set rather having static mesage

2015-09-11 Thread nijel (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740313#comment-14740313
 ] 

nijel commented on YARN-4111:
-

Thanks [~sunilg] for the comments
bq. RMAppKilledAttemptEvent is used for both RMApp and RMAppAttempt. Name is 
slightly confusing. I think we can use this only for RMApp.
This is the same pattern as the failed and finished events, so I think this is OK.

bq. Also in RMAppAttempt, RMAppFailedAttemptEvent is changed to 
RMAppKilledAttemptEvent. Could we generalize RMAppFailedAttemptEvent for both 
Failed and Killed, and it can also take diagnostics.
Before this fix, the failed event was raised with KILLED as the state. Since the 
new kill event is now available, it has been changed.
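For illustration only, here is a minimal sketch of the idea under discussion: a 
kill event that carries its own diagnostics text instead of relying on a static 
message. The class and field names below are illustrative, not the ones used in 
the patch.

{code:java}
// Minimal sketch (not the actual patch): a kill event that carries its own
// diagnostics string, so the app state machine can record why it was killed.
public class AppKilledEventSketch {
  private final String applicationId;
  private final String diagnostics;  // e.g. "Application killed by user." or
                                      // "Application killed by scheduler."

  public AppKilledEventSketch(String applicationId, String diagnostics) {
    this.applicationId = applicationId;
    this.diagnostics = diagnostics;
  }

  public String getApplicationId() {
    return applicationId;
  }

  public String getDiagnostics() {
    return diagnostics;
  }
}
{code}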

> Killed application diagnostics message should be set rather having static 
> mesage
> 
>
> Key: YARN-4111
> URL: https://issues.apache.org/jira/browse/YARN-4111
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: nijel
> Attachments: YARN-4111_1.patch, YARN-4111_2.patch
>
>
> An application can be killed either by the *user via ClientRMService* OR *from 
> the scheduler*. Currently the diagnostic message is set statically, i.e. 
> {{Application killed by user.}}, regardless of whether the application was 
> killed by the scheduler. This confuses the user after the application is 
> killed: he did not kill the application at all, yet the diagnostic message 
> says 'application is killed by user'.
> It would be useful if the diagnostic message were different for each cause of 
> KILL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4111) Killed application diagnostics message should be set rather having static mesage

2015-09-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated YARN-4111:

Attachment: YARN-4111_3.patch

Updated javadoc comments

> Killed application diagnostics message should be set rather having static 
> mesage
> 
>
> Key: YARN-4111
> URL: https://issues.apache.org/jira/browse/YARN-4111
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: nijel
> Attachments: YARN-4111_1.patch, YARN-4111_2.patch, YARN-4111_3.patch
>
>
> An application can be killed either by the *user via ClientRMService* OR *from 
> the scheduler*. Currently the diagnostic message is set statically, i.e. 
> {{Application killed by user.}}, regardless of whether the application was 
> killed by the scheduler. This confuses the user after the application is 
> killed: he did not kill the application at all, yet the diagnostic message 
> says 'application is killed by user'.
> It would be useful if the diagnostic message were different for each cause of 
> KILL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741266#comment-14741266
 ] 

Hadoop QA commented on YARN-4126:
-

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  4s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 4 new or modified test files. |
| {color:green}+1{color} | javac |   8m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 52s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |  55m 22s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  95m 28s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755420/0006-YARN-4126.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ca0827a |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9090/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9090/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9090/console |


This message was automatically generated.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3301) Fix the format issue of the new RM web UI and AHS web UI after YARN-3272 / YARN-3262

2015-09-11 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-3301:
--
Summary: Fix the format issue of the new RM web UI and AHS web UI after 
YARN-3272 / YARN-3262  (was: Fix the format issue of the new RM web UI and AHS 
web UI)

> Fix the format issue of the new RM web UI and AHS web UI after YARN-3272 / 
> YARN-3262
> 
>
> Key: YARN-3301
> URL: https://issues.apache.org/jira/browse/YARN-3301
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.1
>
> Attachments: Screen Shot 2015-04-21 at 5.09.25 PM.png, Screen Shot 
> 2015-04-21 at 5.38.39 PM.png, YARN-3301.1.patch, YARN-3301.2.patch, 
> YARN-3301.3.patch, YARN-3301.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741314#comment-14741314
 ] 

Anubhav Dhoot commented on YARN-4115:
-

The test failure looks unrelated. 

> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Attachments: YARN-4115.001.patch
>
>
> We see log spam such as: Aug 28, 1:57:52.441 PM INFO 
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041
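For illustration, a minimal sketch of the kind of change being proposed: move 
the "Opening proxy" message to DEBUG so it no longer spams the logs. The class 
and method names are illustrative; the actual patch may differ.

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ProxyLoggingSketch {
  private static final Log LOG = LogFactory.getLog(ProxyLoggingSketch.class);

  void openProxy(String address) {
    // Guard the message at DEBUG level instead of logging it at INFO,
    // so it only appears when debug logging is enabled.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Opening proxy : " + address);
    }
    // ... actually open the proxy here ...
  }
}
{code}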



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741212#comment-14741212
 ] 

Hadoop QA commented on YARN-3717:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  25m 37s | Pre-patch trunk has 7 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  7s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  6s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 45s | The applied patch generated  3 
new checkstyle issues (total was 16, now 18). |
| {color:green}+1{color} | whitespace |   0m 20s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   4m 17s | Post-patch findbugs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 compilation is broken. |
| {color:red}-1{color} | findbugs |   4m 31s | Post-patch findbugs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
compilation is broken. |
| {color:red}-1{color} | findbugs |   4m 45s | Post-patch findbugs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 compilation is broken. |
| {color:green}+1{color} | findbugs |   4m 45s | The patch does not introduce 
any new Findbugs (version ) warnings. |
| {color:red}-1{color} | yarn tests |   0m 14s | Tests failed in 
hadoop-yarn-api. |
| {color:red}-1{color} | yarn tests |   0m 13s | Tests failed in 
hadoop-yarn-client. |
| {color:red}-1{color} | yarn tests |   0m 20s | Tests failed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |   0m 13s | Tests failed in 
hadoop-yarn-server-applicationhistoryservice. |
| {color:red}-1{color} | yarn tests |   0m 14s | Tests failed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |   0m 19s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  59m 42s | |
\\
\\
|| Reason || Tests ||
| Failed build | hadoop-yarn-api |
|   | hadoop-yarn-client |
|   | hadoop-yarn-common |
|   | hadoop-yarn-server-applicationhistoryservice |
|   | hadoop-yarn-server-common |
|   | hadoop-yarn-server-resourcemanager |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755427/YARN-3717.20150911-1.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / ca0827a |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-common.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-applicationhistoryservice test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/testrun_hadoop-yarn-server-applicationhistoryservice.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9091/console |


This message was automatically generated.

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
>

[jira] [Assigned] (YARN-3273) Improve web UI to facilitate scheduling analysis and debugging

2015-09-11 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot reassigned YARN-3273:
---

Assignee: Anubhav Dhoot  (was: Rohith Sharma K S)

> Improve web UI to facilitate scheduling analysis and debugging
> --
>
> Key: YARN-3273
> URL: https://issues.apache.org/jira/browse/YARN-3273
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
>Assignee: Anubhav Dhoot
> Fix For: 2.7.0
>
> Attachments: 0001-YARN-3273-v1.patch, 0001-YARN-3273-v2.patch, 
> 0002-YARN-3273.patch, 0003-YARN-3273.patch, 0003-YARN-3273.patch, 
> 0004-YARN-3273.patch, YARN-3273-am-resource-used-AND-User-limit-v2.PNG, 
> YARN-3273-am-resource-used-AND-User-limit.PNG, 
> YARN-3273-application-headroom-v2.PNG, YARN-3273-application-headroom.PNG
>
>
> A job may be stuck for reasons such as:
> - hitting queue capacity 
> - hitting the user limit 
> - hitting the AM-resource-percentage 
> The first one, queueCapacity, is already shown on the UI.
> We may surface things like:
> - what the user's current usage and user limit are; 
> - what the AM resource usage and limit are;
> - what the application's current headroom is;
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741320#comment-14741320
 ] 

Naganarasimha G R commented on YARN-3717:
-

Thanks [~gtCarrera9].
Planning to do it in either 4068 or 4129.
By the way, if you have the rights, can you kick off Jenkins for this JIRA again?

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
> Key: YARN-3717
> URL: https://issues.apache.org/jira/browse/YARN-3717
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: 3717_cluster_test_snapshots.zip, RMLogsForHungJob.log, 
> YARN-3717.20150822-1.patch, YARN-3717.20150824-1.patch, 
> YARN-3717.20150825-1.patch, YARN-3717.20150826-1.patch, 
> YARN-3717.20150911-1.patch
>
>
> 1> Add the default node-label expression for each queue in the scheduler page.
> 2> In the Application/Appattempt page, show the app's configured node label 
> expression for the AM and the Job



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4149) yarn logs -am should provide an option to fetch all the log files

2015-09-11 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-4149:

Attachment: YARN-4149.001.patch

Allow "ALL" to be passed as an option for the logFiles argument, which in  turn 
leads to all the log files for the AM to be fetched.

[~xgong] - can you please review?
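Assuming the "ALL" value described above, usage would look something like the 
following (the exact behavior depends on the final patch):

yarn logs -applicationId application_1437098194051_0178 -am ALL -logFiles ALL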

> yarn logs -am should provide an option to fetch all the log files
> -
>
> Key: YARN-4149
> URL: https://issues.apache.org/jira/browse/YARN-4149
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, nodemanager
>Affects Versions: 2.7.1
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-4149.001.patch
>
>
> From [~gopalv] -
> {quote}
> Trying to collect a hanging Tez AM logs, by killing the container and running 
> yarn logs -applicationId application_1437098194051_0178 -am ALL
> The output contains only one log file, which does not contain any of the 
> actual execution logs, only the initialization logs.
> From YARN-3347, I note that
>   // if we do not specify the value for CONTAINER_LOG_FILES option,
>  // we will only output syslog
> This means that the person calling the yarn logs command has to list it out 
> like this, to collect logs 
> yarn logs -applicationId application_1437098194051_0178 -am ALL -logFiles \
> syslog_dag_1437098194051_0178_2_post,\
> dag_1437098194051_0178_2-tez-dag.pb.txt,\
> syslog_dag_1437098194051_0178_2,\
> syslog_dag_1437098194051_0178_1_post,\
> syslog_dag_1437098194051_0178_1,\
> syslog,\
> stdout,\
> stderr,\
> dag_1437098194051_0178_2.dot,\
> dag_1437098194051_0178_1.dot,\
> dag_1437098194051_0178_1-tez-dag.pb.txt
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741249#comment-14741249
 ] 

Li Lu commented on YARN-3717:
-

bq. For ATS V2 planning to raise a subjira (or if possible in one of the 
existing jiras) under YARN-2928.
Sure, thanks for the work! Let's first try to squeeze this change into one of 
the existing JIRAs. If that's hard, feel free to open a new one.

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
> Key: YARN-3717
> URL: https://issues.apache.org/jira/browse/YARN-3717
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: 3717_cluster_test_snapshots.zip, RMLogsForHungJob.log, 
> YARN-3717.20150822-1.patch, YARN-3717.20150824-1.patch, 
> YARN-3717.20150825-1.patch, YARN-3717.20150826-1.patch, 
> YARN-3717.20150911-1.patch
>
>
> 1> Add the default node-label expression for each queue in the scheduler page.
> 2> In the Application/Appattempt page, show the app's configured node label 
> expression for the AM and the Job



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741310#comment-14741310
 ] 

Robert Kanter commented on YARN-4145:
-

+1 LGTM

> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4142) add a way for an attempt to report an attempt failure

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741095#comment-14741095
 ] 

Steve Loughran commented on YARN-4142:
--

Exactly; I'm thinking we could do with some intermediate message on AM attempt 
failure.

As an example, the [Spark 
AM|https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L358]
 logs a warning about a problem, but can't report anything until the final exit.

> add a way for an attempt to report an attempt failure
> -
>
> Key: YARN-4142
> URL: https://issues.apache.org/jira/browse/YARN-4142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Currently AMs can report a failure with exit code and diagnostics text —but 
> only when exiting to a failed state. If the AM terminates for any other 
> reason there's no information held in the RM, just the logs somewhere —and we 
> know they don't always last.
> When an application explicitly terminates an attempt, it would be nice if it 
> could  optionally report something to the RM before it exited. The most 
> recent set of these could then be included in Application Reports, so 
> allowing client apps to count attempt failures and get exit details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3301) Fix the format issue of the new RM web UI and AHS web UI

2015-09-11 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-3301:
--
Fix Version/s: 2.7.1

[~jianhe] missed setting the fix-version at commit-time, setting it.

> Fix the format issue of the new RM web UI and AHS web UI
> 
>
> Key: YARN-3301
> URL: https://issues.apache.org/jira/browse/YARN-3301
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.7.1
>
> Attachments: Screen Shot 2015-04-21 at 5.09.25 PM.png, Screen Shot 
> 2015-04-21 at 5.38.39 PM.png, YARN-3301.1.patch, YARN-3301.2.patch, 
> YARN-3301.3.patch, YARN-3301.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741318#comment-14741318
 ] 

Robert Kanter commented on YARN-4115:
-

+1 LGTM

> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Attachments: YARN-4115.001.patch
>
>
> We see log spam such as: Aug 28, 1:57:52.441 PM INFO 
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3418) AM to be able to set/update web URL and IPC ports post-registration

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741067#comment-14741067
 ] 

Steve Loughran commented on YARN-3418:
--

...catching up on this. I quite like the idea of reusing the same message.

# no new messages, protobuf pain...
# API is available to all apps built against Hadoop 2.2+ JARs: no linking 
problems

> AM to be able to set/update web URL and IPC ports post-registration
> ---
>
> Key: YARN-3418
> URL: https://issues.apache.org/jira/browse/YARN-3418
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Rohith Sharma K S
>
> Currently the AM can only set the IPC and HTTP(s) ports on AM registration.
> This
> # creates a possible race condition: the IPC and HTTP ports need to come up 
> before the app is fully initialised. This is particularly true on 
> work-preserving AM restarts, as the AM will depend on the list of containers 
> supplied during registration to build its internal state. 
> # prevents the AM from changing these values dynamically during application 
> execution. This matters if the Web or IPC services are hosted not in the AM, 
> but in a deployed container. If the container is restarted, there's no way to 
> rebind the services. 
> A new AM-RM IPC call to publish updated binding information is needed.
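A purely hypothetical sketch of the kind of AM-to-RM call the issue asks for; no 
such API exists today, and all names below are made up.

{code:java}
// Hypothetical only: illustrates the shape of a post-registration update call.
public interface AmBindingUpdateSketch {
  /**
   * Let a running AM publish a new tracking URL and IPC endpoint to the RM
   * after registration, e.g. when the web/IPC service lives in a container
   * that has been restarted elsewhere.
   */
  void updateBindingInfo(String trackingUrl, String ipcHost, int ipcPort);
}
{code}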



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4149) yarn logs -am should provide an option to fetch all the log files

2015-09-11 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-4149:
---

 Summary: yarn logs -am should provide an option to fetch all the 
log files
 Key: YARN-4149
 URL: https://issues.apache.org/jira/browse/YARN-4149
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client, nodemanager
Affects Versions: 2.7.1
Reporter: Varun Vasudev
Assignee: Varun Vasudev


From [~gopalv] -

{quote}
Trying to collect a hanging Tez AM logs, by killing the container and running 
yarn logs -applicationId application_1437098194051_0178 -am ALL

The output contains only one log file, which does not contain any of the actual 
execution logs, only the initialization logs.

From YARN-3347, I note that

  // if we do not specify the value for CONTAINER_LOG_FILES option,
 // we will only output syslog

This means that the person calling the yarn logs command has to list it out 
like this, to collect logs 

yarn logs -applicationId application_1437098194051_0178 -am ALL -logFiles \
syslog_dag_1437098194051_0178_2_post,\
dag_1437098194051_0178_2-tez-dag.pb.txt,\
syslog_dag_1437098194051_0178_2,\
syslog_dag_1437098194051_0178_1_post,\
syslog_dag_1437098194051_0178_1,\
syslog,\
stdout,\
stderr,\
dag_1437098194051_0178_2.dot,\
dag_1437098194051_0178_1.dot,\
dag_1437098194051_0178_1-tez-dag.pb.txt

{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741309#comment-14741309
 ] 

Anubhav Dhoot commented on YARN-4145:
-

The timed out tests are not related to this base class.

> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740521#comment-14740521
 ] 

Hudson commented on YARN-4136:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2319 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2319/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container 
> -but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4147) create a yarn.troubleshooting log for logging app launching problems

2015-09-11 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-4147:


 Summary: create a yarn.troubleshooting log for logging app 
launching problems
 Key: YARN-4147
 URL: https://issues.apache.org/jira/browse/YARN-4147
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Affects Versions: 2.7.1
Reporter: Steve Loughran
Priority: Minor


We can do more logging in the RM and NMs of what's going on with containers, 
but it gets very noisy in a large cluster.

I propose an {{org.apache.hadoop.yarn.troubleshooting}} log to which the YARN 
services can log at debug level; by default these messages would not get 
printed. Turn that log up to DEBUG and you get the troubleshooting info without 
the low-level debug messages from everything else.
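For illustration, a minimal sketch of how a YARN service could write to such a 
logger. Only the logger name comes from the proposal above; the class and method 
names are illustrative.

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class TroubleshootingLogSketch {
  // Dedicated, well-known logger name; silent unless its level is set to DEBUG.
  private static final Log TROUBLESHOOTING =
      LogFactory.getLog("org.apache.hadoop.yarn.troubleshooting");

  void onContainerLaunch(String containerId, String details) {
    if (TROUBLESHOOTING.isDebugEnabled()) {
      TROUBLESHOOTING.debug("Launching container " + containerId + ": " + details);
    }
  }
}
{code}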





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-313:
-
Attachment: YARN-313-v10.patch

Tackling [~djp]'s comments.

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4146) getServiceState command is missing in yarnadmin command help

2015-09-11 Thread nijel (JIRA)
nijel created YARN-4146:
---

 Summary: getServiceState command  is missing in yarnadmin command 
help
 Key: YARN-4146
 URL: https://issues.apache.org/jira/browse/YARN-4146
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: nijel
Assignee: nijel
Priority: Minor


The getServiceState command is not mentioned in the yarnadmin command help.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3933) Race condition when calling AbstractYarnScheduler.completedContainer.

2015-09-11 Thread Shiwei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740520#comment-14740520
 ] 

Shiwei Guo commented on YARN-3933:
--

[~djp], would you like to review the patch, or give me some pointers on what to 
do next?

> Race condition when calling AbstractYarnScheduler.completedContainer.
> -
>
> Key: YARN-3933
> URL: https://issues.apache.org/jira/browse/YARN-3933
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0, 2.7.0, 2.5.2, 2.7.1
>Reporter: Lavkesh Lahngir
>Assignee: Shiwei Guo
>  Labels: patch
> Attachments: YARN-3933.001.patch
>
>
> In our cluster we are seeing available memory and cores go negative. 
> Initial inspection:
> Scenario no. 1: 
> In the capacity scheduler, the method allocateContainersToNode() checks 
> whether there are excess reservations of containers for an application that 
> are no longer needed, and if so calls queue.completedContainer(), which causes 
> the resources to go negative even though they were never assigned in the 
> first place. 
> I am still looking through the code. Can somebody suggest how to simulate 
> excess container assignments?
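As a toy illustration of the accounting symptom (not the scheduler's real code): 
"completing" a container that was never actually charged drives the counter 
negative, which is the kind of inconsistency reported above. All names here are 
illustrative.

{code:java}
// Toy model only: a single usage counter standing in for the queue's accounting.
public class SpuriousReleaseSketch {
  private int usedMB = 0;

  void allocateContainer(int mb)  { usedMB += mb; }   // charge on allocation
  void completedContainer(int mb) { usedMB -= mb; }   // release on completion

  public static void main(String[] args) {
    SpuriousReleaseSketch queue = new SpuriousReleaseSketch();
    // An excess reservation is "completed" although it was never allocated:
    queue.completedContainer(1024);
    System.out.println(queue.usedMB);   // prints -1024
  }
}
{code}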



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740323#comment-14740323
 ] 

Inigo Goiri commented on YARN-313:
--

The patch I'm going to submit more or less fixes all of them; some comments:
# Not sure how to handle what is Unstable now, so I just changed Stable to 
Evolving and left the others. It's not very consistent between Response and 
Request.
# Done.
# I originally skipped those for consistency with all the other similar 
classes/functions; none of them had comments and all the others have the 
"public". Anyway, I fixed them.

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, YARN-313-v5.patch, 
> YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4111) Killed application diagnostics message should be set rather having static mesage

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740437#comment-14740437
 ] 

Hadoop QA commented on YARN-4111:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 34s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   9m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 28s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  1s | The applied patch generated  4 
new checkstyle issues (total was 299, now 301). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 49s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  51m 22s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 100m 20s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
 |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestRM |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755346/YARN-4111_3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f103a70 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9086/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9086/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9086/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9086/console |


This message was automatically generated.

> Killed application diagnostics message should be set rather having static 
> mesage
> 
>
> Key: YARN-4111
> URL: https://issues.apache.org/jira/browse/YARN-4111
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: nijel
> Attachments: YARN-4111_1.patch, YARN-4111_2.patch, YARN-4111_3.patch
>
>
> An application can be killed either by the *user via ClientRMService* OR *from 
> the scheduler*. Currently the diagnostic message is set statically, i.e. 
> {{Application killed by user.}}, regardless of whether the application was 
> killed by the scheduler. This confuses the user after the application is 
> killed: he did not kill the application at all, yet the diagnostic message 
> says 'application is killed by user'.
> It would be useful if the diagnostic message were different for each cause of 
> KILL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4111) Killed application diagnostics message should be set rather having static mesage

2015-09-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel updated YARN-4111:

Attachment: YARN-4111_4.patch

Updated the javadoc for the missing "."

The test skip is not related to the patch.

> Killed application diagnostics message should be set rather having static 
> mesage
> 
>
> Key: YARN-4111
> URL: https://issues.apache.org/jira/browse/YARN-4111
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: nijel
> Attachments: YARN-4111_1.patch, YARN-4111_2.patch, YARN-4111_3.patch, 
> YARN-4111_4.patch
>
>
> An application can be killed either by the *user via ClientRMService* OR *from 
> the scheduler*. Currently the diagnostic message is set statically, i.e. 
> {{Application killed by user.}}, regardless of whether the application was 
> killed by the scheduler. This confuses the user after the application is 
> killed: he did not kill the application at all, yet the diagnostic message 
> says 'application is killed by user'.
> It would be useful if the diagnostic message were different for each cause of 
> KILL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4146) getServiceState command is missing in yarnadmin command help

2015-09-11 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740514#comment-14740514
 ] 

Bibin A Chundatt commented on YARN-4146:


[~nijel], could you please check whether it is there in HA mode?

> getServiceState command  is missing in yarnadmin command help
> -
>
> Key: YARN-4146
> URL: https://issues.apache.org/jira/browse/YARN-4146
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
>  Labels: help, script
>
> The getServiceState command is not mentioned in the yarnadmin command help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740465#comment-14740465
 ] 

Hadoop QA commented on YARN-313:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 17s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 13s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 14s | The applied patch generated  1 
new checkstyle issues (total was 230, now 230). |
| {color:green}+1{color} | whitespace |   0m  8s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 44s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |   0m 23s | Tests passed in 
hadoop-yarn-api. |
| {color:green}+1{color} | yarn tests |   6m 56s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   1m 59s | Tests passed in 
hadoop-yarn-common. |
| {color:green}+1{color} | yarn tests |  54m 24s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | | 113m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755350/YARN-313-v10.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f103a70 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9087/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9087/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9087/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9087/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9087/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9087/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9087/console |


This message was automatically generated.

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740459#comment-14740459
 ] 

Hudson commented on YARN-4136:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8433 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8433/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container 
> -but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740501#comment-14740501
 ] 

Hudson commented on YARN-4136:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #377 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/377/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container 
> -but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-4146) getServiceState command is missing in yarnadmin command help

2015-09-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel resolved YARN-4146.
-
Resolution: Invalid

Sorry, my env was in non-HA mode!

> getServiceState command  is missing in yarnadmin command help
> -
>
> Key: YARN-4146
> URL: https://issues.apache.org/jira/browse/YARN-4146
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
>Priority: Minor
>  Labels: help, script
>
> The getServiceState command is not mentioned in the yarnadmin command help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4146) getServiceState command is missing in yarnadmin command help

2015-09-11 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel reassigned YARN-4146:
---

Assignee: (was: nijel)

> getServiceState command  is missing in yarnadmin command help
> -
>
> Key: YARN-4146
> URL: https://issues.apache.org/jira/browse/YARN-4146
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: nijel
>Priority: Minor
>  Labels: help, script
>
> The getServiceState command is not mentioned in the yarnadmin command help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4150) Failure in TestNMClient because nodereports were not available

2015-09-11 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot updated YARN-4150:

Description: 
Saw a failure in a test run
https://builds.apache.org/job/PreCommit-YARN-Build/9010/testReport/

java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.hadoop.yarn.client.api.impl.TestNMClient.allocateContainers(TestNMClient.java:244)
at 
org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:210)

  was:
Saw a failure in a test run



> Failure in TestNMClient because nodereports were not available
> --
>
> Key: YARN-4150
> URL: https://issues.apache.org/jira/browse/YARN-4150
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>
> Saw a failure in a test run
> https://builds.apache.org/job/PreCommit-YARN-Build/9010/testReport/
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.allocateContainers(TestNMClient.java:244)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:210)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-313:
-
Attachment: YARN-313-v11.patch

RefreshResource -> RefreshNodesResources

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v11.patch, YARN-313-v2.patch, YARN-313-v3.patch, 
> YARN-313-v4.patch, YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, 
> YARN-313-v8.patch, YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4150) Failure in TestNMClient because nodereports were not available

2015-09-11 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot updated YARN-4150:

Attachment: YARN-4150.001.patch

Simple fix: wait for the NodeManagers to be up before trying to get the node 
reports.
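A sketch of one way to do that wait (not necessarily the actual patch): 'client' 
is assumed to be a started YarnClient, 'nodeCount' the expected number of 
NodeManagers, and the method name is illustrative.

{code:java}
// Sketch: poll until the expected number of NodeManagers have registered
// before asking for node reports.
private void waitForNodeManagers(
    org.apache.hadoop.yarn.client.api.YarnClient client, int nodeCount)
    throws Exception {
  long deadline = System.currentTimeMillis() + 60000L;
  while (client.getNodeReports(
      org.apache.hadoop.yarn.api.records.NodeState.RUNNING).size() < nodeCount) {
    if (System.currentTimeMillis() > deadline) {
      throw new IllegalStateException("NodeManagers did not register in time");
    }
    Thread.sleep(100);
  }
}
{code}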

> Failure in TestNMClient because nodereports were not available
> --
>
> Key: YARN-4150
> URL: https://issues.apache.org/jira/browse/YARN-4150
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-4150.001.patch
>
>
> Saw a failure in a test run
> https://builds.apache.org/job/PreCommit-YARN-Build/9010/testReport/
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.allocateContainers(TestNMClient.java:244)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:210)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4149) yarn logs -am should provide an option to fetch all the log files

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741345#comment-14741345
 ] 

Hadoop QA commented on YARN-4149:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 30s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 48s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 11s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 52s | The applied patch generated  2 
new checkstyle issues (total was 40, now 41). |
| {color:red}-1{color} | checkstyle |   1m 11s | The applied patch generated  6 
new checkstyle issues (total was 19, now 25). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m  9s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |   6m 54s | Tests failed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   7m 41s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| | |  55m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755446/YARN-4149.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9092/artifact/patchprocess/diffcheckstylehadoop-yarn-client.txt
 
https://builds.apache.org/job/PreCommit-YARN-Build/9092/artifact/patchprocess/diffcheckstylehadoop-yarn-server-nodemanager.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9092/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-server-nodemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9092/artifact/patchprocess/testrun_hadoop-yarn-server-nodemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9092/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9092/console |


This message was automatically generated.

> yarn logs -am should provide an option to fetch all the log files
> -
>
> Key: YARN-4149
> URL: https://issues.apache.org/jira/browse/YARN-4149
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client, nodemanager
>Affects Versions: 2.7.1
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-4149.001.patch
>
>
> From [~gopalv] -
> {quote}
> Trying to collect a hanging Tez AM logs, by killing the container and running 
> yarn logs -applicationId application_1437098194051_0178 -am ALL
> The output contains only one log file, which does not contain any of the 
> actual execution logs, only the initialization logs.
> From YARN-3347, I note that
>   // if we do not specify the value for CONTAINER_LOG_FILES option,
>  // we will only output syslog
> This means that the person calling the yarn logs command has to list it out 
> like this, to collect logs 
> yarn logs -applicationId application_1437098194051_0178 -am ALL -logFiles \
> syslog_dag_1437098194051_0178_2_post,\
> dag_1437098194051_0178_2-tez-dag.pb.txt,\
> syslog_dag_1437098194051_0178_2,\
> syslog_dag_1437098194051_0178_1_post,\
> syslog_dag_1437098194051_0178_1,\
> syslog,\
> stdout,\
> stderr,\
> dag_1437098194051_0178_2.dot,\
> dag_1437098194051_0178_1.dot,\
> dag_1437098194051_0178_1-tez-dag.pb.txt
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-1651:
-
Attachment: YARN-1651-7.YARN-1197.patch

Attached ver.7 patch, which addresses all comments.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch, YARN-1651-7.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741371#comment-14741371
 ] 

Wangda Tan commented on YARN-1651:
--

Also rebased YARN-1197 branch to latest trunk.

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch, YARN-1651-7.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741441#comment-14741441
 ] 

Hudson commented on YARN-4115:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8436 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8436/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* hadoop-yarn-project/CHANGES.txt


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spams of Aug 28, 1:57:52.441 PMINFO
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4150) Failure in TestNMClient because nodereports were not available

2015-09-11 Thread Anubhav Dhoot (JIRA)
Anubhav Dhoot created YARN-4150:
---

 Summary: Failure in TestNMClient because nodereports were not 
available
 Key: YARN-4150
 URL: https://issues.apache.org/jira/browse/YARN-4150
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Anubhav Dhoot
Assignee: Anubhav Dhoot


Saw a failure in a test run




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741331#comment-14741331
 ] 

Wangda Tan commented on YARN-3717:
--

Rekicked Jenkins.

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
> Key: YARN-3717
> URL: https://issues.apache.org/jira/browse/YARN-3717
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: 3717_cluster_test_snapshots.zip, RMLogsForHungJob.log, 
> YARN-3717.20150822-1.patch, YARN-3717.20150824-1.patch, 
> YARN-3717.20150825-1.patch, YARN-3717.20150826-1.patch, 
> YARN-3717.20150911-1.patch
>
>
> 1> Add the default-node-Label expression for each queue in scheduler page.
> 2> In Application/Appattempt page  show the app configured node label 
> expression for AM and Job



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3985) Make ReservationSystem persist state using RMStateStore reservation APIs

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741479#comment-14741479
 ] 

Hadoop QA commented on YARN-3985:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 55s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 12 new or modified test files. |
| {color:green}+1{color} | javac |   8m  6s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 55s | The applied patch generated  1 
new checkstyle issues (total was 23, now 22). |
| {color:green}+1{color} | whitespace |   0m  6s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  54m 52s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  95m  7s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.reservation.planning.TestAlignedPlanner |
|   | 
hadoop.yarn.server.resourcemanager.reservation.planning.TestSimpleCapacityReplanner
 |
|   | 
hadoop.yarn.server.resourcemanager.reservation.planning.TestGreedyReservationAgent
 |
|   | hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755287/YARN-3985.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9093/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9093/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9093/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9093/console |


This message was automatically generated.

> Make ReservationSystem persist state using RMStateStore reservation APIs 
> -
>
> Key: YARN-3985
> URL: https://issues.apache.org/jira/browse/YARN-3985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3985.001.patch
>
>
> YARN-3736 adds the RMStateStore apis to store and load reservation state. 
> This jira adds the actual storing of state from ReservationSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741334#comment-14741334
 ] 

Anubhav Dhoot commented on YARN-4115:
-

The test passes for me locally. Opened YARN-4150 to fix the test, which 
seems to have a race.

> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Attachments: YARN-4115.001.patch
>
>
> We see log spams of Aug 28, 1:57:52.441 PMINFO
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4150) Failure in TestNMClient because nodereports were not available

2015-09-11 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741338#comment-14741338
 ] 

Anubhav Dhoot commented on YARN-4150:
-

This is most likely due to the test reading the node reports before the 
nodemanagers are ready.

> Failure in TestNMClient because nodereports were not available
> --
>
> Key: YARN-4150
> URL: https://issues.apache.org/jira/browse/YARN-4150
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>
> Saw a failure in a test run
> https://builds.apache.org/job/PreCommit-YARN-Build/9010/testReport/
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.allocateContainers(TestNMClient.java:244)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:210)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741489#comment-14741489
 ] 

Hudson commented on YARN-4115:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #380 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/380/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spams of Aug 28, 1:57:52.441 PMINFO
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741490#comment-14741490
 ] 

Hudson commented on YARN-4145:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #380 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/380/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741442#comment-14741442
 ] 

Hudson commented on YARN-4145:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8436 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8436/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3901) Populate flow run data in the flow_run & flow activity tables

2015-09-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741507#comment-14741507
 ] 

Sangjin Lee commented on YARN-3901:
---

Thanks [~vrushalic] for updating the patch! I did a quick review (I know you'll 
be making a few more changes for findbugs, etc.) and wanted to share feedback.

(ColumnHelper.java)
- l.70: if the timestamp is null, the *current* timestamp (not server) is used, 
right? So we should update this comment?
- l.99,104: let's use primitive long over object Long
- l.99: does this need to be non-private?

(FlowRunCoprocessor.java)
- l.146: Since {{Cell.getTimestamp()}} returns a primitive long, it will never 
be a null Long object. I remember [~jrottinghuis] mentioning that an unset 
timestamp is equivalent to {{Cell.getTimestamp()}} returning Long.MAX_VALUE 
(see the sketch after this list). [~jrottinghuis]?

(TimestampGenerator.java)
- If we're going to have {{ColumnHelper}} use this, I suggest moving this to 
the storage.common package.
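
To make the unset-timestamp point concrete, here is a minimal sketch assuming HBase's Cell API (this is not the YARN-3901 patch code, and the method name is only illustrative):

{code}
import org.apache.hadoop.hbase.Cell;

public final class TimestampUtil {
  private TimestampUtil() {
  }

  // Cell#getTimestamp() returns a primitive long, so it can never be a null
  // Long; an "unset" timestamp surfaces as Long.MAX_VALUE instead.
  static long resolveTimestamp(Cell cell) {
    long ts = cell.getTimestamp();
    if (ts == Long.MAX_VALUE) {
      // Timestamp was never set by the writer; fall back to the current time.
      ts = System.currentTimeMillis();
    }
    return ts;
  }
}
{code}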

> Populate flow run data in the flow_run & flow activity tables
> -
>
> Key: YARN-3901
> URL: https://issues.apache.org/jira/browse/YARN-3901
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-3901-YARN-2928.1.patch, 
> YARN-3901-YARN-2928.2.patch, YARN-3901-YARN-2928.3.patch, 
> YARN-3901-YARN-2928.4.patch, YARN-3901-YARN-2928.5.patch, 
> YARN-3901-YARN-2928.6.patch
>
>
> As per the schema proposed in YARN-3815 in 
> https://issues.apache.org/jira/secure/attachment/12743391/hbase-schema-proposal-for-aggregation.pdf
> filing jira to track creation and population of data in the flow run table. 
> Some points that are being  considered:
> - Stores per flow run information aggregated across applications, flow version
> RM’s collector writes to on app creation and app completion
> - Per App collector writes to it for metric updates at a slower frequency 
> than the metric updates to application table
> primary key: cluster ! user ! flow ! flow run id
> - Only the latest version of flow-level aggregated metrics will be kept, even 
> if the entity and application level keep a timeseries.
> - The running_apps column will be incremented on app creation, and 
> decremented on app completion.
> - For min_start_time the RM writer will simply write a value with the tag for 
> the applicationId. A coprocessor will return the min value of all written 
> values. - 
> - Upon flush and compactions, the min value between all the cells of this 
> column will be written to the cell without any tag (empty tag) and all the 
> other cells will be discarded.
> - Ditto for the max_end_time, but then the max will be kept.
> - Tags are represented as #type:value. The type can be not set (0), or can 
> indicate running (1) or complete (2). In those cases (for metrics) only 
> complete app metrics are collapsed on compaction.
> - The m! values are aggregated (summed) upon read. Only when applications are 
> completed (indicated by tag type 2) can the values be collapsed.
> - The application ids that have completed and been aggregated into the flow 
> numbers are retained in a separate column for historical tracking: we don’t 
> want to re-aggregate for those upon replay
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3901) Populate flow run data in the flow_run & flow activity tables

2015-09-11 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741575#comment-14741575
 ] 

Vrushali C commented on YARN-3901:
--

Thanks Sangjin, I will make these updates. I am also looking at the patch with 
Joep and hope to have an updated patch shortly. 

bq. Since Cell.getTimestamp() returns a primitive long, it will never be a null 
Long object. I remember Joep Rottinghuis mentioning that an unset timestamp is 
equivalent to Cell.getTimestamp() returning Long.MAX_VALUE. 
Yes, I will update this. 

> Populate flow run data in the flow_run & flow activity tables
> -
>
> Key: YARN-3901
> URL: https://issues.apache.org/jira/browse/YARN-3901
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-3901-YARN-2928.1.patch, 
> YARN-3901-YARN-2928.2.patch, YARN-3901-YARN-2928.3.patch, 
> YARN-3901-YARN-2928.4.patch, YARN-3901-YARN-2928.5.patch, 
> YARN-3901-YARN-2928.6.patch
>
>
> As per the schema proposed in YARN-3815 in 
> https://issues.apache.org/jira/secure/attachment/12743391/hbase-schema-proposal-for-aggregation.pdf
> filing jira to track creation and population of data in the flow run table. 
> Some points that are being  considered:
> - Stores per flow run information aggregated across applications, flow version
> RM’s collector writes to on app creation and app completion
> - Per App collector writes to it for metric updates at a slower frequency 
> than the metric updates to application table
> primary key: cluster ! user ! flow ! flow run id
> - Only the latest version of flow-level aggregated metrics will be kept, even 
> if the entity and application level keep a timeseries.
> - The running_apps column will be incremented on app creation, and 
> decremented on app completion.
> - For min_start_time the RM writer will simply write a value with the tag for 
> the applicationId. A coprocessor will return the min value of all written 
> values. - 
> - Upon flush and compactions, the min value between all the cells of this 
> column will be written to the cell without any tag (empty tag) and all the 
> other cells will be discarded.
> - Ditto for the max_end_time, but then the max will be kept.
> - Tags are represented as #type:value. The type can be not set (0), or can 
> indicate running (1) or complete (2). In those cases (for metrics) only 
> complete app metrics are collapsed on compaction.
> - The m! values are aggregated (summed) upon read. Only when applications are 
> completed (indicated by tag type 2) can the values be collapsed.
> - The application ids that have completed and been aggregated into the flow 
> numbers are retained in a separate column for historical tracking: we don’t 
> want to re-aggregate for those upon replay
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3212) RMNode State Transition Update with DECOMMISSIONING state

2015-09-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741738#comment-14741738
 ] 

Wangda Tan commented on YARN-3212:
--

Hi [~djp],

Thanks for working on this JIRA, just took a look at it:

*1) ResourceTrackerService:*
Question:
1. Why shut down a "decommissioning" NM when it heartbeats? Should we allow it 
to continue heartbeating, since the RM needs to know about finished/killed 
containers?

*2) RMNodeImpl:*
Question:
2. Do we have a timeout for graceful decommission, which will update a node to 
"DECOMMISSIONED" after the timeout expires?
3. If I understand correctly, decommissioning is another running state, except:
- We cannot allocate any new containers to the node.

Comments:
- If the answer to question #2 is no, I suggest renaming 
RMNodeEventType.DECOMMISSION_WITH_TIMEOUT to GRACEFUL_DECOMMISSION, since it 
doesn't have a "real" timeout.
- Why is this needed?
{code}
  .addTransition(NodeState.DECOMMISSIONING, NodeState.DECOMMISSIONING,
  RMNodeEventType.DECOMMISSION_WITH_TIMEOUT,
  new DecommissioningNodeTransition(NodeState.DECOMMISSIONING))
{code}
Should we simply ignore the DECOMMISSION_WITH_TIMEOUT event?
- Are there specific considerations for transitioning UNHEALTHY to DECOMMISSIONED 
when DECOMMISSION_WITH_TIMEOUT is received? Would it be better to transition it 
to DECOMMISSIONING, since it may still have containers running on it?
- One suggestion on how to handle the node update to the scheduler: you could add 
a field "isDecommissioning" to NodeUpdateSchedulerEvent, and the scheduler can 
then do all updates except allocating new containers.
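
A minimal sketch of that last suggestion (illustrative only; the real NodeUpdateSchedulerEvent does not currently carry such a flag, and the names shown are hypothetical):

{code}
import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;

// Sketch: carry a "decommissioning" flag on the node-update event so the
// scheduler keeps processing container status updates for the node but
// skips new allocations on it.
public class NodeUpdateSchedulerEvent extends SchedulerEvent {
  private final RMNode rmNode;
  private final boolean decommissioning; // hypothetical field

  public NodeUpdateSchedulerEvent(RMNode rmNode, boolean decommissioning) {
    super(SchedulerEventType.NODE_UPDATE);
    this.rmNode = rmNode;
    this.decommissioning = decommissioning;
  }

  public RMNode getRMNode() {
    return rmNode;
  }

  public boolean isDecommissioning() {
    return decommissioning;
  }
}

// In the scheduler's NODE_UPDATE handling (illustrative):
//   nodeUpdate(event.getRMNode());      // release finished containers, etc.
//   if (!event.isDecommissioning()) {
//     allocateContainersToNode(node);   // skip new allocations otherwise
//   }
{code}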

> RMNode State Transition Update with DECOMMISSIONING state
> -
>
> Key: YARN-3212
> URL: https://issues.apache.org/jira/browse/YARN-3212
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: RMNodeImpl - new.png, YARN-3212-v1.patch, 
> YARN-3212-v2.patch, YARN-3212-v3.patch, YARN-3212-v4.1.patch, 
> YARN-3212-v4.patch, YARN-3212-v5.1.patch, YARN-3212-v5.patch
>
>
> As proposed in YARN-914, a new state of “DECOMMISSIONING” will be added and 
> can transition from “running” state triggered by a new event - 
> “decommissioning”. 
> This new state can be transit to state of “decommissioned” when 
> Resource_Update if no running apps on this NM or NM reconnect after restart. 
> Or it received DECOMMISSIONED event (after timeout from CLI).
> In addition, it can back to “running” if user decides to cancel previous 
> decommission by calling recommission on the same node. The reaction to other 
> events is similar to RUNNING state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741583#comment-14741583
 ] 

Hadoop QA commented on YARN-3717:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  25m 38s | Pre-patch trunk has 7 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 8 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  8s | Site still builds. |
| {color:red}-1{color} | checkstyle |   2m 40s | The applied patch generated  3 
new checkstyle issues (total was 16, now 18). |
| {color:green}+1{color} | whitespace |   0m 20s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   7m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |   0m 23s | Tests passed in 
hadoop-yarn-api. |
| {color:green}+1{color} | yarn tests |   7m  0s | Tests passed in 
hadoop-yarn-client. |
| {color:red}-1{color} | yarn tests |   1m 48s | Tests failed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |   3m 46s | Tests failed in 
hadoop-yarn-server-applicationhistoryservice. |
| {color:green}+1{color} | yarn tests |   0m 25s | Tests passed in 
hadoop-yarn-server-common. |
| {color:red}-1{color} | yarn tests |  58m  8s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 132m 34s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.yarn.client.api.impl.TestTimelineClient |
|   | hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
|
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebAppFairScheduler |
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerQueueACLs
 |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755427/YARN-3717.20150911-1.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / ea4bb27 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-common.html
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-applicationhistoryservice test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/testrun_hadoop-yarn-server-applicationhistoryservice.txt
 |
| hadoop-yarn-server-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/testrun_hadoop-yarn-server-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9094/console |


This message was automatically generated.

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
> Key: YARN-3717
> URL: https://issues.apache.org/jira/browse/YARN-3717
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: 3717_cluster_test_snapshots.zip, 

[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741584#comment-14741584
 ] 

Hudson commented on YARN-4115:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1112 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1112/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spams of Aug 28, 1:57:52.441 PMINFO
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741585#comment-14741585
 ] 

Hudson commented on YARN-4145:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1112 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1112/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
* hadoop-yarn-project/CHANGES.txt


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741648#comment-14741648
 ] 

Hudson commented on YARN-4145:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #374 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/374/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741647#comment-14741647
 ] 

Hudson commented on YARN-4115:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #374 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/374/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* hadoop-yarn-project/CHANGES.txt


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spams of Aug 28, 1:57:52.441 PMINFO
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3223) Resource update during NM graceful decommission

2015-09-11 Thread Brook Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brook Zhou updated YARN-3223:
-
Attachment: (was: YARN-3223-v0.1.patch)

> Resource update during NM graceful decommission
> ---
>
> Key: YARN-3223
> URL: https://issues.apache.org/jira/browse/YARN-3223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.7.1
>Reporter: Junping Du
>Assignee: Brook Zhou
>
> During NM graceful decommission, we should handle resource update properly, 
> include: make RMNode keep track of old resource for possible rollback, keep 
> available resource to 0 and used resource get updated when
> container finished.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3223) Resource update during NM graceful decommission

2015-09-11 Thread Brook Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brook Zhou updated YARN-3223:
-
Attachment: YARN-3223-v1.patch

> Resource update during NM graceful decommission
> ---
>
> Key: YARN-3223
> URL: https://issues.apache.org/jira/browse/YARN-3223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.7.1
>Reporter: Junping Du
>Assignee: Brook Zhou
> Attachments: YARN-3223-v1.patch
>
>
> During NM graceful decommission, we should handle resource update properly, 
> include: make RMNode keep track of old resource for possible rollback, keep 
> available resource to 0 and used resource get updated when
> container finished.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3223) Resource update during NM graceful decommission

2015-09-11 Thread Brook Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741678#comment-14741678
 ] 

Brook Zhou commented on YARN-3223:
--

Applied YARN-3212-v5.1.patch first.

With the YARN-3223-v1.patch changes, test-patch results passed.

|| Vote || Subsystem || Runtime || Comment ||
| 0 | pre-patch | 42m 50s | Pre-patch trunk compilation is healthy. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | tests included | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | javac | 11m 12s | There were no new javac warning messages. |
| +1 | javadoc | 28m 15s | There were no new javadoc warning messages. |
| +1 | release audit | 0m 59s | The applied patch does not increase the total number of release audit warnings. |
| +1 | checkstyle | 4m 35s | There were no new checkstyle issues. |
| +1 | whitespace | 0m 0s | The patch has no lines that end in whitespace. |
| +1 | install | 4m 9s | mvn install still works. |
| +1 | eclipse:eclipse | 1m 29s | The patch built with eclipse:eclipse. |
| +1 | findbugs | 7m 27s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| | | 100m 58s | |


> Resource update during NM graceful decommission
> ---
>
> Key: YARN-3223
> URL: https://issues.apache.org/jira/browse/YARN-3223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.7.1
>Reporter: Junping Du
>Assignee: Brook Zhou
> Attachments: YARN-3223-v1.patch
>
>
> During NM graceful decommission, we should handle resource update properly, 
> include: make RMNode keep track of old resource for possible rollback, keep 
> available resource to 0 and used resource get updated when
> container finished.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3223) Resource update during NM graceful decommission

2015-09-11 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741710#comment-14741710
 ] 

Junping Du commented on YARN-3223:
--

Thanks [~brookz] for updating the patch! I will review it after figuring out 
YARN-3212, which your patch depends on. I think we are getting close on that 
patch.

> Resource update during NM graceful decommission
> ---
>
> Key: YARN-3223
> URL: https://issues.apache.org/jira/browse/YARN-3223
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: 2.7.1
>Reporter: Junping Du
>Assignee: Brook Zhou
> Attachments: YARN-3223-v1.patch
>
>
> During NM graceful decommission, we should handle resource update properly, 
> include: make RMNode keep track of old resource for possible rollback, keep 
> available resource to 0 and used resource get updated when
> container finished.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740617#comment-14740617
 ] 

Hudson commented on YARN-4136:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #371 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/371/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container 
> -but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740832#comment-14740832
 ] 

Jian He commented on YARN-4126:
---

You can try using the changes below?

{code}
-  UserGroupInformation.createRemoteUser("other");
+  UserGroupInformation.createRemoteUser("other", AuthMethod.KERBEROS);
{code}

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740831#comment-14740831
 ] 

Junping Du commented on YARN-313:
-

+1. Latest patch LGTM. Given I am also the co-author of the patch, we may need 
another +1 from other committers.

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740832#comment-14740832
 ] 

Jian He edited comment on YARN-4126 at 9/11/15 1:48 PM:


You can try using the changes below?

{code}
-  UserGroupInformation.createRemoteUser("owner");
+  UserGroupInformation.createRemoteUser("owner", AuthMethod.KERBEROS);
   private static final UserGroupInformation other =
-  UserGroupInformation.createRemoteUser("other");
+  UserGroupInformation.createRemoteUser("other", AuthMethod.KERBEROS);
   private static final UserGroupInformation tester =
-  UserGroupInformation.createRemoteUser("tester");
+  UserGroupInformation.createRemoteUser("tester", AuthMethod.KERBEROS);
{code}


was (Author: jianhe):
You can try using below changes ?

{code}
-  UserGroupInformation.createRemoteUser("other");
+  UserGroupInformation.createRemoteUser("other", AuthMethod.KERBEROS);
{code}

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4148) When killing app, RM releases app's resource before they are released by NM

2015-09-11 Thread Jun Gong (JIRA)
Jun Gong created YARN-4148:
--

 Summary: When killing app, RM releases app's resource before they 
are released by NM
 Key: YARN-4148
 URL: https://issues.apache.org/jira/browse/YARN-4148
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Jun Gong
Assignee: Jun Gong


When killing an app, the RM scheduler releases the app's resources as soon as 
possible, and it might then allocate those resources to new requests. But the NM 
has not released them yet at that point.

The problem was found when we supported GPU as a resource (YARN-4122). Test 
environment: a NM had 6 GPUs, app A used all 6 GPUs, and app B was requesting 3 
GPUs. App A was killed, then the RM released A's 6 GPUs and allocated 3 of them 
to B. But when B tried to start a container on the NM, the NM found it did not 
have 3 GPUs to allocate because it had not yet released A's GPUs.

I think the problem also exists for CPU/memory. It might cause OOM when memory 
is overcommitted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3273) Improve web UI to facilitate scheduling analysis and debugging

2015-09-11 Thread Anubhav Dhoot (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Dhoot updated YARN-3273:

Assignee: Rohith Sharma K S  (was: Anubhav Dhoot)

> Improve web UI to facilitate scheduling analysis and debugging
> --
>
> Key: YARN-3273
> URL: https://issues.apache.org/jira/browse/YARN-3273
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
>Assignee: Rohith Sharma K S
> Fix For: 2.7.0
>
> Attachments: 0001-YARN-3273-v1.patch, 0001-YARN-3273-v2.patch, 
> 0002-YARN-3273.patch, 0003-YARN-3273.patch, 0003-YARN-3273.patch, 
> 0004-YARN-3273.patch, YARN-3273-am-resource-used-AND-User-limit-v2.PNG, 
> YARN-3273-am-resource-used-AND-User-limit.PNG, 
> YARN-3273-application-headroom-v2.PNG, YARN-3273-application-headroom.PNG
>
>
> Job may be stuck for reasons such as:
> - hitting queue capacity 
> - hitting user-limit, 
> - hitting AM-resource-percentage 
> The  first queueCapacity is already shown on the UI.
> We may surface things like:
> - what is user's current usage and user-limit; 
> - what is the AM resource usage and limit;
> - what is the application's current HeadRoom;
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4142) add a way for an attempt to report an attempt failure

2015-09-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741910#comment-14741910
 ] 

Sunil G commented on YARN-4142:
---

Thanks for clarifying [~jlowe] [~steve_l]

If we add diagnostics information to {{AllocateRequest}}, the RM can pull any 
diagnostics the AM wants to report up to the RMAppAttempt level. So it is also 
possible for the AM to send different types of diagnostics if needed (possibly 
over several heartbeats before it sends the unregister). With this, the RM must 
be able to store them and associate them with the attempt, in order of 
importance if possible.
If the AM ensures that all such diagnostics are sent before it calls 
{{finishApplicationMaster}}, I feel this may be a good option.
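
Purely for illustration, a rough sketch of how an AM might report such diagnostics if {{AllocateRequest}} grew a diagnostics field; the setter shown is hypothetical and does not exist in the current API:

{code}
import java.util.List;

import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class AmHeartbeat {
  // The ask/release lists are assumed to be prepared elsewhere by the AM.
  AllocateResponse heartbeat(ApplicationMasterProtocol rm, int responseId,
      float progress, List<ResourceRequest> ask, List<ContainerId> release)
      throws Exception {
    AllocateRequest request =
        AllocateRequest.newInstance(responseId, progress, ask, release, null);
    // Proposed (hypothetical) addition: attach AM-side diagnostics to a
    // heartbeat so the RM can store them against the current RMAppAttempt.
    // request.setAttemptDiagnostics("shutting down: lost HDFS connectivity");
    return rm.allocate(request);
  }
}
{code}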

> add a way for an attempt to report an attempt failure
> -
>
> Key: YARN-4142
> URL: https://issues.apache.org/jira/browse/YARN-4142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Currently AMs can report a failure with exit code and diagnostics text —but 
> only when exiting to a failed state. If the AM terminates for any other 
> reason there's no information held in the RM, just the logs somewhere —and we 
> know they don't always last.
> When an application explicitly terminates an attempt, it would be nice if it 
> could  optionally report something to the RM before it exited. The most 
> recent set of these could then be included in Application Reports, so 
> allowing client apps to count attempt failures and get exit details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740526#comment-14740526
 ] 

Hudson commented on YARN-4136:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1109 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1109/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* hadoop-yarn-project/CHANGES.txt


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container 
> -but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740652#comment-14740652
 ] 

Hudson commented on YARN-4136:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2296 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2296/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container 
> -but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4111) Killed application diagnostics message should be set rather having static mesage

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740587#comment-14740587
 ] 

Hadoop QA commented on YARN-4111:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 47s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   8m  2s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 51s | The applied patch generated  2 
new checkstyle issues (total was 299, now 299). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |  54m 13s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  93m 58s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755369/YARN-4111_4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 486d5cb |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9088/artifact/patchprocess/diffcheckstylehadoop-yarn-server-resourcemanager.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9088/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9088/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9088/console |


This message was automatically generated.

> Killed application diagnostics message should be set rather having static 
> mesage
> 
>
> Key: YARN-4111
> URL: https://issues.apache.org/jira/browse/YARN-4111
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: nijel
> Attachments: YARN-4111_1.patch, YARN-4111_2.patch, YARN-4111_3.patch, 
> YARN-4111_4.patch
>
>
> Application can be killed either by *user via ClientRMService* OR *from 
> scheduler*. Currently diagnostic message is set statically i.e {{Application 
> killed by user.}} neverthless of application killed by scheduler. This brings 
> the confusion to the user after application is Killed that he did not kill 
> application at all but diagnostic message depicts that 'application is killed 
> by user'.
> It would be useful if the diagnostic message are different for each cause of 
> KILL.
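
As a rough illustration of per-cause diagnostics (the names below are assumptions 
made for the sketch, not code from the attached patches):

{code}
public class KillDiagnosticsSketch {
  enum KillSource { USER, SCHEDULER }

  // Derive the diagnostics from the cause of the kill instead of always
  // reporting the static "Application killed by user." text.
  static String killDiagnostics(KillSource source) {
    return source == KillSource.SCHEDULER
        ? "Application killed by the scheduler."
        : "Application killed by user.";
  }
}
{code}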



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741845#comment-14741845
 ] 

Hudson commented on YARN-4145:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #360 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/360/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test
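
A minimal illustration of the idea, assuming JUnit 4 conventions (not the actual 
patch): the test runner only executes concrete classes, so an abstract base that 
holds shared setup is skipped when the whole package is run, while concrete 
suites that extend it keep running as before.

{code}
import org.junit.Before;

public abstract class RMHATestBase {
  @Before
  public void setup() throws Exception {
    // shared ResourceManager HA setup inherited by the concrete test classes
  }
}
{code}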



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741844#comment-14741844
 ] 

Hudson commented on YARN-4115:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #360 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/360/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* hadoop-yarn-project/CHANGES.txt


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spam like: Aug 28, 1:57:52.441 PM INFO 
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041
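
A minimal sketch of the kind of change this asks for (commons-logging style, as 
used across YARN; not necessarily the committed patch):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ProxyLoggingSketch {
  private static final Log LOG = LogFactory.getLog(ProxyLoggingSketch.class);

  void openProxy(String containerManagerBindAddr) {
    // Demote the per-proxy message from INFO to a guarded DEBUG so routine
    // proxy creation no longer floods client logs.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Opening proxy : " + containerManagerBindAddr);
    }
  }
}
{code}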



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4151) findbugs errors in hadoop-yarn-server-common module

2015-09-11 Thread MENG DING (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MENG DING updated YARN-4151:

Attachment: findbugs.xml

Attaching the findbugs report.

> findbugs errors in hadoop-yarn-server-common module
> ---
>
> Key: YARN-4151
> URL: https://issues.apache.org/jira/browse/YARN-4151
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: MENG DING
>Assignee: MENG DING
> Attachments: findbugs.xml
>
>
> 7 findbugs warnings are found in the hadoop-yarn-server-common module which 
> need to be fixed:
> {code}
> [INFO]
> [INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
> hadoop-yarn-server-common ---
> [INFO] Fork Value is true
>  [java] Warnings generated: 7
> [INFO] Done FindBugs Analysis
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741789#comment-14741789
 ] 

Hudson commented on YARN-4145:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2299 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2299/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
* hadoop-yarn-project/CHANGES.txt


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741788#comment-14741788
 ] 

Hudson commented on YARN-4115:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2299 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2299/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* hadoop-yarn-project/CHANGES.txt


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spam like: Aug 28, 1:57:52.441 PM INFO 
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3942) Timeline store to read events from HDFS

2015-09-11 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741848#comment-14741848
 ] 

Greg Senia commented on YARN-3942:
--

I have placed this patch and the Tez patch into our test environments, as we 
have actively watched ATS crash many times over the past few weeks; we run about 
50k Tez apps/jobs a day.

I will provide some feedback in the next few days.

> Timeline store to read events from HDFS
> ---
>
> Key: YARN-3942
> URL: https://issues.apache.org/jira/browse/YARN-3942
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-3942.001.patch
>
>
> This adds a new timeline store plugin that is intended as a stop-gap measure 
> to mitigate some of the issues we've seen with ATS v1 while waiting for ATS 
> v2.  The intent of this plugin is to provide a workable solution for running 
> the Tez UI against the timeline server on a large-scale clusters running many 
> thousands of jobs per day.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-313:
-
Attachment: (was: YARN-313-v11.patch)

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v11.patch, YARN-313-v2.patch, YARN-313-v3.patch, 
> YARN-313-v4.patch, YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, 
> YARN-313-v8.patch, YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-313:
-
Attachment: YARN-313-v11.patch

I forgot to add the RefreshNodesResources* files.

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v11.patch, YARN-313-v2.patch, YARN-313-v3.patch, 
> YARN-313-v4.patch, YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, 
> YARN-313-v8.patch, YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4151) findbugs errors in hadoop-yarn-server-common module

2015-09-11 Thread MENG DING (JIRA)
MENG DING created YARN-4151:
---

 Summary: findbugs errors in hadoop-yarn-server-common module
 Key: YARN-4151
 URL: https://issues.apache.org/jira/browse/YARN-4151
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: MENG DING
Assignee: MENG DING


7 findbugs warnings are found in the hadoop-yarn-server-common module which need 
to be fixed:

{code}
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-yarn-server-common ---
[INFO] Fork Value is true
 [java] Warnings generated: 7
[INFO] Done FindBugs Analysis
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3717:

Attachment: YARN-3717.20150912-1.patch

Thanks [~wangda] for kicking off Jenkins. Apart from 
{{TestRMWebAppFairScheduler.testFairSchedulerWebAppPageInInconsistentState}}, the 
other test failures were not related to the modifications in the patch (it looked 
like an ATS server was already running, due to which the client test cases were 
failing). Attaching a patch to correct this.

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
> Key: YARN-3717
> URL: https://issues.apache.org/jira/browse/YARN-3717
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: 3717_cluster_test_snapshots.zip, RMLogsForHungJob.log, 
> YARN-3717.20150822-1.patch, YARN-3717.20150824-1.patch, 
> YARN-3717.20150825-1.patch, YARN-3717.20150826-1.patch, 
> YARN-3717.20150911-1.patch, YARN-3717.20150912-1.patch
>
>
> 1> Add the default-node-Label expression for each queue in scheduler page.
> 2> In Application/Appattempt page  show the app configured node label 
> expression for AM and Job



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4145) Make RMHATestBase abstract so its not run when running all tests under that namespace

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741799#comment-14741799
 ] 

Hudson commented on YARN-4145:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2322 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2322/])
YARN-4145. Make RMHATestBase abstract so its not run when running all tests 
under that namespace (adhoot via rkanter) (rkanter: rev 
ea4bb2749f966a5eaf712d1dbb2c845df0f5ca67)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
* hadoop-yarn-project/CHANGES.txt


> Make RMHATestBase abstract so its not run when running all tests under that 
> namespace
> -
>
> Key: YARN-4145
> URL: https://issues.apache.org/jira/browse/YARN-4145
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4145.001.patch
>
>
> Make it abstract to avoid running it as a test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4115) Reduce loglevel of ContainerManagementProtocolProxy to Debug

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741798#comment-14741798
 ] 

Hudson commented on YARN-4115:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2322 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2322/])
YARN-4115. Reduce loglevel of ContainerManagementProtocolProxy to Debug (adhoot 
via rkanter) (rkanter: rev b84fb41bb6ca2d69153cf5bd61f88492538ee713)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/ContainerManagementProtocolProxy.java
* hadoop-yarn-project/CHANGES.txt


> Reduce loglevel of ContainerManagementProtocolProxy to Debug
> 
>
> Key: YARN-4115
> URL: https://issues.apache.org/jira/browse/YARN-4115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-4115.001.patch
>
>
> We see log spam like: Aug 28, 1:57:52.441 PM INFO 
> org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy 
> Opening proxy : :8041



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741888#comment-14741888
 ] 

Hadoop QA commented on YARN-313:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 5 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  7s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m  9s | The applied patch generated  1 
new checkstyle issues (total was 230, now 230). |
| {color:green}+1{color} | whitespace |   0m  8s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   5m 30s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |   0m 23s | Tests passed in 
hadoop-yarn-api. |
| {color:green}+1{color} | yarn tests |   6m 58s | Tests passed in 
hadoop-yarn-client. |
| {color:green}+1{color} | yarn tests |   1m 58s | Tests passed in 
hadoop-yarn-common. |
| {color:red}-1{color} | yarn tests |  47m 11s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | | 106m 10s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
|   | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRMRPCResponseId |
|   | hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart 
|
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755528/YARN-313-v11.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9538af0 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-YARN-Build/9098/artifact/patchprocess/diffcheckstylehadoop-yarn-api.txt
 |
| hadoop-yarn-api test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9098/artifact/patchprocess/testrun_hadoop-yarn-api.txt
 |
| hadoop-yarn-client test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9098/artifact/patchprocess/testrun_hadoop-yarn-client.txt
 |
| hadoop-yarn-common test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9098/artifact/patchprocess/testrun_hadoop-yarn-common.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/9098/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/9098/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/9098/console |


This message was automatically generated.

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v11.patch, YARN-313-v2.patch, YARN-313-v3.patch, 
> YARN-313-v4.patch, YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, 
> YARN-313-v8.patch, YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740717#comment-14740717
 ] 

Jian He edited comment on YARN-1651 at 9/11/15 12:43 PM:
-

A few more comments:
- schedulerNode#increaseContainer is not invoked when increasing a regular 
container? Add a test?
- remove ContainersAndNMTokensAllocation in SchedulerApplicationAttempt
- FiCaSchedulerApp#unreserve -> unreserveIncreasedContainer to avoid a name 
conflict.
{code}
  public boolean unreserve(Priority priority,
      FiCaSchedulerNode node, RMContainer rmContainer) {
{code}
- revert RMWebServices, AssignmentInformation changes
- MockAM#allocateChangeContainerRequests -> resizeContainers
- very long line in FairScheduler and FifoScheduler
{code}
  List blacklistAdditions, List blacklistRemovals,
  List increaseRequests,
  List decreaseRequests) {
{code}


was (Author: jianhe):
few more comments:
- schedulerNode#increaseContainer is not invoked when increasing regular 
container ? add a test?
- remove ContainersAndNMTokensAllocation in SchedulerApplicationAttempt
- FiCaSchedulerApp#unreserve -> unreserveIncreasedContainer to avoid name 
confliction.
{code}
  public boolean unreserve(Priority priority,
  FiCaSchedulerNode node, RMContainer rmContainer) {
  {code}
 - revert RMWebServices, AssignmentInformation changes
 - MockAM#allocateChangeContainerRequests -> resizeContainers

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1651) CapacityScheduler side changes to support increase/decrease container resource.

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740717#comment-14740717
 ] 

Jian He commented on YARN-1651:
---

A few more comments:
- schedulerNode#increaseContainer is not invoked when increasing a regular 
container? Add a test?
- remove ContainersAndNMTokensAllocation in SchedulerApplicationAttempt
- FiCaSchedulerApp#unreserve -> unreserveIncreasedContainer to avoid a name 
conflict.
{code}
  public boolean unreserve(Priority priority,
      FiCaSchedulerNode node, RMContainer rmContainer) {
{code}
- revert RMWebServices, AssignmentInformation changes
- MockAM#allocateChangeContainerRequests -> resizeContainers

> CapacityScheduler side changes to support increase/decrease container 
> resource.
> ---
>
> Key: YARN-1651
> URL: https://issues.apache.org/jira/browse/YARN-1651
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-1651-1.YARN-1197.patch, 
> YARN-1651-2.YARN-1197.patch, YARN-1651-3.YARN-1197.patch, 
> YARN-1651-4.YARN-1197.patch, YARN-1651-5.YARN-1197.patch, 
> YARN-1651-6.YARN-1197.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4136) LinuxContainerExecutor loses info when forwarding ResourceHandlerException

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740724#comment-14740724
 ] 

Hudson commented on YARN-4136:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #357 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/357/])
YARN-4136. LinuxContainerExecutor loses info when forwarding 
ResourceHandlerException. Contributed by Bibin A Chundatt. (vvasudev: rev 
486d5cb803efec7b4db445ee65a3df83392940a3)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> LinuxContainerExecutor loses info when forwarding ResourceHandlerException
> --
>
> Key: YARN-4136
> URL: https://issues.apache.org/jira/browse/YARN-4136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4136.patch
>
>
> The Linux container executor {{launchContainer}} method throws 
> {{ResourceHandlerException}} when there are problems setting up the container, 
> but these aren't propagated in the raised IOE. They should be nested with 
> the string value included in the message text.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740889#comment-14740889
 ] 

Jian He commented on YARN-313:
--

Looks good overall, a few comments:
- we may rename refreshResources to RefreshNodesResources to be more clear?
- About the config format, how about a more concise way such as below? This can 
be tracked separately if it makes sense.
{code}

1 
1024 

{code}

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2015-09-11 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740903#comment-14740903
 ] 

Bibin A Chundatt commented on YARN-4126:


Hi [~jianhe]

Thanks for the suggestion

Tested with your suggestion too, but it is still failing. 
{{UserGroupConfiguration}} gets loaded with the non-secure-mode configuration; I 
tried setting it in BeforeClass too, but then all the other non-secure test cases 
try to load the keytab file and fail.
Can I refactor the test class by splitting the token-related test cases into a 
separate test class? That should be fine, right?

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 
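
A minimal sketch of the intended guard, assuming the usual UserGroupInformation 
check (illustrative only, not the attached patches):

{code}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class DelegationTokenGuardSketch {
  static void checkDelegationTokenAllowed() throws IOException {
    // Refuse to hand out RM delegation tokens when Kerberos security is off.
    if (!UserGroupInformation.isSecurityEnabled()) {
      throw new IOException(
          "Delegation tokens can only be issued when security is enabled");
    }
  }
}
{code}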



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740948#comment-14740948
 ] 

Jian He commented on YARN-313:
--

bq. I will file a separated JIRA to address this optimization work. Is this OK?
sounds good

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740894#comment-14740894
 ] 

Junping Du commented on YARN-313:
-

Thanks [~jianhe] for the review! 
bq. we may rename refreshResources to RefreshNodesResources to be more clear?
+1. That sounds better.

bq. About the config format, how about a more concise way such as below, this 
can be tracked separately if it makes sense. 
Sounds good. Given this JIRA has been pending for a very long time, I will file a 
separate JIRA to address this optimization work. Is this OK?

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-313) Add Admin API for supporting node resource configuration in command line

2015-09-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740889#comment-14740889
 ] 

Jian He edited comment on YARN-313 at 9/11/15 2:20 PM:
---

Looks good overall, a few comments:
- we may rename refreshResources to RefreshNodesResources to be more clear?
- About the config format, how about a more concise way such as below? This can 
be tracked separately if it makes sense.
{code}

1 
1024 

{code}


was (Author: jianhe):
looks good overall, few comments,
we may rename refreshResources to RefreshNodesResources to be more clear ? this 
About the config format, how about a more concise way such as below, this can 
be tracked separately if it makes sense.
{code}

1 
1024 

{code}

> Add Admin API for supporting node resource configuration in command line
> 
>
> Key: YARN-313
> URL: https://issues.apache.org/jira/browse/YARN-313
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Junping Du
>Assignee: Inigo Goiri
>Priority: Critical
> Attachments: YARN-313-sample.patch, YARN-313-v1.patch, 
> YARN-313-v10.patch, YARN-313-v2.patch, YARN-313-v3.patch, YARN-313-v4.patch, 
> YARN-313-v5.patch, YARN-313-v6.patch, YARN-313-v7.patch, YARN-313-v8.patch, 
> YARN-313-v9.patch
>
>
> We should provide some admin interface, e.g. "yarn rmadmin -refreshResources" 
> to support changes of node's resource specified in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4142) add a way for an attempt to report an attempt failure

2015-09-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740926#comment-14740926
 ] 

Jason Lowe commented on YARN-4142:
--

The idea here is to allow applications to update their diagnostics without 
failing the entire application.  Currently the only way the app attempt can 
update its diagnostics is when it unregisters, and that necessarily means the 
app is completely done with no further attempts.  There either needs to be a 
way to update diagnostics via the allocate heartbeat or the ability for 
application attempts to unregister without terminating the overall application.
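
A purely hypothetical sketch of the first option, an AM pushing diagnostics on 
the allocate heartbeat; the setDiagnosticsUpdate call does not exist in the YARN 
API today and only illustrates the proposed shape:

{code}
import java.util.Collections;

import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class AttemptDiagnosticsSketch {
  static AllocateRequest buildHeartbeat(int responseId, float progress) {
    // Build an otherwise ordinary allocate heartbeat with no asks or releases.
    AllocateRequest request = AllocateRequest.newInstance(
        responseId, progress,
        Collections.<ResourceRequest>emptyList(),
        Collections.<ContainerId>emptyList(),
        ResourceBlacklistRequest.newInstance(
            Collections.<String>emptyList(), Collections.<String>emptyList()));
    // Hypothetical API, not present today:
    // request.setDiagnosticsUpdate("lost worker pool; restarting this attempt");
    return request;
  }
}
{code}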

> add a way for an attempt to report an attempt failure
> -
>
> Key: YARN-4142
> URL: https://issues.apache.org/jira/browse/YARN-4142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Currently AMs can report a failure with exit code and diagnostics text —but 
> only when exiting to a failed state. If the AM terminates for any other 
> reason there's no information held in the RM, just the logs somewhere —and we 
> know they don't always last.
> When an application explicitly terminates an attempt, it would be nice if it 
> could  optionally report something to the RM before it exited. The most 
> recent set of these could then be included in Application Reports, so 
> allowing client apps to count attempt failures and get exit details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2015-09-11 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740970#comment-14740970
 ] 

Bibin A Chundatt commented on YARN-4126:


I have tried separating them out, and I find this is the best way to do it.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4126) RM should not issue delegation tokens in unsecure mode

2015-09-11 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4126:
---
Attachment: 0006-YARN-4126.patch

Hi [~jianhe]

Attaching the patch after refactoring the test cases. Please review.
All test cases are passing fine.

> RM should not issue delegation tokens in unsecure mode
> --
>
> Key: YARN-4126
> URL: https://issues.apache.org/jira/browse/YARN-4126
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4126.patch, 0002-YARN-4126.patch, 
> 0003-YARN-4126.patch, 0004-YARN-4126.patch, 0005-YARN-4126.patch, 
> 0006-YARN-4126.patch
>
>
> ClientRMService#getDelegationToken is currently  returning a delegation token 
> in insecure mode. We should not return the token if it's in insecure mode. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3717) Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API

2015-09-11 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3717:

Attachment: YARN-3717.20150911-1.patch

Hi [~wangda] & [~gtCarrera9],
Thanks for the review. Uploading the patch with fixes for the comments and a test 
case for the same. For ATS v2, I am planning to raise a sub-JIRA (or, if possible, 
handle it in one of the existing JIRAs) under YARN-2928.

> Expose app/am/queue's node-label-expression to RM web UI / CLI / REST-API
> -
>
> Key: YARN-3717
> URL: https://issues.apache.org/jira/browse/YARN-3717
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: 3717_cluster_test_snapshots.zip, RMLogsForHungJob.log, 
> YARN-3717.20150822-1.patch, YARN-3717.20150824-1.patch, 
> YARN-3717.20150825-1.patch, YARN-3717.20150826-1.patch, 
> YARN-3717.20150911-1.patch
>
>
> 1> Add the default-node-Label expression for each queue in scheduler page.
> 2> In Application/Appattempt page  show the app configured node label 
> expression for AM and Job



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)