[jira] [Updated] (YARN-10216) Utility to dynamically reload Configuration on the disk

2020-06-25 Thread Cyrus Jackson (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cyrus Jackson updated YARN-10216:
-
Attachment: YARN-10216.001.patch

> Utility to dynamically reload Configuration on the disk
> ---
>
> Key: YARN-10216
> URL: https://issues.apache.org/jira/browse/YARN-10216
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Cyrus Jackson
>Assignee: Cyrus Jackson
>Priority: Major
> Attachments: YARN-10216.001.patch, image-2020-04-06-09-50-51-948.png
>
>
> There should be a way to dynamically reload the configuration properties from 
> the disk. The purpose of this feature is to let individual classes that are 
> interested in observing these configuration changes be notified when the 
> conf is reloaded from the disk. This is similar to how HBase does it.
> *Class Diagram*
>   !image-2020-04-06-09-50-51-948.png!
>  
> *APPROACH DETAILS* 
> The approach is an adaptation of the HBase Online Configuration feature. In 
> this case, the configuration file is monitored for any changes on the disk. 
> If the file has changed, the properties of the Configuration are reloaded and 
> all the observers are notified. 
> The classes that implement the observers update the necessary values if 
> required.
>  
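A minimal sketch of the observer pattern described above (assuming an
HBase-style design; the class and method names here are illustrative, not the
contents of the attached patch):
{code:java}
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import org.apache.hadoop.conf.Configuration;

/** Observer: implementors are notified when the conf is reloaded. */
interface ConfigurationObserver {
  void onConfigurationChange(Configuration newConf);
}

/** Utility that reloads the configuration from disk and notifies observers. */
class ReloadableConfiguration {
  private final Set<ConfigurationObserver> observers =
      new CopyOnWriteArraySet<>();

  void register(ConfigurationObserver observer) {
    observers.add(observer);
  }

  /** Called by a file-watcher thread once the file on disk has changed. */
  void reload() {
    Configuration fresh = new Configuration(false);
    fresh.addResource("yarn-site.xml"); // re-read the file from disk
    for (ConfigurationObserver observer : observers) {
      observer.onConfigurationChange(fresh);
    }
  }
}
{code}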






[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145981#comment-17145981
 ] 

Hadoop QA commented on YARN-9809:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
54s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 30s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 334 unchanged - 
0 fixed = 335 total (was 334) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 1231 unchanged - 3 fixed = 1231 total (was 1234) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
40s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 11 new + 89 unchanged - 11 fixed = 100 total (was 100) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
54s{color} | {color:green} hadoop-yarn-server-common in the patch 

[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145904#comment-17145904
 ] 

Hadoop QA commented on YARN-9809:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
25m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
21s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 25s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 334 unchanged - 
0 fixed = 335 total (was 334) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 1232 unchanged - 3 fixed = 1232 total (was 1235) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
54s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
27s{color} | 

[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145893#comment-17145893
 ] 

Eric Badger commented on YARN-9809:
---

Good catch, [~Jim_Brennan]. {{updateMetricsForRejoinedNode()}} is only called 
in one other place and I don't want to add the node and then remove it again. 
So I removed the increment from {{updateMetricsForRejoinedNode()}} and 
explicitly added it just before the other place where 
{{updateMetricsForRejoinedNode()}} is called.
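In other words, the change is roughly the following (a sketch under the
assumption that the helper and call sites match the description above; the
enum and the decrement step are illustrative):
{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.ClusterMetrics;

class RejoinedNodeMetricsSketch {
  enum PrevState { RUNNING, UNHEALTHY, LOST }

  // The one call site that still needs the active-node bump now
  // increments explicitly, just before invoking the helper.
  void onNodeRejoined(PrevState previousState) {
    ClusterMetrics.getMetrics().incrNumActiveNodes(); // moved out of helper
    updateMetricsForRejoinedNode(previousState);
  }

  // The increment was removed from here, so the other call site no
  // longer adds the node only to remove it again.
  void updateMetricsForRejoinedNode(PrevState previousState) {
    // decrement the counter for the previous state (sketch)
  }
}
{code}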

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch, YARN-9809.005.patch, 
> YARN-9809.006.patch, YARN-9809.007.patch
>
>
> Currently, if the NM is unhealthy when it registers with the RM, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Updated] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-9809:
--
Attachment: YARN-9809.007.patch

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch, YARN-9809.005.patch, 
> YARN-9809.006.patch, YARN-9809.007.patch
>
>
> Currently, if the NM is unhealthy when it registers with the RM, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Commented] (YARN-10251) Show extended resources on legacy RM UI.

2020-06-25 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145875#comment-17145875
 ] 

Jim Brennan commented on YARN-10251:


Thanks [~epayne] for the updated patch and the explanation.

I am +1 (non-binding) on patch 006 (assuming nothing new comes up from 
precommit build).

 

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Screen 
> Shot 2020-06-25 at 3.40.06 PM.png, Updated NodesPage UI With GPU columns.png, 
> Updated RM UI With All Resources Shown.png.png, YARN-10251.003.patch, 
> YARN-10251.004.patch, YARN-10251.005.patch, YARN-10251.006.patch, 
> YARN-10251.branch-2.10.001.patch, YARN-10251.branch-2.10.002.patch, 
> YARN-10251.branch-2.10.003.patch, YARN-10251.branch-2.10.005.patch, 
> YARN-10251.branch-3.2.004.patch, YARN-10251.branch-3.2.005.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145869#comment-17145869
 ] 

Jim Brennan commented on YARN-9809:
---

Thanks for the updates [~ebadger]! I have one comment on the new patch:

RMNodeImpl
* I think there's a bug from moving the call to 
{{ClusterMetrics.getMetrics().incrNumActiveNodes()}}. If previousRMNode != 
null (in the first check), we call {{rmNode.updateMetricsForRejoinedNode()}}, 
which decrements the counter for the previous state and increments num active 
nodes. With your change, we now increment active nodes again when we call 
reportNodeRunning.

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch, YARN-9809.005.patch, 
> YARN-9809.006.patch
>
>
> Currently, if the NM is unhealthy when it registers with the RM, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Commented] (YARN-10251) Show extended resources on legacy RM UI.

2020-06-25 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145864#comment-17145864
 ] 

Eric Payne commented on YARN-10251:
---

Thanks very much for the review, [~Jim_Brennan]. I attached patch 006 for 
trunk. I'll work on getting the backport patches up later.
{quote}AppInfo
 - (nit) move the initialization of usageReport before the if statement so you 
can use it throughout that condition. Makes the diff a little bigger, but I 
think it’s worth it in this case.{quote}
For the sake of sparing the reviewers, I don't generally like to make changes 
that refactor the code if they are not directly related to the bug fix/feature. 
But since you suggested it and are on board, I did so in this case.
{quote}NodesPage
 - The new entries for GPUs Used and GPUs Avail are still using ".vcores” for 
the set selector. Is this correct? Shouldn't we use ".gpus"?
{code:java}
.th(".vcores", "GPUs Used")
.th(".vcores", "GPUs Avail");
{code}
{quote}
OOPS! Good catch.
{quote}RmAppsBlock
 - I'm not sure I understand this logic:{quote}
In the AppsBlock on the RM UI, the Allocated and Reserved columns contain a 
number if the app is not completed and "N/A" if the app is completed.

!Screen Shot 2020-06-25 at 3.40.06 PM.png!

 

 
When {{AppInfo}} retrieves the {{ApplicationResourceUsageReport}}, this 
behavior is handled automatically because the memory and vcores fields are -1 
if the app is completed and hold a number otherwise. However, the extended 
resources can be 0 either when no resources of that type are used or when the 
app has completed; they are never -1. I felt that it is less risky to put the 
additional check for completed apps in RMAppsBlock rather than try to change 
the behavior in the retrieval of {{ApplicationResourceUsageReport}}.
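As a sketch of that check (hedged; {{isAppInCompletedState}} and the GPU value
come from the patch discussion, while the wrapper method is illustrative):
{code:java}
// Memory/vcores report -1 once the app completes, so those columns show
// "N/A" automatically. Extended resources such as GPUs report 0 both when
// unused and when the app has completed, never -1; hence the explicit
// completed-state check in RMAppsBlock.
static String gpuCell(boolean isAppInCompletedState, long allocatedGpus) {
  return (isAppInCompletedState && allocatedGpus <= 0)
      ? "N/A" : String.valueOf(allocatedGpus);
}
{code}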

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Screen 
> Shot 2020-06-25 at 3.40.06 PM.png, Updated NodesPage UI With GPU columns.png, 
> Updated RM UI With All Resources Shown.png.png, YARN-10251.003.patch, 
> YARN-10251.004.patch, YARN-10251.005.patch, YARN-10251.006.patch, 
> YARN-10251.branch-2.10.001.patch, YARN-10251.branch-2.10.002.patch, 
> YARN-10251.branch-2.10.003.patch, YARN-10251.branch-2.10.005.patch, 
> YARN-10251.branch-3.2.004.patch, YARN-10251.branch-3.2.005.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Updated] (YARN-10251) Show extended resources on legacy RM UI.

2020-06-25 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-10251:
--
Attachment: Screen Shot 2020-06-25 at 3.40.06 PM.png

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Screen 
> Shot 2020-06-25 at 3.40.06 PM.png, Updated NodesPage UI With GPU columns.png, 
> Updated RM UI With All Resources Shown.png.png, YARN-10251.003.patch, 
> YARN-10251.004.patch, YARN-10251.005.patch, YARN-10251.006.patch, 
> YARN-10251.branch-2.10.001.patch, YARN-10251.branch-2.10.002.patch, 
> YARN-10251.branch-2.10.003.patch, YARN-10251.branch-2.10.005.patch, 
> YARN-10251.branch-3.2.004.patch, YARN-10251.branch-3.2.005.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Updated] (YARN-10251) Show extended resources on legacy RM UI.

2020-06-25 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-10251:
--
Attachment: YARN-10251.006.patch

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> NodesPage UI With GPU columns.png, Updated RM UI With All Resources 
> Shown.png.png, YARN-10251.003.patch, YARN-10251.004.patch, 
> YARN-10251.005.patch, YARN-10251.006.patch, YARN-10251.branch-2.10.001.patch, 
> YARN-10251.branch-2.10.002.patch, YARN-10251.branch-2.10.003.patch, 
> YARN-10251.branch-2.10.005.patch, YARN-10251.branch-3.2.004.patch, 
> YARN-10251.branch-3.2.005.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Commented] (YARN-10278) CapacityScheduler test framework ProportionalCapacityPreemptionPolicyMockFramework need some review

2020-06-25 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145799#comment-17145799
 ] 

Eric Payne commented on YARN-10278:
---

[~snemeth], patch 002 looks fine to me. What are the target branches?

> CapacityScheduler test framework 
> ProportionalCapacityPreemptionPolicyMockFramework need some review
> ---
>
> Key: YARN-10278
> URL: https://issues.apache.org/jira/browse/YARN-10278
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-10278.001.patch, YARN-10278.002.patch
>
>
> This test framework class mocks a bit too heavily and simulates CS internal 
> behaviour with its mock methods beyond the point where it is reasonably 
> maintainable; any internal change in CS is a major headscratch.
> A lot of tests depend on this class, so we should approach it carefully, but 
> I think it's worth examining whether this class can be made a bit more 
> resilient to changes and easier to maintain, or at least documented better.






[jira] [Commented] (YARN-10251) Show extended resources on legacy RM UI.

2020-06-25 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145782#comment-17145782
 ] 

Jim Brennan commented on YARN-10251:


Thanks for the patch [~epayne]! Overall I think this looks good. I have a few 
comments:


 AppInfo
 - (nit) move the initialization of {{usageReport}} before the if statement so 
you can use it throughout that condition. Makes the diff a little bigger, but I 
think it’s worth it in this case.

NodesPage
 - The new entries for {{GPUs Used}} and {{GPUs Avail}} are still using 
{{".vcores"}} for the set selector. Is this correct? Shouldn't we use 
{{".gpus"}}?
{noformat}
.th(".vcores", "GPUs Used")
.th(".vcores", "GPUs Avail");
{noformat}

RmAppsBlock
 - I'm not sure I understand this logic:
{noformat}
.append((isAppInCompletedState && app.getAllocatedGpus() <= 0)
? "N/A" : String.valueOf(app.getAllocatedGpus()))
{noformat}
So if the app is not in a completed state we don't need to check for 
{{app.getAllocatedGpus() <= 0}}?
 Should that check be {{(isAppInCompletedState || app.getAllocatedGpus() <= 
0)}} ?

 - also, should that be checking for {{< 0}} or {{== -1}} instead?
 - replace "N/A" with UNAVAILABLE, which is defined to the same thing.

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> NodesPage UI With GPU columns.png, Updated RM UI With All Resources 
> Shown.png.png, YARN-10251.003.patch, YARN-10251.004.patch, 
> YARN-10251.005.patch, YARN-10251.branch-2.10.001.patch, 
> YARN-10251.branch-2.10.002.patch, YARN-10251.branch-2.10.003.patch, 
> YARN-10251.branch-2.10.005.patch, YARN-10251.branch-3.2.004.patch, 
> YARN-10251.branch-3.2.005.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.






[jira] [Commented] (YARN-10277) CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy

2020-06-25 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145754#comment-17145754
 ] 

Peter Bacsko commented on YARN-10277:
-

Thanks [~snemeth] for the patch, +1.

> CapacityScheduler test TestUserGroupMappingPlacementRule should build proper 
> hierarchy
> --
>
> Key: YARN-10277
> URL: https://issues.apache.org/jira/browse/YARN-10277
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-10277.001.patch, YARN-10277.002.patch, 
> YARN-10277.003.patch
>
>
> Since the CapacityScheduler internal implementation depends more and more on 
> queues being hierarchical, the test gets really hard to maintain. A lot of 
> test cases were failing because they used non-existent queues; the older 
> placement rule solution ignored missing parents, but since the leaf queue 
> change in CS we must be able to get a full path for any queue, since all 
> queues are referenced by their full path.
> This test should reflect this: instead of creating and expecting the 
> existence of fictional queues, it should create a proper queue hierarchy, 
> with a better way to describe it. 
> Currently we set up a bunch of mockito "when" statements to simulate the 
> queue behavior, but this is a hassle to maintain and it is easy to miss a few 
> methods.
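For illustration, a proper hierarchy could be wired up with a small helper
instead of scattering "when" stubs per test case (a hedged sketch; the queue
interface and names below are invented for the example and are not CS code):
{code:java}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class QueueHierarchySketch {
  interface Queue {
    String getQueuePath();
    Queue getParent();
  }

  static Queue mockQueue(String path, Queue parent) {
    Queue queue = mock(Queue.class);
    when(queue.getQueuePath()).thenReturn(path); // full path, e.g. "root.a.a1"
    when(queue.getParent()).thenReturn(parent);
    return queue;
  }

  // Build root -> a -> a1 once, so every test sees consistent full paths.
  static Queue buildSampleLeaf() {
    Queue root = mockQueue("root", null);
    Queue a = mockQueue("root.a", root);
    return mockQueue("root.a.a1", a);
  }
}
{code}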






[jira] [Commented] (YARN-10277) CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy

2020-06-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145720#comment-17145720
 ] 

Hadoop QA commented on YARN-10277:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
40s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26211/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10277 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006443/YARN-10277.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 1c33960a8f12 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6a8fd73b273 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 

[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145717#comment-17145717
 ] 

Eric Badger commented on YARN-9809:
---

The TestFairScheduler and TestFairSchedulerPreemption test failures are 
unrelated to this JIRA as they have also been reported in 
https://issues.apache.org/jira/browse/YARN-10329

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch, YARN-9809.005.patch, 
> YARN-9809.006.patch
>
>
> Currently, if the NM is unhealthy when it registers with the RM, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Commented] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145711#comment-17145711
 ] 

Eric Badger commented on YARN-9809:
---

Patch 006 moves {{ClusterMetrics.getMetrics().incrNumActiveNodes();}} into 
{{reportNodeRunning}} inside the addNodeTransition. This fixes the failing 
unit test and prevents a scenario where we add an unhealthy node as RUNNING and 
then quickly switch it to UNHEALTHY. This way we go straight to UNHEALTHY.
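Sketched out, that transition looks something like this (hedged; only
{{reportNodeRunning}} and {{incrNumActiveNodes}} are named in the patch
discussion, the rest of this fragment is illustrative):
{code:java}
// Decide the initial state from the health status supplied at
// registration, so an unhealthy node never passes through RUNNING.
NodeState initialState = healthStatus.getIsNodeHealthy()
    ? NodeState.RUNNING : NodeState.UNHEALTHY;
if (initialState == NodeState.RUNNING) {
  reportNodeRunning(rmNode); // also increments NumActiveNodes (patch 006)
}
{code}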

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch, YARN-9809.005.patch, 
> YARN-9809.006.patch
>
>
> Currently, if the NM is unhealthy when it registers with the RM, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Updated] (YARN-9809) NMs should supply a health status when registering with RM

2020-06-25 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-9809:
--
Attachment: YARN-9809.006.patch

> NMs should supply a health status when registering with RM
> --
>
> Key: YARN-9809
> URL: https://issues.apache.org/jira/browse/YARN-9809
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9809.001.patch, YARN-9809.002.patch, 
> YARN-9809.003.patch, YARN-9809.004.patch, YARN-9809.005.patch, 
> YARN-9809.006.patch
>
>
> Currently, if the NM is unhealthy when it registers with the RM, many 
> containers can be scheduled on it before the first heartbeat. After the first 
> heartbeat, the RM will mark the NM as unhealthy and kill all of the 
> containers.






[jira] [Commented] (YARN-9903) Support reservations continue looking for Node Labels

2020-06-25 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145096#comment-17145096
 ] 

Jim Brennan commented on YARN-9903:
---

Thanks [~epayne]!  I will put up a patch for branch-3.2 and check the earlier 
branches as well.


> Support reservations continue looking for Node Labels
> -
>
> Key: YARN-9903
> URL: https://issues.apache.org/jira/browse/YARN-9903
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Tarun Parimi
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-9903.001.patch, YARN-9903.002.patch, 
> YARN-9903.003.patch, YARN-9903.004.patch
>
>
> YARN-1769 brought in the reservations-continue-looking feature, which 
> improves several resource reservation scenarios. However, it is currently not 
> handled when nodes have a label assigned to them. This is useful and in many 
> cases necessary even for Node Labels. So we should look to support this for 
> node labels also.
> For example, in AbstractCSQueue.java, we have the below TODO.
> {code:java}
> // TODO, now only consider reservation cases when the node has no label 
> if (this.reservationsContinueLooking && nodePartition.equals( 
> RMNodeLabelsManager.NO_LABEL) && Resources.greaterThan( resourceCalculator, 
> clusterResource, resourceCouldBeUnreserved, Resources.none())) {
> {code}
> cc [~sunilg]
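The proposed direction would, roughly, drop the NO_LABEL restriction from that
condition (a sketch only, not the actual patch):
{code:java}
// Sketch: apply reservations-continue-looking to any node partition,
// not just NO_LABEL, as the TODO above suggests.
if (this.reservationsContinueLooking && Resources.greaterThan(
    resourceCalculator, clusterResource,
    resourceCouldBeUnreserved, Resources.none())) {
  // continue-looking logic, now reached for labeled nodes as well
}
{code}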






[jira] [Commented] (YARN-9903) Support reservations continue looking for Node Labels

2020-06-25 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145091#comment-17145091
 ] 

Eric Payne commented on YARN-9903:
--

[~Jim_Brennan], thanks for the work on this JIRA and especially for the latest 
patch. It looks good, but we will need patches for branch-3.2 and earlier. 
YARN-9052 (MockRMAppSubmissionData.Builder) was only introduced in branch-3.3 
and was not backported.

> Support reservations continue looking for Node Labels
> -
>
> Key: YARN-9903
> URL: https://issues.apache.org/jira/browse/YARN-9903
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Tarun Parimi
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-9903.001.patch, YARN-9903.002.patch, 
> YARN-9903.003.patch, YARN-9903.004.patch
>
>
> YARN-1769 brought in the reservations-continue-looking feature, which 
> improves several resource reservation scenarios. However, it is currently not 
> handled when nodes have a label assigned to them. This is useful and in many 
> cases necessary even for Node Labels. So we should look to support this for 
> node labels also.
> For example, in AbstractCSQueue.java, we have the below TODO.
> {code:java}
> // TODO, now only consider reservation cases when the node has no label 
> if (this.reservationsContinueLooking && nodePartition.equals( 
> RMNodeLabelsManager.NO_LABEL) && Resources.greaterThan( resourceCalculator, 
> clusterResource, resourceCouldBeUnreserved, Resources.none())) {
> {code}
> cc [~sunilg]






[jira] [Commented] (YARN-10279) Avoid unnecessary QueueMappingEntity creations

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145042#comment-17145042
 ] 

Hudson commented on YARN-10279:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18379 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18379/])
YARN-10279. Avoid unnecessary QueueMappingEntity creations. Contributed 
(snemeth: rev 6a8fd73b273629d0c7c071cf4d090f67d9b96fe4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/UserGroupMappingPlacementRule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/QueuePlacementRuleUtils.java


> Avoid unnecessary QueueMappingEntity creations
> --
>
> Key: YARN-10279
> URL: https://issues.apache.org/jira/browse/YARN-10279
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Hudáky Márton Gyula
>Priority: Minor
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10279.001.patch, YARN-10279.003.patch, 
> YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch
>
>
> In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule 
> classes we create new instances of the QueueMappingEntity class. In some 
> cases we simply copy the instance we already received, which is an 
> unnecessary duplication since the class is immutable.
> This is just a minor improvement that probably doesn't have much impact, but 
> it still puts some unnecessary load on the GC.
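Schematically, the pattern being removed is (a hedged sketch; the accessor and
constructor signatures are assumptions, not the actual code):
{code:java}
// Before: copying an immutable mapping achieves nothing but GC load.
QueueMappingEntity copy =
    new QueueMappingEntity(mapping.getSource(), mapping.getQueue());

// After: the class is immutable, so the received instance can be
// returned or stored directly.
QueueMappingEntity reused = mapping;
{code}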






[jira] [Commented] (YARN-10277) CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy

2020-06-25 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145025#comment-17145025
 ] 

Szilard Nemeth commented on YARN-10277:
---

Thanks [~pbacsko] for reviewing.
Added new patch that fixes the checkstyle warnings.
Let's wait for a green jenkins build.

> CapacityScheduler test TestUserGroupMappingPlacementRule should build proper 
> hierarchy
> --
>
> Key: YARN-10277
> URL: https://issues.apache.org/jira/browse/YARN-10277
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-10277.001.patch, YARN-10277.002.patch, 
> YARN-10277.003.patch
>
>
> Since the CapacityScheduler internal implementation depends more and more on 
> queues being hierarchical, the test gets really hard to maintain. A lot of 
> test cases were failing because they used non-existent queues; the older 
> placement rule solution ignored missing parents, but since the leaf queue 
> change in CS we must be able to get a full path for any queue, since all 
> queues are referenced by their full path.
> This test should reflect this: instead of creating and expecting the 
> existence of fictional queues, it should create a proper queue hierarchy, 
> with a better way to describe it. 
> Currently we set up a bunch of mockito "when" statements to simulate the 
> queue behavior, but this is a hassle to maintain and it is easy to miss a few 
> methods.






[jira] [Updated] (YARN-10277) CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy

2020-06-25 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10277:
--
Attachment: YARN-10277.003.patch

> CapacityScheduler test TestUserGroupMappingPlacementRule should build proper 
> hierarchy
> --
>
> Key: YARN-10277
> URL: https://issues.apache.org/jira/browse/YARN-10277
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-10277.001.patch, YARN-10277.002.patch, 
> YARN-10277.003.patch
>
>
> Since the CapacityScheduler internal implementation depends more and more on 
> queues being hierarchical, the test gets really hard to maintain. A lot of 
> test cases were failing because they used non-existent queues; the older 
> placement rule solution ignored missing parents, but since the leaf queue 
> change in CS we must be able to get a full path for any queue, since all 
> queues are referenced by their full path.
> This test should reflect this: instead of creating and expecting the 
> existence of fictional queues, it should create a proper queue hierarchy, 
> with a better way to describe it. 
> Currently we set up a bunch of mockito "when" statements to simulate the 
> queue behavior, but this is a hassle to maintain and it is easy to miss a few 
> methods.






[jira] [Updated] (YARN-10279) Avoid unnecessary QueueMappingEntity creations

2020-06-25 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10279:
--
Fix Version/s: 3.4.0

> Avoid unnecessary QueueMappingEntity creations
> --
>
> Key: YARN-10279
> URL: https://issues.apache.org/jira/browse/YARN-10279
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Hudáky Márton Gyula
>Priority: Minor
> Fix For: 3.4.0, 3.3.1
>
> Attachments: YARN-10279.001.patch, YARN-10279.003.patch, 
> YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch
>
>
> In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule 
> classes we create new instances of the QueueMappingEntity class. In some 
> cases we simply copy the instance we already received, which is an 
> unnecessary duplication since the class is immutable.
> This is just a minor improvement that probably doesn't have much impact, but 
> it still puts some unnecessary load on the GC.






[jira] [Updated] (YARN-10279) Avoid unnecessary QueueMappingEntity creations

2020-06-25 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10279:
--
Fix Version/s: 3.3.1

> Avoid unnecessary QueueMappingEntity creations
> --
>
> Key: YARN-10279
> URL: https://issues.apache.org/jira/browse/YARN-10279
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Hudáky Márton Gyula
>Priority: Minor
> Fix For: 3.3.1
>
> Attachments: YARN-10279.001.patch, YARN-10279.003.patch, 
> YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch
>
>
> In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule 
> classes we create new instances of the QueueMappingEntity class. In some 
> cases we simply copy the instance we already received, which is an 
> unnecessary duplication since the class is immutable.
> This is just a minor improvement that probably doesn't have much impact, but 
> it still puts some unnecessary load on the GC.






[jira] [Commented] (YARN-10279) Avoid unnecessary QueueMappingEntity creations

2020-06-25 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145021#comment-17145021
 ] 

Szilard Nemeth commented on YARN-10279:
---

Thanks [~mhudaky],
Latest patch LGTM, committed to trunk and branch-3.3
Thanks [~adam.antal] for the review.

Thanks [~mhudaky] for filing the FS flaky jira as well.
Resolving this jira.

> Avoid unnecessary QueueMappingEntity creations
> --
>
> Key: YARN-10279
> URL: https://issues.apache.org/jira/browse/YARN-10279
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Hudáky Márton Gyula
>Priority: Minor
> Attachments: YARN-10279.001.patch, YARN-10279.003.patch, 
> YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch
>
>
> In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule 
> classes we create new instances of the QueueMappingEntity class. In some 
> cases we simply copy the instance we already received, which is an 
> unnecessary duplication since the class is immutable.
> This is just a minor improvement that probably doesn't have much impact, but 
> it still puts some unnecessary load on the GC.






[jira] [Commented] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145010#comment-17145010
 ] 

Andras Gyori commented on YARN-10327:
-

Thank you for working on this issue [~mhudaky]. The patch looks good to me. 
Following the same logic, there are invalidOpts as well that could belong in 
the testLogsCLIWithInvalidArgs method. However, for this issue, I think this is 
enough.

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10327.001.patch
>
>
> TestLogsCLI has a separate function to test for an invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including the application ID). One of them should 
> be eliminated.
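A minimal sketch of the consolidation (hedged; the method names follow the
description above, while the CLI setup helper and expected exit code are
illustrative assumptions):
{code:java}
@Test
public void testLogsCLIWithInvalidArgs() throws Exception {
  LogsCLI cli = createCli(); // hypothetical setup helper
  // Case previously covered by the separate #testInvalidApplicationId:
  int exitCode = cli.run(new String[] {"-applicationId", "not_an_app_id"});
  assertTrue("invalid appId should fail", exitCode == -1);
  // ... remaining invalid-argument cases stay as they are
}
{code}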






[jira] [Commented] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17145002#comment-17145002
 ] 

Hadoop QA commented on YARN-10327:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: 
The patch generated 0 new + 121 unchanged - 4 fixed = 121 total (was 125) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
26s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26210/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10327 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006428/YARN-10327.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux ea463f5f4e3d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4b5b54c73f2 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/26210/testReport/ |
| Max. process+thread count | 553 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Comment Edited] (YARN-10251) Show extended resources on legacy RM UI.

2020-06-25 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144128#comment-17144128
 ] 

Eric Payne edited comment on YARN-10251 at 6/25/20, 3:07 PM:
-

[~jhung], thanks for following this JIRA. I wonder if you would have time for a 
review.

[~jeagles], since you have some experience in this area, I wonder if you would 
also have time for a review.


was (Author: eepayne):
[~jhung], thanks for following this JIRA. I wonder if you would have time for a 
review.

[~jeaglesham], since you have some experience in this area, I wonder if you 
would also have time for a review.

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> NodesPage UI With GPU columns.png, Updated RM UI With All Resources 
> Shown.png.png, YARN-10251.003.patch, YARN-10251.004.patch, 
> YARN-10251.005.patch, YARN-10251.branch-2.10.001.patch, 
> YARN-10251.branch-2.10.002.patch, YARN-10251.branch-2.10.003.patch, 
> YARN-10251.branch-2.10.005.patch, YARN-10251.branch-3.2.004.patch, 
> YARN-10251.branch-3.2.005.patch
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10328) Too many ZK Curator NodeExists exception logs in YARN Service AM logs

2020-06-25 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-10328:
-
Summary: Too many ZK Curator NodeExists exception logs in YARN Service AM 
logs  (was: Too many ZK Curator NodeExists logs in YARN Service AM logs)

> Too many ZK Curator NodeExists exception logs in YARN Service AM logs
> -
>
> Key: YARN-10328
> URL: https://issues.apache.org/jira/browse/YARN-10328
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> The following debug logs are printed every time a component is started.
> {code:java}
> [pool-6-thread-3] DEBUG zk.CuratorService - path already present: 
> /registry/users/server/services/yarn-service/default-worker/components
> org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = 
> NodeExists for 
> /registry/users/hetuserver/services/yarn-service/default-worker/components
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1480)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:740)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:723)
>   at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:109)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:720)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:484)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:474)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:454)
>   at 
> org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
>   at 
> org.apache.hadoop.registry.client.impl.zk.CuratorService.zkMkPath(CuratorService.java:587)
>   at 
> org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.mknode(RegistryOperationsService.java:99)
>   at 
> org.apache.hadoop.yarn.service.registry.YarnRegistryViewForProviders.putComponent(YarnRegistryViewForProviders.java:146)
>   at 
> org.apache.hadoop.yarn.service.registry.YarnRegistryViewForProviders.putComponent(YarnRegistryViewForProviders.java:128)
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance.updateServiceRecord(ComponentInstance.java:511)
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance.updateContainerStatus(ComponentInstance.java:449)
>   at 
> org.apache.hadoop.yarn.service.component.instance.ComponentInstance$ContainerStatusRetriever.run(ComponentInstance.java:620)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
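
For illustration, a minimal sketch of the kind of change that keeps these expected NodeExists errors out of the logs. It assumes a plain Curator client; the class name and registry path below are illustrative, not the actual CuratorService code.

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.KeeperException;

// Hypothetical sketch: create a registry znode idempotently and swallow
// NodeExistsException, since an already-present path is the expected case
// on component restart and needs no stack trace in the logs.
public class IdempotentMkPath {
  static void mkPath(CuratorFramework curator, String path) throws Exception {
    try {
      curator.create().creatingParentsIfNeeded().forPath(path);
    } catch (KeeperException.NodeExistsException e) {
      // Path already present: treat as success instead of logging the
      // exception together with its full stack trace.
    }
  }

  public static void main(String[] args) throws Exception {
    CuratorFramework curator = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    curator.start();
    mkPath(curator, "/registry/users/example/components");
    curator.close();
  }
}
{code}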



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10329) Flaky test cases in Fair Scheduler

2020-06-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hudáky Márton Gyula updated YARN-10329:
---
Description: 
The following 2 test cases are failing on unrelated patches very often:

hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption

Here is an example of these failures:
{code:java}
[ERROR] Tests run: 105, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
27.481 s <<< FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
[ERROR] 
testNormalizationUsingQueueMaximumAllocation(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler)
  Time elapsed: 0.178 s  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: Metrics source 
PartitionQueueMetrics,partition= already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getPartitionMetrics(QueueMetrics.java:360)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.incrPendingResources(QueueMetrics.java:599)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:399)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:331)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:358)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateResourceRequests(AppSchedulingInfo.java:194)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateResourceRequests(SchedulerApplicationAttempt.java:462)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:931)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.allocateAppAttempt(TestFairScheduler.java:435)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testNormalizationUsingQueueMaximumAllocation(TestFairScheduler.java:409)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 

[jira] [Commented] (YARN-10279) Avoid unnecessary QueueMappingEntity creations

2020-06-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144956#comment-17144956
 ] 

Hudáky Márton Gyula commented on YARN-10279:


Filed YARN-10329 for the FS flakies.

> Avoid unnecessary QueueMappingEntity creations
> --
>
> Key: YARN-10279
> URL: https://issues.apache.org/jira/browse/YARN-10279
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Gergely Pollak
>Assignee: Hudáky Márton Gyula
>Priority: Minor
> Attachments: YARN-10279.001.patch, YARN-10279.003.patch, 
> YARN-10279.004.patch, YARN-10279.005.patch, YARN-10279.006.patch
>
>
> In the CS UserGroupMappingPlacementRule and AppNameMappingPlacementRule classes 
> we create new instances of the QueueMappingEntity class. In some cases we simply 
> copy the instance we already received, which is an unnecessary duplication since 
> the class is immutable.
> This is just a minor improvement and probably doesn't have much impact, but the 
> extra copies still put unnecessary load on the GC.
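
As a hedged illustration of the point, a self-contained sketch; the constructor and getters here are assumptions about QueueMappingEntity's shape, not the YARN-10279 patch itself.

{code:java}
// Minimal stand-in for the immutable mapping class discussed above.
public final class QueueMappingEntity {
  private final String source;
  private final String queue;

  public QueueMappingEntity(String source, String queue) {
    this.source = source;
    this.queue = queue;
  }

  public String getSource() { return source; }
  public String getQueue() { return queue; }

  // Wasteful: duplicating an immutable value only adds GC pressure.
  static QueueMappingEntity copyOf(QueueMappingEntity m) {
    return new QueueMappingEntity(m.getSource(), m.getQueue());
  }

  // Sufficient: an immutable instance is safe to share as-is.
  static QueueMappingEntity reuse(QueueMappingEntity m) {
    return m;
  }

  public static void main(String[] args) {
    QueueMappingEntity m = new QueueMappingEntity("u:%user:%user", "default");
    System.out.println(reuse(m) == m);   // true, no allocation
    System.out.println(copyOf(m) == m);  // false, needless copy
  }
}
{code}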



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10329) Flaky test cases in Fair Scheduler

2020-06-25 Thread Jira
Hudáky Márton Gyula created YARN-10329:
--

 Summary: Flaky test cases in Fair Scheduler
 Key: YARN-10329
 URL: https://issues.apache.org/jira/browse/YARN-10329
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Hudáky Márton Gyula


The following 2 test cases are failing on unrelated patches very often:

hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler

hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption

Here is an example of these failures:
{code:java}
[ERROR] Tests run: 105, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
27.481 s <<< FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
[ERROR] 
testNormalizationUsingQueueMaximumAllocation(org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler)
  Time elapsed: 0.178 s  <<< ERROR!
org.apache.hadoop.metrics2.MetricsException: Metrics source 
PartitionQueueMetrics,partition= already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.getPartitionMetrics(QueueMetrics.java:360)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.incrPendingResources(QueueMetrics.java:599)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:399)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:331)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.internalAddResourceRequests(AppSchedulingInfo.java:358)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.updateResourceRequests(AppSchedulingInfo.java:194)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.updateResourceRequests(SchedulerApplicationAttempt.java:462)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:931)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.allocateAppAttempt(TestFairScheduler.java:435)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testNormalizationUsingQueueMaximumAllocation(TestFairScheduler.java:409)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 

[jira] [Created] (YARN-10328) Too many ZK Curator NodeExists logs in YARN Service AM logs

2020-06-25 Thread Bilwa S T (Jira)
Bilwa S T created YARN-10328:


 Summary: Too many ZK Curator NodeExists logs in YARN Service AM 
logs
 Key: YARN-10328
 URL: https://issues.apache.org/jira/browse/YARN-10328
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bilwa S T
Assignee: Bilwa S T


The following debug logs are printed every time a component is started.
{code:java}
[pool-6-thread-3] DEBUG zk.CuratorService - path already present: 
/registry/users/server/services/yarn-service/default-worker/components
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = 
NodeExists for 
/registry/users/hetuserver/services/yarn-service/default-worker/components
at org.apache.zookeeper.KeeperException.create(KeeperException.java:128)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1480)
at 
org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:740)
at 
org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:723)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:109)
at 
org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:720)
at 
org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:484)
at 
org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:474)
at 
org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:454)
at 
org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
at 
org.apache.hadoop.registry.client.impl.zk.CuratorService.zkMkPath(CuratorService.java:587)
at 
org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.mknode(RegistryOperationsService.java:99)
at 
org.apache.hadoop.yarn.service.registry.YarnRegistryViewForProviders.putComponent(YarnRegistryViewForProviders.java:146)
at 
org.apache.hadoop.yarn.service.registry.YarnRegistryViewForProviders.putComponent(YarnRegistryViewForProviders.java:128)
at 
org.apache.hadoop.yarn.service.component.instance.ComponentInstance.updateServiceRecord(ComponentInstance.java:511)
at 
org.apache.hadoop.yarn.service.component.instance.ComponentInstance.updateContainerStatus(ComponentInstance.java:449)
at 
org.apache.hadoop.yarn.service.component.instance.ComponentInstance$ContainerStatusRetriever.run(ComponentInstance.java:620)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9903) Support reservations continue looking for Node Labels

2020-06-25 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144944#comment-17144944
 ] 

Jim Brennan commented on YARN-9903:
---

Thanks [~pbacsko]!
{quote}I'm not sure if it's really necessary to pass the label string, it 
depends on what {{reservedContainers}} contains. If it cannot contain nodes 
that belong to other partitions and we use exclusive labels then I think we are 
fine.
{quote}
I think you are correct on this point.


> Support reservations continue looking for Node Labels
> -
>
> Key: YARN-9903
> URL: https://issues.apache.org/jira/browse/YARN-9903
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Tarun Parimi
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-9903.001.patch, YARN-9903.002.patch, 
> YARN-9903.003.patch, YARN-9903.004.patch
>
>
> YARN-1769 brought in the reservations-continue-looking feature, which improves 
> several resource reservation scenarios. However, it is currently not applied 
> when nodes have a label assigned to them. The feature is useful, and in many 
> cases necessary, for Node Labels as well, so we should look to support it for 
> node labels too.
> For example, in AbstractCSQueue.java, we have the below TODO.
> {code:java}
> // TODO, now only consider reservation cases when the node has no label 
> if (this.reservationsContinueLooking && nodePartition.equals( 
> RMNodeLabelsManager.NO_LABEL) && Resources.greaterThan( resourceCalculator, 
> clusterResource, resourceCouldBeUnreserved, Resources.none())) {
> {code}
> cc [~sunilg]
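
To make the TODO concrete, a conceptual, runnable sketch of the guard; longs stand in for Resource/ResourceCalculator, and the label-aware variant is an assumption about the direction of the fix, not the actual patch.

{code:java}
public class ContinueLookingGuard {
  // Mirrors RMNodeLabelsManager.NO_LABEL, the default (empty) partition.
  static final String NO_LABEL = "";

  // Current behavior per the TODO above: only the default partition
  // qualifies for the reservations-continue-looking path.
  static boolean currentGuard(boolean continueLooking, String nodePartition,
      long resourceCouldBeUnreserved) {
    return continueLooking && NO_LABEL.equals(nodePartition)
        && resourceCouldBeUnreserved > 0;
  }

  // Direction the JIRA proposes: let labeled partitions qualify as well.
  static boolean labelAwareGuard(boolean continueLooking,
      long resourceCouldBeUnreserved) {
    return continueLooking && resourceCouldBeUnreserved > 0;
  }

  public static void main(String[] args) {
    System.out.println(currentGuard(true, "gpu", 1024));  // false today
    System.out.println(labelAwareGuard(true, 1024));      // true with the fix
  }
}
{code}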



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144914#comment-17144914
 ] 

Hudáky Márton Gyula edited comment on YARN-10327 at 6/25/20, 1:43 PM:
--

TestLogsCLI#testInvalidApplicationId was removed, and the test case that checks 
for an invalid application ID in TestLogsCLI#testLogsCLIWithInvalidArgs is kept.


was (Author: mhudaky):
The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10327.001.patch
>
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144914#comment-17144914
 ] 

Hudáky Márton Gyula edited comment on YARN-10327 at 6/25/20, 1:39 PM:
--

The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));


was (Author: mhudaky):
The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10327.001.patch
>
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144914#comment-17144914
 ] 

Hudáky Márton Gyula edited comment on YARN-10327 at 6/25/20, 1:39 PM:
--

The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));


was (Author: mhudaky):
The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10327.001.patch
>
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144914#comment-17144914
 ] 

Hudáky Márton Gyula edited comment on YARN-10327 at 6/25/20, 1:37 PM:
--

The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));


was (Author: mhudaky):
The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10327.001.patch
>
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144914#comment-17144914
 ] 

Hudáky Márton Gyula edited comment on YARN-10327 at 6/25/20, 1:36 PM:
--

The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

There is another test case in testLogsCLIWithInvalidArgs() testing the same 
failure:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));


was (Author: mhudaky):
The removed test case looks like this:

YarnClient mockYarnClient = createMockYarnClient(
    YarnApplicationState.FINISHED,
    UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
int exitCode = cli.run(new String[] { "-applicationId", "not_an_app_id" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().startsWith(
    "Invalid ApplicationId specified"));

This one tests the very same error in the testLogsCLIWithInvalidArgs() function 
and is not removed:

YarnClient mockYarnClient =
    createMockYarnClient(YarnApplicationState.FINISHED,
        UserGroupInformation.getCurrentUser().getShortUserName());
LogsCLI cli = new LogsCLIForTest(mockYarnClient);
cli.setConf(conf);
// Specify an invalid applicationId
int exitCode = cli.run(new String[] { "-applicationId", "123" });
assertTrue(exitCode == -1);
assertTrue(sysErrStream.toString().contains(
    "Invalid ApplicationId specified"));

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
> Attachments: YARN-10327.001.patch
>
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hudáky Márton Gyula updated YARN-10327:
---
Attachment: (was: YARN-10327.001.patch)

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hudáky Márton Gyula updated YARN-10327:
---
Attachment: YARN-10327.001.patch

> Remove duplication of checking for invalid application ID in TestLogsCLI
> 
>
> Key: YARN-10327
> URL: https://issues.apache.org/jira/browse/YARN-10327
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Hudáky Márton Gyula
>Assignee: Hudáky Márton Gyula
>Priority: Trivial
>
> TestLogsCLI has a separate function to test for invalid application ID 
> (#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
> multiple invalid arguments (including application ID). One of them should be 
> eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10327) Remove duplication of checking for invalid application ID in TestLogsCLI

2020-06-25 Thread Jira
Hudáky Márton Gyula created YARN-10327:
--

 Summary: Remove duplication of checking for invalid application ID 
in TestLogsCLI
 Key: YARN-10327
 URL: https://issues.apache.org/jira/browse/YARN-10327
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Hudáky Márton Gyula
Assignee: Hudáky Márton Gyula


TestLogsCLI has a separate function to test for invalid application ID 
(#testInvalidApplicationId) and another (#testLogsCLIWithInvalidArgs) to test 
multiple invalid arguments (including application ID). One of them should be 
eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10318) ApplicationHistory Web UI incorrect column indexing

2020-06-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144787#comment-17144787
 ] 

Hadoop QA commented on YARN-10318:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 10 unchanged - 0 fixed = 11 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/26209/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10318 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006408/YARN-10318.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 282fd60da678 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4b5b54c73f2 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | 

[jira] [Comment Edited] (YARN-10318) ApplicationHistory Web UI incorrect column indexing

2020-06-25 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17144742#comment-17144742
 ] 

Andras Gyori edited comment on YARN-10318 at 6/25/20, 8:19 AM:
---

This small fix extends the AH Web UI with an application tags column, which is in 
line with the current indexing, as shown in the attached image. !Screenshot 
2020-06-25 at 10.15.32.png|width=939,height=195!


was (Author: gandras):
This small fix extends the AH Web UI with an application tags column, which is in 
line with the current indexing.

> ApplicationHistory Web UI incorrect column indexing
> ---
>
> Key: YARN-10318
> URL: https://issues.apache.org/jira/browse/YARN-10318
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: Screenshot 2020-06-25 at 10.15.32.png, 
> YARN-10318.001.patch, image-2020-06-16-17-14-55-921.png
>
>
> The ApplicationHistory UI is broken due to incorrect column indexing. This 
> bug was probably introduced in YARN-10038, which presumes that the table 
> contains the application tag column (which is true for the RM Web UI, but not 
> for the AH Web UI).
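
To illustrate the failure mode, a small hedged sketch; the column names are approximations of the two web UI tables, not the exact YARN-10038 definitions.

{code:java}
import java.util.Arrays;
import java.util.List;

// Hard-coding a column index breaks as soon as one UI omits a column:
// code that assumes the "Application Tags" column exists holds for the
// RM table but shifts every subsequent AH column by one.
public class ColumnIndexExample {
  public static void main(String[] args) {
    List<String> rmColumns = Arrays.asList(
        "ID", "User", "Name", "Application Type", "Application Tags",
        "Queue", "StartTime", "FinishTime", "State");
    List<String> ahColumns = Arrays.asList(
        "ID", "User", "Name", "Application Type",
        "Queue", "StartTime", "FinishTime", "State");

    System.out.println(rmColumns.get(5)); // Queue
    System.out.println(ahColumns.get(5)); // StartTime -- off by one

    // Adding the missing column (as the patch does) or resolving the
    // index by name keeps both tables consistent:
    System.out.println(ahColumns.indexOf("Queue")); // 4
  }
}
{code}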



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10318) ApplicationHistory Web UI incorrect column indexing

2020-06-25 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10318:

Attachment: Screenshot 2020-06-25 at 10.15.32.png

> ApplicationHistory Web UI incorrect column indexing
> ---
>
> Key: YARN-10318
> URL: https://issues.apache.org/jira/browse/YARN-10318
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Andras Gyori
>Assignee: Andras Gyori
>Priority: Minor
> Attachments: Screenshot 2020-06-25 at 10.15.32.png, 
> image-2020-06-16-17-14-55-921.png
>
>
> The ApplicationHistory UI is broken due to incorrect column indexing. This 
> bug was probably introduced in YARN-10038, which presumes that the table 
> contains the application tag column (which is true for the RM Web UI, but not 
> for the AH Web UI).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org