[jira] [Commented] (YARN-10247) Application priority queue ACLs are not respected

2020-04-28 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095107#comment-17095107
 ] 

Brahma Reddy Battula commented on YARN-10247:
-

[~prabhujoseph] and [~shuzirra], could you help review this? It looks like a 
straightforward change.

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from the queue path JIRA.
> App priority ACLs are not working correctly.
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported 
> priority is 4. However, I can still submit an application with priority 6 as this user.
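
For reference, a minimal, self-contained sketch of the kind of check acl_application_max_priority is expected to enforce. This is not the attached patch; the class and method names below are illustrative assumptions, not CapacityScheduler APIs.

{code:java}
import java.util.Map;

// Illustrative sketch only: enforce a per-user maximum application priority,
// as configured via acl_application_max_priority ([user=john ... max_priority=4]).
public class AppPriorityAclSketch {

  /** Returns the requested priority if allowed, otherwise throws. */
  static int checkPriority(String user, int requestedPriority,
      Map<String, Integer> maxPriorityByUser) {
    // Assumed default of 0 when the user has no ACL entry.
    int maxAllowed = maxPriorityByUser.getOrDefault(user, 0);
    if (requestedPriority > maxAllowed) {
      throw new IllegalArgumentException("User " + user + " requested priority "
          + requestedPriority + " but is only allowed up to " + maxAllowed);
    }
    return requestedPriority;
  }

  public static void main(String[] args) {
    Map<String, Integer> acl = Map.of("john", 4);
    System.out.println(checkPriority("john", 4, acl)); // accepted: 4
    checkPriority("john", 6, acl);                     // should be rejected
  }
}
{code}

With the regression described above, a submission that should hit the rejection branch is accepted instead.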



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10248) When allowed-gpu-devices is configured, excluded GPUs are still visible to containers

2020-04-28 Thread zhao yufei (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095103#comment-17095103
 ] 

zhao yufei commented on YARN-10248:
---

[~ztang] I have uploaded the patch and clicked Submit Patch.

> When allowed-gpu-devices is configured, excluded GPUs are still visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Assignee: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.2.1
>
> Attachments: YARN-10248-branch-3.2.001.path, 
> YARN-10248-branch-3.2.001.path
>
>
> I have a server with two GPUs, and I want to use only one of them within the 
> YARN cluster.
> According to the Hadoop documentation, I set the following configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the 
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found it is a bug.
> The problem is:
> when allowed-gpu-devices is specified, GpuDiscoverer populates the usable GPUs 
> from it;
> then, when some of the GPUs are assigned to a container, it sets the denied GPUs 
> for the container,
> but it never considers the GPUs excluded on the host.
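
Not from the attached patch: a self-contained sketch of the reported behaviour versus the intended one, assuming the per-container deny list is computed from sets of GPU minor numbers. Class and method names are illustrative.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the reported problem: the deny list handed to a
// container must start from every GPU on the host, not only the allowed ones,
// otherwise host-excluded GPUs stay visible inside the container.
public class GpuDenyListSketch {

  /** Reported behaviour: deny list derived only from the allowed set. */
  static Set<Integer> deniedBuggy(Set<Integer> allowed, Set<Integer> assigned) {
    Set<Integer> denied = new HashSet<>(allowed);
    denied.removeAll(assigned);
    return denied;                       // misses GPUs excluded on the host
  }

  /** Intended behaviour: deny everything on the host that was not assigned. */
  static Set<Integer> deniedFixed(Set<Integer> allOnHost, Set<Integer> assigned) {
    Set<Integer> denied = new HashSet<>(allOnHost);
    denied.removeAll(assigned);
    return denied;                       // includes host-excluded GPUs
  }

  public static void main(String[] args) {
    Set<Integer> allOnHost = Set.of(0, 1);   // two GPUs on the host
    Set<Integer> allowed = Set.of(1);        // only one GPU allowed to YARN
    Set<Integer> assigned = Set.of(1);       // container got yarn.io/gpu=1
    System.out.println(deniedBuggy(allowed, assigned));   // [] -> GPU 0 still visible
    System.out.println(deniedFixed(allOnHost, assigned)); // [0] -> GPU 0 hidden
  }
}
{code}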






[jira] [Updated] (YARN-10248) When allowed-gpu-devices is configured, excluded GPUs are still visible to containers

2020-04-28 Thread zhao yufei (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhao yufei updated YARN-10248:
--
Attachment: YARN-10248-branch-3.2.001.path

> When allowed-gpu-devices is configured, excluded GPUs are still visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Assignee: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
> Attachments: YARN-10248-branch-3.2.001.path, 
> YARN-10248-branch-3.2.001.path
>
>
> I have a server with two GPUs, and I want to use only one of them within the 
> YARN cluster.
> According to the Hadoop documentation, I set the following configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the 
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found it is a bug.
> The problem is:
> when allowed-gpu-devices is specified, GpuDiscoverer populates the usable GPUs 
> from it;
> then, when some of the GPUs are assigned to a container, it sets the denied GPUs 
> for the container,
> but it never considers the GPUs excluded on the host.






[jira] [Commented] (YARN-10248) When allowed-gpu-devices is configured, excluded GPUs are still visible to containers

2020-04-28 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17095035#comment-17095035
 ] 

Zhankun Tang commented on YARN-10248:
-

[~jasstionzyf], Thanks for the contribution! Hadoop's GitHub integration is not 
yet good enough for CI/CD.

Could you please generate a patch using "git diff branch-3.2...HEAD > 
YARN-10248-branch-3.2.001.path", upload it here, and click "Submit Patch" to 
trigger the CI/CD?

> When allowed-gpu-devices is configured, excluded GPUs are still visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Assignee: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the 
> YARN cluster.
> According to the Hadoop documentation, I set the following configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the 
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found it is a bug.
> The problem is:
> when allowed-gpu-devices is specified, GpuDiscoverer populates the usable GPUs 
> from it;
> then, when some of the GPUs are assigned to a container, it sets the denied GPUs 
> for the container,
> but it never considers the GPUs excluded on the host.






[jira] [Assigned] (YARN-10248) When allowed-gpu-devices is configured, excluded GPUs are still visible to containers

2020-04-28 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang reassigned YARN-10248:
---

Assignee: zhao yufei

> When allowed-gpu-devices is configured, excluded GPUs are still visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Assignee: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the 
> YARN cluster.
> According to the Hadoop documentation, I set the following configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the 
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found it is a bug.
> The problem is:
> when allowed-gpu-devices is specified, GpuDiscoverer populates the usable GPUs 
> from it;
> then, when some of the GPUs are assigned to a container, it sets the denied GPUs 
> for the container,
> but it never considers the GPUs excluded on the host.






[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094960#comment-17094960
 ] 

Íñigo Goiri commented on YARN-6553:
---

+1 on  [^YARN-6553.004.patch].

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch, YARN-6553.004.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This JIRA proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.
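
For context, a rough sketch of what a MockRM-based test looks like. It assumes the usual MockRM/MockNM helpers from the resourcemanager test module; the exact helper methods (e.g. submitApp) vary between branches, so treat them as assumptions rather than the final test code.

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
import org.junit.Test;

// Rough sketch only: drive the real RM code paths through MockRM instead of
// stubbing responses in MockResourceManagerFacade.
public class TestWithMockRMSketch {

  @Test
  public void testSubmitThroughRealRM() throws Exception {
    MockRM rm = new MockRM(new YarnConfiguration());
    rm.start();
    try {
      // Register a NodeManager so the scheduler has capacity to work with.
      MockNM nm = rm.registerNode("127.0.0.1:1234", 8 * 1024);
      RMApp app = rm.submitApp(1024);                     // real submission path
      rm.waitForState(app.getApplicationId(), RMAppState.ACCEPTED);
      nm.nodeHeartbeat(true);                             // let allocation proceed
    } finally {
      rm.stop();
    }
  }
}
{code}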






[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094959#comment-17094959
 ] 

Íñigo Goiri commented on YARN-6973:
---

I haven't been tracking the tests in YARN lately, so I'm not sure which ones are 
flaky these days.
[~BilwaST], could you double-check TestCapacityOverTimePolicy?

> Adding RM Cluster Id in ApplicationReport
> -
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>







[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094949#comment-17094949
 ] 

Hadoop QA commented on YARN-6553:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
50s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
44s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25951/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-6553 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001515/YARN-6553.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux 921c5545dede 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 4202750040f |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 

[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094927#comment-17094927
 ] 

Hadoop QA commented on YARN-6973:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
6s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
17s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
18s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m  8s{color} | 
{color:black} 

[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Attachment: YARN-6553.004.patch

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch, YARN-6553.004.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This JIRA proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.






[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Attachment: (was: YARN-6553.004.patch)

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch, YARN-6553.004.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This JIRA proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.






[jira] [Commented] (YARN-9017) PlacementRule order is not maintained in CS

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094895#comment-17094895
 ] 

Hadoop QA commented on YARN-9017:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
42s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 
43s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25950/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-9017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001505/YARN-9017.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 106bad77d729 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ab364295597 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25950/testReport/ |
| Max. process+thread count | 887 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Updated] (YARN-10251) Show extended resources on legacy RM UI.

2020-04-28 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-10251:
--
Description: It would be great to update the legacy RM UI to include GPU 
resources in the overview and in the per-app sections.

> Show extended resources on legacy RM UI.
> 
>
> Key: YARN-10251
> URL: https://issues.apache.org/jira/browse/YARN-10251
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
> Legacy RM UI With All Resources Shown.png
>
>
> It would be great to update the legacy RM UI to include GPU resources in the 
> overview and in the per-app sections.
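
A sketch of the general idea, assuming the public Resource/ResourceInformation records API: iterate over every registered resource type instead of hard-coding memory and vcores. This is illustrative, not the actual RM web UI code.

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

// Illustrative sketch: build a display string covering all resource types,
// e.g. "memory-mb=4096, vcores=4, yarn.io/gpu=2".
public final class ExtendedResourceRenderingSketch {

  static String render(Resource resource) {
    StringBuilder sb = new StringBuilder();
    for (ResourceInformation info : resource.getResources()) {
      if (sb.length() > 0) {
        sb.append(", ");
      }
      sb.append(info.getName()).append('=').append(info.getValue());
    }
    return sb.toString();
  }

  private ExtendedResourceRenderingSketch() {
  }
}
{code}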






[jira] [Created] (YARN-10251) Show extended resources on legacy RM UI.

2020-04-28 Thread Eric Payne (Jira)
Eric Payne created YARN-10251:
-

 Summary: Show extended resources on legacy RM UI.
 Key: YARN-10251
 URL: https://issues.apache.org/jira/browse/YARN-10251
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
Legacy RM UI With All Resources Shown.png








[jira] [Updated] (YARN-10246) Enable Yarn Router to have a dedicated Zookeeper

2020-04-28 Thread D M Murali Krishna Reddy (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

D M Murali Krishna Reddy updated YARN-10246:

Attachment: YARN-10246.001.patch

> Enable Yarn Router to have a dedicated Zookeeper
> 
>
> Key: YARN-10246
> URL: https://issues.apache.org/jira/browse/YARN-10246
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: federation, router
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
> Attachments: YARN-10246.001.patch
>
>
> Currently, we have a single parameter, hadoop.zk.address, for the Router and the 
> ResourceManager. Because of this, the FederationStateStore and the RMStateStore 
> must be on the same ZooKeeper instance.
> With the above topology there can be significant load on ZooKeeper, since all 
> sub-cluster RMs write to a single ZooKeeper.
> So, if we introduce a new configuration such as hadoop.federation.zk.address, 
> we can host the FederationStateStore on a dedicated ZooKeeper.
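
A minimal sketch of how the proposed key could be resolved with a fallback to the existing hadoop.zk.address, so current deployments keep working. The helper below is illustrative and not part of the attached patch.

{code:java}
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch: prefer the proposed dedicated federation ZooKeeper
// address and fall back to the shared hadoop.zk.address when it is not set.
public final class FederationZkAddressSketch {

  static final String FEDERATION_ZK_ADDRESS = "hadoop.federation.zk.address"; // proposed key
  static final String SHARED_ZK_ADDRESS = "hadoop.zk.address";                // existing key

  static String resolveFederationZkAddress(Configuration conf) {
    String dedicated = conf.getTrimmed(FEDERATION_ZK_ADDRESS);
    if (dedicated != null && !dedicated.isEmpty()) {
      return dedicated;                          // FederationStateStore gets its own quorum
    }
    return conf.getTrimmed(SHARED_ZK_ADDRESS);   // current behaviour
  }

  private FederationZkAddressSketch() {
  }
}
{code}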






[jira] [Commented] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094870#comment-17094870
 ] 

Hadoop QA commented on YARN-10237:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 31m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
58s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
57s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 128 unchanged - 1 fixed = 131 total (was 129) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 35s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}200m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25947/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10237 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001488/YARN-10237-branch-3.3.003.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 8174b1e51d5a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.3 / e45faae |
| Default Java | 

[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094863#comment-17094863
 ] 

Hadoop QA commented on YARN-6553:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
50s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25949/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-6553 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001504/YARN-6553.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux 8579d4c4e09c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ab364295597 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 

[jira] [Updated] (YARN-9017) PlacementRule order is not maintained in CS

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-9017:

Attachment: YARN-9017.001.patch

> PlacementRule order is not maintained in CS
> ---
>
> Key: YARN-9017
> URL: https://issues.apache.org/jira/browse/YARN-9017
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin Chundatt
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9017.001.patch
>
>
> {{yarn.scheduler.queue-placement-rules}} doesn't work as expected in the Capacity 
> Scheduler.
> {quote}
> * **Queue Mapping Interface based on Default or User Defined Placement 
> Rules** - This feature allows users to map a job to a specific queue based on 
> some default placement rule. For instance based on user & group, or 
> application name. User can also define their own placement rule.
> {quote}
> As per the current code, UserGroupMappingPlacementRule is always added to the 
> placement rules in {{CapacityScheduler#updatePlacementRules}}:
> {code}
> // Initialize placement rules
> Collection<String> placementRuleStrs = conf.getStringCollection(
>     YarnConfiguration.QUEUE_PLACEMENT_RULES);
> List<PlacementRule> placementRules = new ArrayList<>();
> ...
> // add UserGroupMappingPlacementRule if absent
> distingushRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
> {code}
> The PlacementRule configuration order is not maintained.
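
A self-contained sketch of the ordering point, assuming rule names are read as strings: an insertion-ordered collection preserves the configured order while still appending the user-group rule only when it is absent. The names below are illustrative, not the CapacityScheduler code.

{code:java}
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative sketch: keep placement rules in configuration order and only
// append the user-group rule when it was not configured explicitly.
public final class PlacementRuleOrderSketch {

  static final String USER_GROUP_RULE = "user-group";

  static List<String> orderedRules(Collection<String> configuredRules) {
    LinkedHashSet<String> rules = new LinkedHashSet<>(configuredRules); // keeps order
    rules.add(USER_GROUP_RULE);        // no-op if already configured
    return List.copyOf(rules);
  }

  public static void main(String[] args) {
    // Configured order is preserved: [app-name, user-group]
    System.out.println(orderedRules(List.of("app-name", "user-group")));
    // user-group is appended only when missing: [app-name, user-group]
    System.out.println(orderedRules(List.of("app-name")));
  }
}
{code}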






[jira] [Commented] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094818#comment-17094818
 ] 

Íñigo Goiri commented on YARN-8942:
---

Thanks [~BilwaST] for the patch.
* I don't think the terminology should be "zero" but rather "No Active Subcluster"; 
you may even want to add more details.
* Make the throw a single line.
* In the test, instead of fail() and catch, use LambdaTestUtils#intercept() (a 
minimal sketch follows below).
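
A minimal sketch of the suggested test pattern with org.apache.hadoop.test.LambdaTestUtils#intercept; the method under test and the expected message are stand-ins for illustration.

{code:java}
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

// Minimal sketch: intercept() replaces the fail()/try-catch pattern by running
// the callable, asserting the exception type and message, and returning it.
public class InterceptPatternSketch {

  /** Stand-in for the policy call under test (illustrative only). */
  private static String selectHomeSubcluster() {
    throw new IllegalStateException("No Active Subcluster to select from");
  }

  @Test
  public void testNegativeWeightsSurfaceClearError() throws Exception {
    IllegalStateException ex = LambdaTestUtils.intercept(
        IllegalStateException.class, "No Active Subcluster",
        () -> selectHomeSubcluster());
    System.out.println("caught as expected: " + ex.getMessage());
  }
}
{code}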

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, an exception is thrown while running a job.
> Ideally it should handle negative priorities as well, according to the policy's 
> home sub-cluster selection process.
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
>

[jira] [Commented] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094808#comment-17094808
 ] 

Bilwa S T commented on YARN-8942:
-

cc [~elgoiri] 

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, an exception is thrown while running a job.
> Ideally it should handle negative priorities as well, according to the policy's 
> home sub-cluster selection process.
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
> ... 11 more
> {code}
>  
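
The stack trace above ends in "Missing SubCluster Id information", which suggests the selection never picks a sub-cluster when every weight is negative. Below is a self-contained sketch of a selection that tolerates negative weights by starting from negative infinity; it illustrates the idea only and is not the actual PriorityBasedRouterPolicy code.

{code:java}
import java.util.Map;

// Illustrative sketch: choose the sub-cluster with the highest weight even when
// all weights are negative, rather than returning no sub-cluster at all.
public final class PrioritySelectionSketch {

  static String selectHomeSubcluster(Map<String, Float> weights) {
    String best = null;
    float bestWeight = Float.NEGATIVE_INFINITY;   // not 0 or -1, so negatives can win
    for (Map.Entry<String, Float> e : weights.entrySet()) {
      if (e.getValue() > bestWeight) {
        bestWeight = e.getValue();
        best = e.getKey();
      }
    }
    if (best == null) {
      throw new IllegalStateException("No active sub-cluster to route to");
    }
    return best;
  }

  public static void main(String[] args) {
    // The least negative weight wins instead of failing the submission: sc2
    System.out.println(selectHomeSubcluster(Map.of("sc1", -2.0f, "sc2", -1.0f)));
  }
}
{code}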




[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094806#comment-17094806
 ] 

Bilwa S T commented on YARN-6553:
-

Hi [~elgoiri], I have fixed it in the latest patch. Please check.

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch, YARN-6553.004.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This JIRA proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.






[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Attachment: YARN-6553.004.patch

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch, YARN-6553.004.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This JIRA proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.






[jira] [Issue Comment Deleted] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-8942:

Comment: was deleted

(was: cc [~brahma] [~giovanni.fumarola])

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, it throws an exception while running a job.
> Ideally it should handle negative priorities as well, according to the home 
> sub-cluster selection process of the policy.
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
> ... 11 more
> {code}
>  
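
For illustration only, a minimal sketch of the selection rule the description asks for, with plain strings standing in for the federation record types (this is not the actual PriorityBasedRouterPolicy code):

{code:java}
import java.util.Map;
import java.util.Set;

final class HomeSubClusterChooser {
  private HomeSubClusterChooser() {
  }

  static String chooseHighestWeight(Map<String, Float> weights,
      Set<String> activeSubClusters) {
    String selected = null;
    float best = Float.NEGATIVE_INFINITY;   // not 0: all-negative weights are legal
    for (Map.Entry<String, Float> entry : weights.entrySet()) {
      if (!activeSubClusters.contains(entry.getKey()) || entry.getValue() == null) {
        continue;
      }
      if (entry.getValue() > best) {
        best = entry.getValue();
        selected = entry.getKey();
      }
    }
    if (selected == null) {
      // Surfacing a policy-level error here is clearer than the downstream
      // "Missing SubCluster Id information" failure seen in the stack trace above.
      throw new IllegalStateException("No active sub-cluster available");
    }
    return selected;
  }
}
{code}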



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094795#comment-17094795
 ] 

Bilwa S T commented on YARN-6973:
-

Thanks [~elgoiri] for reviewing. I have fixed the checkstyle issues and uploaded 
the .003 patch.

> Adding RM Cluster Id in ApplicationReport
> -
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6973:

Attachment: YARN-6973.003.patch

> Adding RM Cluster Id in ApplicationReport
> -
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch, 
> YARN-6973.003.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094765#comment-17094765
 ] 

Íñigo Goiri commented on YARN-6553:
---

Let's fix the conf checkstyle and leave the other one.

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This jira proposes replacing it with 
> {{MockRM}} as is done in the majority of the tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6973) Adding RM Cluster Id in ApplicationReport

2020-04-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094763#comment-17094763
 ] 

Íñigo Goiri commented on YARN-6973:
---

Other than the checkstyles left, this is good to go.

> Adding RM Cluster Id in ApplicationReport
> -
>
> Key: YARN-6973
> URL: https://issues.apache.org/jira/browse/YARN-6973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-6973.001.patch, YARN-6973.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9460) QueueACLsManager and ReservationsACLManager should not use instanceof checks

2020-04-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094724#comment-17094724
 ] 

Bilwa S T commented on YARN-9460:
-

Hi [~snemeth] 

I had one small doubt: should we make these configurable, or decide based on the 
scheduler type?

> QueueACLsManager and ReservationsACLManager should not use instanceof checks
> 
>
> Key: YARN-9460
> URL: https://issues.apache.org/jira/browse/YARN-9460
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Bilwa S T
>Priority: Major
>
> QueueACLsManager and ReservationsACLManager should not use instanceof checks 
> for the scheduler type.
> Rather, we should abstract this into two classes: Capacity and Fair variants 
> of these ACL classes.
> QueueACLsManager and ReservationsACLManager could be abstract classes, but 
> the implementation is left to whoever works on this jira.
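
For illustration, a minimal sketch of one possible shape (every class name below except the referenced Hadoop types is hypothetical). Deriving the variant from the scheduler type, as in the factory here, needs no new configuration knob, though a config switch would also be possible:

{code:java}
import java.util.List;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.QueueACL;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler;

public abstract class AbstractQueueACLsManager {
  protected final ResourceScheduler scheduler;

  protected AbstractQueueACLsManager(ResourceScheduler scheduler) {
    this.scheduler = scheduler;
  }

  public abstract boolean checkAccess(UserGroupInformation callerUGI,
      QueueACL acl, RMApp app, String remoteAddress,
      List<String> forwardedAddresses);

  // The single remaining type check lives in this factory; callers dispatch
  // polymorphically instead of doing instanceof checks of their own.
  public static AbstractQueueACLsManager create(ResourceScheduler scheduler) {
    if (scheduler instanceof CapacityScheduler) {
      return new CapacityVariant(scheduler);
    } else if (scheduler instanceof FairScheduler) {
      return new FairVariant(scheduler);
    }
    return new GenericVariant(scheduler);
  }

  /** Capacity Scheduler flavour: room for CS-only checks (e.g. queue mappings). */
  static class CapacityVariant extends AbstractQueueACLsManager {
    CapacityVariant(ResourceScheduler s) { super(s); }
    @Override
    public boolean checkAccess(UserGroupInformation ugi, QueueACL acl,
        RMApp app, String remoteAddress, List<String> forwardedAddresses) {
      return scheduler.checkAccess(ugi, acl, app.getQueue());
    }
  }

  /** Fair Scheduler flavour: room for FS-only queue placement handling. */
  static class FairVariant extends AbstractQueueACLsManager {
    FairVariant(ResourceScheduler s) { super(s); }
    @Override
    public boolean checkAccess(UserGroupInformation ugi, QueueACL acl,
        RMApp app, String remoteAddress, List<String> forwardedAddresses) {
      return scheduler.checkAccess(ugi, acl, app.getQueue());
    }
  }

  /** Fallback for any other scheduler implementation. */
  static class GenericVariant extends AbstractQueueACLsManager {
    GenericVariant(ResourceScheduler s) { super(s); }
    @Override
    public boolean checkAccess(UserGroupInformation ugi, QueueACL acl,
        RMApp app, String remoteAddress, List<String> forwardedAddresses) {
      return scheduler.checkAccess(ugi, acl, app.getQueue());
    }
  }
}
{code}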



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-28 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-10237:
-
Attachment: YARN-10237-branch-3.3.003.patch

> Add isAbsoluteResource config for queue in scheduler response
> -
>
> Key: YARN-10237
> URL: https://issues.apache.org/jira/browse/YARN-10237
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: YARN-10237-001.patch, YARN-10237-002.patch, 
> YARN-10237-003.patch, YARN-10237-branch-3.2.001.patch, 
> YARN-10237-branch-3.3.001.patch, YARN-10237-branch-3.3.002.patch, 
> YARN-10237-branch-3.3.003.patch
>
>
> Internal config management tools have difficulty managing the capacity 
> scheduler queue configs if the user toggles between Absolute Resource and 
> Percentage mode or vice versa.
> This jira is to expose whether a queue is configured in absolute resource mode 
> as part of the scheduler response.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10250) Container Relaunch - find: File system loop detected

2020-04-28 Thread Matthew Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094710#comment-17094710
 ] 

Matthew Sharp commented on YARN-10250:
--

I can submit a patch for this shortly based on the idea above.  I am open to 
other suggestions as well.

> Container Relaunch - find: File system loop detected
> 
>
> Key: YARN-10250
> URL: https://issues.apache.org/jira/browse/YARN-10250
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Matthew Sharp
>Priority: Major
>
> The Hive LLAP YARN service tries to relaunch after a container failure, and 
> when it retries on the same node we see it fail with:
> {code:java}
> find: File system loop detected; ‘./lib/llap-27Apr2020.tar.gz’ is part of the 
> same file system loop as ‘./lib’. {code}
>  
> YARN-8667 attempted to clean up the prior symlinks before relaunching, but in 
> this case the symlink still exists, since the symlinks are recreated right 
> before the output to directory.info for logging.
>  
> The following line appears to be the culprit:  
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L1346]
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10250) Container Relaunch - find: File system loop detected

2020-04-28 Thread Matthew Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094707#comment-17094707
 ] 

Matthew Sharp commented on YARN-10250:
--

The launch-container script will fail on any non-zero return code. Since that 
output is debugging information only, one quick approach is to force those 
commands to always return true so that the container relaunch is not impacted.
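
A minimal sketch of that idea, not the actual ContainerLaunch code (the helper below is hypothetical): the debug-listing commands written into the launch script get a no-op fallback appended, so their exit code can never fail the relaunch.

{code:java}
final class DebugCommands {
  private DebugCommands() {
  }

  /** Wrap a purely informational shell command so that its failure is ignored. */
  static String ignoreFailure(String debugCommand) {
    // "|| :" is the POSIX no-op fallback; the overall line always exits 0,
    // even when find reports "File system loop detected".
    return debugCommand + " || :";
  }
}

// e.g. ignoreFailure("find -L . -maxdepth 5 -ls 1>>\"" + dirInfoPath + "\"")
{code}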

> Container Relaunch - find: File system loop detected
> 
>
> Key: YARN-10250
> URL: https://issues.apache.org/jira/browse/YARN-10250
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Matthew Sharp
>Priority: Major
>
> The Hive LLAP YARN service tries to relaunch after a container failure, and 
> when it retries on the same node we see it fail with:
> {code:java}
> find: File system loop detected; ‘./lib/llap-27Apr2020.tar.gz’ is part of the 
> same file system loop as ‘./lib’. {code}
>  
> YARN-8667 attempted to clean up the prior symlinks before relaunching, but in 
> this case the symlink still exists, since the symlinks are recreated right 
> before the output to directory.info for logging.
>  
> The following line appears to be the culprit:  
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L1346]
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10250) Container Relaunch - find: File system loop detected

2020-04-28 Thread Matthew Sharp (Jira)
Matthew Sharp created YARN-10250:


 Summary: Container Relaunch - find: File system loop detected
 Key: YARN-10250
 URL: https://issues.apache.org/jira/browse/YARN-10250
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Matthew Sharp


The Hive LLAP YARN service tries to relaunch after a container failure, and 
when it retries on the same node we see it fail with:
{code:java}
find: File system loop detected; ‘./lib/llap-27Apr2020.tar.gz’ is part of the 
same file system loop as ‘./lib’. {code}
 

YARN-8667 attempted to clean up the prior symlinks before relaunching, but in 
this case the symlink still exists, since the symlinks are recreated right 
before the output to directory.info for logging.

 

The following line appears to be the culprit:  
[https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L1346]

 

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-28 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094705#comment-17094705
 ] 

Prabhu Joseph commented on YARN-10237:
--

[~snemeth] I have added an untracked file by mistake, will fix it. Thanks.

> Add isAbsoluteResource config for queue in scheduler response
> -
>
> Key: YARN-10237
> URL: https://issues.apache.org/jira/browse/YARN-10237
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: YARN-10237-001.patch, YARN-10237-002.patch, 
> YARN-10237-003.patch, YARN-10237-branch-3.2.001.patch, 
> YARN-10237-branch-3.3.001.patch, YARN-10237-branch-3.3.002.patch
>
>
> Internal config management tools have difficulty managing the capacity 
> scheduler queue configs if the user toggles between Absolute Resource and 
> Percentage mode or vice versa.
> This jira is to expose whether a queue is configured in absolute resource mode 
> as part of the scheduler response.
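
For illustration, a hedged sketch of how such a flag could surface (the DAO class and field below are assumptions, not the committed format; the capacity values in the comment are only examples of the two configuration forms):

{code:java}
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical response DAO: tells config-management tools whether a queue's
// capacity is currently in absolute-resource form (e.g. "[memory=10240,vcores=10]")
// or in percentage form (e.g. "50").
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class QueueCapacityModeInfo {

  private String queuePath;
  private boolean isAbsoluteResource;

  public QueueCapacityModeInfo() {
    // JAXB requires a no-arg constructor.
  }

  public QueueCapacityModeInfo(String queuePath, boolean isAbsoluteResource) {
    this.queuePath = queuePath;
    this.isAbsoluteResource = isAbsoluteResource;
  }

  public String getQueuePath() {
    return queuePath;
  }

  public boolean isAbsoluteResource() {
    return isAbsoluteResource;
  }
}
{code}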



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10194) YARN RMWebServices /scheduler-conf/validate leaks ZK Connections

2020-04-28 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094702#comment-17094702
 ] 

Prabhu Joseph commented on YARN-10194:
--

Thanks [~snemeth].

> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections
> 
>
> Key: YARN-10194
> URL: https://issues.apache.org/jira/browse/YARN-10194
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Akhil PB
>Assignee: Prabhu Joseph
>Priority: Blocker
> Fix For: 3.3.0, 3.2.2, 3.4.0
>
> Attachments: YARN-10194-001.patch, YARN-10194-002.patch, 
> YARN-10194-003.patch, YARN-10194-004.patch, YARN-10194-005.patch, 
> YARN-10194-branch-3.2.001.patch
>
>
> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections. The validation 
> API creates a new CapacityScheduler and fails to close it after the validation. 
> Every CapacityScheduler#init opens MutableCSConfigurationProvider which opens 
> ZKConfigurationStore and creates a ZK Connection. 
> *ZK LOGS*
> {code}
> -03-12 16:45:51,881 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: [2 
> times] Error accepting new connection: Too many connections from 
> /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,449 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,710 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,876 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,068 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,391 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,008 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,287 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,483 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> {code}
> There is also another bug in ZKConfigurationStore, which does not handle 
> close() of the ZKCuratorManager.
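
A minimal sketch of the required cleanup, assuming a temporary CapacityScheduler is built just for validation (names are illustrative, not the committed patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;

final class SchedulerConfValidationSketch {
  private SchedulerConfValidationSketch() {
  }

  // Method name is illustrative; the point is the try/finally around the
  // temporary scheduler used only for validating the proposed configuration.
  static void validateWithTemporaryScheduler(Configuration conf, RMContext rmContext) {
    CapacityScheduler validationCs = new CapacityScheduler();
    validationCs.setConf(conf);
    validationCs.setRMContext(rmContext);
    try {
      validationCs.init(conf);
      // ... run the scheduler-conf validation against validationCs ...
    } finally {
      // Without this, every /scheduler-conf/validate call keeps the ZooKeeper
      // connection that ZKConfigurationStore opened via
      // MutableCSConfigurationProvider, until the ZK connection limit is hit.
      validationCs.stop();
    }
  }
}
{code}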



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094701#comment-17094701
 ] 

Hudson commented on YARN-10215:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18194 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18194/])
YARN-10215. Endpoint for obtaining direct URL for the logs. Contributed 
(snemeth: rev ab3642955971dec1ce285f39cf0f02e6cc64b4b2)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-jhs-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-jhs-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-jhs-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/log-adapter-helper.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-jhs-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/YarnWebServiceParams.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-jhs-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-redirect-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogServlet.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-jhs-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-jhs-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-redirect-log.js
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-redirect-log.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-jhs-redirect-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/logs.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-redirect-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-jhs-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-jhs-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-jhs-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app-jhs-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-log.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-log.js


> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we got the list of containers and their log files.
> 

[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10215:
--
Fix Version/s: (was: 3.3.1)

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS, fails, and falls back to JHS.
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (the node on which the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when this redirect is done. The browser then denies access to the 
> NodeManager's web UI due to the CORS header set up for the NM, since the 
> Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach, we can expose another endpoint which only 
> returns the URL of the NodeManager, which we should call directly from the 
> UIv2 in order to receive the log. This adds a bit of complexity, but will 
> enable users to keep the CORS protected setup.
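
For illustration, a hedged sketch of such an endpoint (the path, class and helper below are assumptions; the real change lives in the LogServlet/LogWebService classes listed in the Hudson commit message earlier in this thread): instead of an HTTP redirect that the browser follows without an Origin header, the endpoint returns the NodeManager log URL as data, and the UIv2 then requests it directly.

{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/ws/v1/history/containers")
public class LogLocationResource {

  @GET
  @Path("/{containerid}/logs/{filename}/location")
  @Produces(MediaType.TEXT_PLAIN)
  public Response getLogFileLocation(
      @PathParam("containerid") String containerId,
      @PathParam("filename") String filename) {
    // lookupNodeHttpAddress(...) is an assumed helper that resolves the node
    // currently hosting the running container (e.g. from the RM app report).
    String nmAddress = lookupNodeHttpAddress(containerId);
    String url = "http://" + nmAddress + "/ws/v1/node/containers/"
        + containerId + "/logs/" + filename;
    return Response.ok(url).build();
  }

  private String lookupNodeHttpAddress(String containerId) {
    // Placeholder for the real lookup; see the actual patch for how the
    // LogServlet/LogWebService resolve the node address.
    throw new UnsupportedOperationException("illustrative sketch only");
  }
}
{code}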



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10215:
--
Fix Version/s: 3.3.1

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Fix For: 3.3.0, 3.4.0, 3.3.1
>
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS, fails, and falls back to JHS.
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (the node on which the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when this redirect is done. The browser then denies access to the 
> NodeManager's web UI due to the CORS header set up for the NM, since the 
> Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach, we can expose another endpoint which only 
> returns the URL of the NodeManager, which we should call directly from the 
> UIv2 in order to receive the log. This adds a bit of complexity, but will 
> enable users to keep the CORS protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094687#comment-17094687
 ] 

Szilard Nemeth commented on YARN-10215:
---

Also pushed the commit to branch-3.3 as it had no conflicts while backporting 
it from trunk.
Resolving jira.

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS, fails, and falls back to JHS.
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (the node on which the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when this redirect is done. The browser then denies access to the 
> NodeManager's web UI due to the CORS header set up for the NM, since the 
> Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach, we can expose another endpoint which only 
> returns the URL of the NodeManager, which we should call directly from the 
> UIv2 in order to receive the log. This adds a bit of complexity, but will 
> enable users to keep the CORS protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10215:
--
Fix Version/s: 3.3.0

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS, fails, and falls back to JHS.
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (the node on which the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when this redirect is done. The browser then denies access to the 
> NodeManager's web UI due to the CORS header set up for the NM, since the 
> Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach, we can expose another endpoint which only 
> returns the URL of the NodeManager, which we should call directly from the 
> UIv2 in order to receive the log. This adds a bit of complexity, but will 
> enable users to keep the CORS protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10215:
--
Fix Version/s: 3.4.0

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS, fails, and falls back to JHS.
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (the node on which the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when this redirect is done. The browser then denies access to the 
> NodeManager's web UI due to the CORS header set up for the NM, since the 
> Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach, we can expose another endpoint which only 
> returns the URL of the NodeManager, which we should call directly from the 
> UIv2 in order to receive the log. This adds a bit of complexity, but will 
> enable users to keep the CORS protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094686#comment-17094686
 ] 

Szilard Nemeth commented on YARN-10215:
---

Thanks [~gandras],
Yesterday, we had a discussion with [~gandras] and [~adam.antal] about this 
patch and due to the nature of this change, Andras also demoed the changes on a 
live cluster.
Latest patch LGTM, committed to trunk.
Thanks [~adam.antal] for the reviews.

Trying to backport to branch-3.3

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows this call chain:
> - It tries to access ATS, fails, and falls back to JHS.
> - From the RM the browser receives basic app info, so we know that the 
> application is running.
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (the node on which the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when this redirect is done. The browser then denies access to the 
> NodeManager's web UI due to the CORS header set up for the NM, since the 
> Origin is null in the redirect request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach, we can expose another endpoint which only 
> returns the URL of the NodeManager, which we should call directly from the 
> UIv2 in order to receive the log. This adds a bit of complexity, but will 
> enable users to keep the CORS protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-04-28 Thread Benjamin Teke (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Teke updated YARN-10249:
-
Description: Various tests are failing on branch-3.2. Some examples can be 
found in: YARN-10003, YARN-10002, YARN-10237. The seemingly common thing is that 
all of the failing tests are RM/Capacity Scheduler related, and the failures 
are flaky.  (was: Various tests are failing on branch-3.2. Some examples can be 
found in: YARN-10003, YARN-10002, YARN-10237. The common thing is RM and the 
Capacity Scheduler.)

> Various ResourceManager tests are failing on branch-3.2
> ---
>
> Key: YARN-10249
> URL: https://issues.apache.org/jira/browse/YARN-10249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
>
> Various tests are failing on branch-3.2. Some examples can be found in: 
> YARN-10003, YARN-10002, YARN-10237. The seemingly common thing is that all of 
> the failing tests are RM/Capacity Scheduler related, and the failures are 
> flaky.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-04-28 Thread Benjamin Teke (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Teke updated YARN-10249:
-
Description: Various tests are failing on branch-3.2. Some examples can be 
found in: YARN-10003, YARN-10002, YARN-10237. The common thing is RM and the 
Capacity Scheduler.  (was: Various tests are failing on branch-3.2. Some 
examples can be found in: YARN-10003, YARN-10002, YARN-10237.)

> Various ResourceManager tests are failing on branch-3.2
> ---
>
> Key: YARN-10249
> URL: https://issues.apache.org/jira/browse/YARN-10249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.2.0
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
>
> Various tests are failing on branch-3.2. Some examples can be found in: 
> YARN-10003, YARN-10002, YARN-10237. The common thing is RM and the Capacity 
> Scheduler.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-04-28 Thread Benjamin Teke (Jira)
Benjamin Teke created YARN-10249:


 Summary: Various ResourceManager tests are failing on branch-3.2
 Key: YARN-10249
 URL: https://issues.apache.org/jira/browse/YARN-10249
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.2.0
Reporter: Benjamin Teke
Assignee: Benjamin Teke


Various tests are failing on branch-3.2. Some examples can be found in: 
YARN-10003, YARN-10002, YARN-10237.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10237) Add isAbsoluteResource config for queue in scheduler response

2020-04-28 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094635#comment-17094635
 ] 

Szilard Nemeth commented on YARN-10237:
---

Hi [~prabhujoseph],
I can see an additional added file in the branch-3.3 patch that is not present 
in the trunk patch: 
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.LeafQueueTemplateInfo
Is this intentional?
If yes, can you please elaborate on why you need it? 
Thanks a lot.

> Add isAbsoluteResource config for queue in scheduler response
> -
>
> Key: YARN-10237
> URL: https://issues.apache.org/jira/browse/YARN-10237
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: YARN-10237-001.patch, YARN-10237-002.patch, 
> YARN-10237-003.patch, YARN-10237-branch-3.2.001.patch, 
> YARN-10237-branch-3.3.001.patch, YARN-10237-branch-3.3.002.patch
>
>
> Internal Config Management tools have difficulty in managing the capacity 
> scheduler queue configs if user toggles between Absolute Resource to 
> Percentage or vice versa.
> This jira is to expose if a queue is configured in absolute resource or not 
> as part of scheduler response.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10194) YARN RMWebServices /scheduler-conf/validate leaks ZK Connections

2020-04-28 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10194:
--
Fix Version/s: 3.2.2

> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections
> 
>
> Key: YARN-10194
> URL: https://issues.apache.org/jira/browse/YARN-10194
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Akhil PB
>Assignee: Prabhu Joseph
>Priority: Blocker
> Fix For: 3.3.0, 3.2.2, 3.4.0
>
> Attachments: YARN-10194-001.patch, YARN-10194-002.patch, 
> YARN-10194-003.patch, YARN-10194-004.patch, YARN-10194-005.patch, 
> YARN-10194-branch-3.2.001.patch
>
>
> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections. The validation 
> API creates a new CapacityScheduler and fails to close it after the validation. 
> Every CapacityScheduler#init opens MutableCSConfigurationProvider which opens 
> ZKConfigurationStore and creates a ZK Connection. 
> *ZK LOGS*
> {code}
> -03-12 16:45:51,881 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: [2 
> times] Error accepting new connection: Too many connections from 
> /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,449 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,710 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,876 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,068 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,391 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,008 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,287 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,483 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> {code}
> There is also another bug in ZKConfigurationStore, which does not handle 
> close() of the ZKCuratorManager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10194) YARN RMWebServices /scheduler-conf/validate leaks ZK Connections

2020-04-28 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094626#comment-17094626
 ] 

Szilard Nemeth commented on YARN-10194:
---

Hi [~prabhujoseph],
Thanks, branch-3.2 patch LGTM so committed it to that branch.
Thanks [~aajisaka] for the 3.3.0 cherry-pick.
Resolving jira.

> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections
> 
>
> Key: YARN-10194
> URL: https://issues.apache.org/jira/browse/YARN-10194
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Akhil PB
>Assignee: Prabhu Joseph
>Priority: Blocker
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10194-001.patch, YARN-10194-002.patch, 
> YARN-10194-003.patch, YARN-10194-004.patch, YARN-10194-005.patch, 
> YARN-10194-branch-3.2.001.patch
>
>
> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections. The validation 
> API creates a new CapacityScheduler and fails to close it after the validation. 
> Every CapacityScheduler#init opens MutableCSConfigurationProvider which opens 
> ZKConfigurationStore and creates a ZK Connection. 
> *ZK LOGS*
> {code}
> -03-12 16:45:51,881 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: [2 
> times] Error accepting new connection: Too many connections from 
> /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,449 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,710 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,876 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,068 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,391 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,008 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,287 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,483 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> {code}
> There is also another bug in ZKConfigurationStore, which does not handle 
> close() of the ZKCuratorManager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10194) YARN RMWebServices /scheduler-conf/validate leaks ZK Connections

2020-04-28 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10194:
--
Fix Version/s: 3.4.0

> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections
> 
>
> Key: YARN-10194
> URL: https://issues.apache.org/jira/browse/YARN-10194
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Akhil PB
>Assignee: Prabhu Joseph
>Priority: Blocker
> Fix For: 3.3.0, 3.4.0
>
> Attachments: YARN-10194-001.patch, YARN-10194-002.patch, 
> YARN-10194-003.patch, YARN-10194-004.patch, YARN-10194-005.patch, 
> YARN-10194-branch-3.2.001.patch
>
>
> YARN RMWebServices /scheduler-conf/validate leaks ZK Connections. The validation 
> API creates a new CapacityScheduler and fails to close it after the validation. 
> Every CapacityScheduler#init opens MutableCSConfigurationProvider which opens 
> ZKConfigurationStore and creates a ZK Connection. 
> *ZK LOGS*
> {code}
> -03-12 16:45:51,881 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: [2 
> times] Error accepting new connection: Too many connections from 
> /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,449 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,710 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:52,876 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,068 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:53,391 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [2 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,008 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,287 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: Error accepting new 
> connection: Too many connections from /172.27.99.64 - max is 60
> 2020-03-12 16:45:54,483 WARN 
> org.apache.zookeeper.server.NIOServerCnxnFactory: [4 times] Error accepting 
> new connection: Too many connections from /172.27.99.64 - max is 60
> {code}
> There is also another bug in ZKConfigurationStore, which does not handle 
> close() of the ZKCuratorManager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8942) PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value

2020-04-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094584#comment-17094584
 ] 

Bilwa S T commented on YARN-8942:
-

cc [~brahma] [~giovanni.fumarola]

> PriorityBasedRouterPolicy throws exception if all sub-cluster weights have 
> negative value
> -
>
> Key: YARN-8942
> URL: https://issues.apache.org/jira/browse/YARN-8942
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akshay Agarwal
>Assignee: Bilwa S T
>Priority: Minor
> Attachments: YARN-8942.001.patch
>
>
> In *PriorityBasedRouterPolicy*, if all sub-cluster weights are *set to 
> negative values*, it throws an exception while running a job.
> Ideally it should handle negative priorities as well, according to the home 
> sub-cluster selection process of the policy.
>  *Exception Details:*
> {code:java}
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Unable 
> to insert the ApplicationId application_1540356760422_0015 into the 
> FederationStateStore
> at 
> org.apache.hadoop.yarn.server.router.RouterServerUtil.logAndThrowException(RouterServerUtil.java:56)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:418)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.submitApplication(RouterClientRMService.java:218)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:282)
> at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:579)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: 
> org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreInvalidInputException:
>  Missing SubCluster Id information. Please try again by specifying Subcluster 
> Id information.
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationMembershipStateStoreInputValidator.checkSubClusterId(FederationMembershipStateStoreInputValidator.java:247)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.checkApplicationHomeSubCluster(FederationApplicationHomeSubClusterStoreInputValidator.java:160)
> at 
> org.apache.hadoop.yarn.server.federation.store.utils.FederationApplicationHomeSubClusterStoreInputValidator.validate(FederationApplicationHomeSubClusterStoreInputValidator.java:65)
> at 
> org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore.addApplicationHomeSubCluster(ZookeeperFederationStateStore.java:159)
> at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy84.addApplicationHomeSubCluster(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade.addApplicationHomeSubCluster(FederationStateStoreFacade.java:402)
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.submitApplication(FederationClientInterceptor.java:413)
> ... 11 more
> {code}
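To make the reported failure mode concrete, here is a simplified, self-contained sketch rather than the actual PriorityBasedRouterPolicy code; it assumes that sub-clusters with negative weights are skipped during selection, so an all-negative weight map selects no home sub-cluster and the submission later fails with the "Missing SubCluster Id" validation error shown above.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of priority-based home sub-cluster selection.
// Assumption (not taken from the Hadoop sources): negative weights are skipped,
// so a weight map that is entirely negative yields no selection at all.
public class PrioritySelectionSketch {

  static String selectHomeSubCluster(Map<String, Integer> weights) {
    String best = null;
    int bestWeight = Integer.MIN_VALUE;
    for (Map.Entry<String, Integer> e : weights.entrySet()) {
      if (e.getValue() < 0) {
        continue;  // negative weight: this sub-cluster is ignored
      }
      if (e.getValue() > bestWeight) {
        best = e.getKey();
        bestWeight = e.getValue();
      }
    }
    return best;  // null when every weight was negative
  }

  public static void main(String[] args) {
    Map<String, Integer> weights = new HashMap<>();
    weights.put("SC-1", -1);
    weights.put("SC-2", -5);
    // Prints null: no home sub-cluster is chosen, which downstream surfaces as
    // "Missing SubCluster Id information" when the application is registered.
    System.out.println(selectHomeSubCluster(weights));
  }
}
{code}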
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org

[jira] [Commented] (YARN-10247) Application priority queue ACLs are not respected

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094506#comment-17094506
 ] 

Hadoop QA commented on YARN-10247:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 95 unchanged - 0 fixed = 96 total (was 95) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 95m 
21s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25946/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10247 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001446/YARN-10247.0001.patch 
|
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux dae1af387d59 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5e0eda5d5f6 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | 

[jira] [Updated] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhao yufei updated YARN-10248:
--
Description: 
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:

{code:java}
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
{code}



Then I ran the following command to test:

{code:java}
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves
{code}


I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.




  was:
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:

{code:java}
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
{code}



Then I ran the following command to test:

{code:java}
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves
{code}


I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.




> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the YARN
> cluster.
> According to the Hadoop documentation, I set these configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found that this is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
> then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
> that container, but it never considers the excluded GPUs of the host.
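A minimal sketch of the idea behind a fix, using hypothetical names rather than the actual NodeManager GPU classes: the denied list handed to a container has to be computed from every GPU discovered on the host, not only from the allowed (usable) set, so that excluded devices stay hidden as well.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch only; not the actual GpuDiscoverer/allocator code.
public class GpuDenyListSketch {

  // allHostGpus: minor numbers of every GPU physically on the host.
  // assignedGpus: minor numbers assigned to this container (a subset of the allowed set).
  // Returns the minor numbers that should be denied (hidden) inside the container.
  static List<Integer> deniedGpusFor(List<Integer> allHostGpus, Set<Integer> assignedGpus) {
    List<Integer> denied = new ArrayList<>();
    for (Integer minor : allHostGpus) {
      if (!assignedGpus.contains(minor)) {
        denied.add(minor);  // excluded host GPUs land here too, e.g. minor number 0
      }
    }
    return denied;
  }

  public static void main(String[] args) {
    // Host has GPUs 0 and 1; only GPU 1 is allowed and gets assigned to the container.
    List<Integer> allHostGpus = Arrays.asList(0, 1);
    Set<Integer> assigned = new HashSet<>(Arrays.asList(1));
    // Prints [0]: GPU 0 ends up on the deny list, so nvidia-smi inside the
    // container would no longer show it.
    System.out.println(deniedGpusFor(allHostGpus, assigned));
  }
}
{code}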



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094493#comment-17094493
 ] 

zhao yufei commented on YARN-10248:
---

https://github.com/apache/hadoop/pull/1985

> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the YARN
> cluster.
> According to the Hadoop documentation, I set these configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found that this is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
> then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
> that container, but it never considers the excluded GPUs of the host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhao yufei updated YARN-10248:
--
Description: 
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:

{code:java}
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
{code}



Then I ran the following command to test:

{code:java}
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves
{code}


I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.



  was:
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:

{code:java}
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
{code}



Then I ran the following command to test:
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves

I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.




> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the YARN
> cluster.
> According to the Hadoop documentation, I set these configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> {code:java}
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> {code}
> I expected the GPU with minor number 0 not to be visible to the container, but in the
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found that this is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
> then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
> that container, but it never considers the excluded GPUs of the host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhao yufei updated YARN-10248:
--
Description: 
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:

{code:java}
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
{code}



Then I ran the following command to test:
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves

I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.



  was:
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:
{{
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
}}


Then I ran the following command to test:
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves

I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.




> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the YARN
> cluster.
> According to the Hadoop documentation, I set these configs:
> {code:java}
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> {code}
> Then I ran the following command to test:
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> I expected the GPU with minor number 0 not to be visible to the container, but in the
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found that this is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
> then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
> that container, but it never considers the excluded GPUs of the host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhao yufei updated YARN-10248:
--
Description: 
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:
{{
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>
}}


Then I ran the following command to test:
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves

I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.



  was:
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>


Then I ran the following command to test:
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves

I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.




> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the YARN
> cluster.
> According to the Hadoop documentation, I set these configs:
> {{
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> }}
> Then I ran the following command to test:
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> I expected the GPU with minor number 0 not to be visible to the container, but in the
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found that this is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
> then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
> that container, but it never considers the excluded GPUs of the host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhao yufei updated YARN-10248:
--
Description: 
I have a server with two GPUs, and I want to use only one of them within the YARN
cluster.
According to the Hadoop documentation, I set these configs:
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
  <value>0:1</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
  <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
</property>


Then I ran the following command to test:
yarn jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
 -jar 
./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
-shell_command ' nvidia-smi & sleep 3  ' \
 -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
 -num_containers 1 -queue yufei -node_label_expression slaves

I expected the GPU with minor number 0 not to be visible to the container, but in the
launched container, nvidia-smi printed information for both GPUs.

I checked the related source code and found that this is a bug.
The problem is:
when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
that container, but it never considers the excluded GPUs of the host.



> when config allowed-gpu-devices , excluded GPUs still be visible to containers
> --
>
> Key: YARN-10248
> URL: https://issues.apache.org/jira/browse/YARN-10248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.1
>Reporter: zhao yufei
>Priority: Minor
>  Labels: pull-request-available
>
> I have a server with two GPUs, and I want to use only one of them within the YARN
> cluster.
> According to the Hadoop documentation, I set these configs:
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
>   <value>0:1</value>
> </property>
> <property>
>   <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
>   <value>/etc/alternatives/x86_64-linux-gnu_nvidia_smi</value>
> </property>
> Then I ran the following command to test:
> yarn jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
>  -jar 
> ./share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar  
> -shell_command ' nvidia-smi & sleep 3  ' \
>  -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1  \
>  -num_containers 1 -queue yufei -node_label_expression slaves
> I expected the GPU with minor number 0 not to be visible to the container, but in the
> launched container, nvidia-smi printed information for both GPUs.
> I checked the related source code and found that this is a bug.
> The problem is:
> when you specify allowed-gpu-devices, GpuDiscoverer populates the usable GPUs from it;
> then, when some of those GPUs are assigned to a container, it sets the denied GPUs for
> that container, but it never considers the excluded GPUs of the host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10248) when config allowed-gpu-devices , excluded GPUs still be visible to containers

2020-04-28 Thread zhao yufei (Jira)
zhao yufei created YARN-10248:
-

 Summary: when config allowed-gpu-devices , excluded GPUs still be 
visible to containers
 Key: YARN-10248
 URL: https://issues.apache.org/jira/browse/YARN-10248
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.2.1
Reporter: zhao yufei






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10108) FS-CS converter: nestedUserQueue with default rule results in invalid queue mapping

2020-04-28 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko reassigned YARN-10108:
---

Assignee: Gergely Pollak  (was: Peter Bacsko)

> FS-CS converter: nestedUserQueue with default rule results in invalid queue 
> mapping
> ---
>
> Key: YARN-10108
> URL: https://issues.apache.org/jira/browse/YARN-10108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Gergely Pollak
>Priority: Major
>  Labels: fs2cs
>
> FS Queue Placement Policy
> {code:java}
> 
> 
> 
> 
> 
>  {code}
> gets mapped to an invalid CS queue mapping "u:%user:root.users.%user"
> The RM fails to start with the above queue mapping in CS:
> {code:java}
> 2020-01-28 00:19:12,889 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: mapping 
> contains invalid or non-leaf queue [%user] and invalid parent queue 
> [root.users]
>   at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:173)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:829)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1247)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:324)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1534)
> Caused by: java.io.IOException: mapping contains invalid or non-leaf queue 
> [%user] and invalid parent queue [root.users]
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.QueuePlacementRuleUtils.validateQueueMappingUnderParentQueue(QueuePlacementRuleUtils.java:48)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.validateAndGetAutoCreatedQueueMapping(UserGroupMappingPlacementRule.java:363)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.placement.UserGroupMappingPlacementRule.initialize(UserGroupMappingPlacementRule.java:300)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.getUserGroupMappingPlacementRule(CapacityScheduler.java:671)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updatePlacementRules(CapacityScheduler.java:712)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:753)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initScheduler(CapacityScheduler.java:361)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.serviceInit(CapacityScheduler.java:426)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   ... 7 more
> {code}
> QueuePlacementConverter#handleNestedRule has to be fixed.
> {code:java}
> else if (pr instanceof DefaultPlacementRule) {
>   DefaultPlacementRule defaultRule = (DefaultPlacementRule) pr;
>   mapping.append("u:" + USER + ":")
> .append(defaultRule.defaultQueueName)
> .append("." + USER);
> }
> {code}
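A standalone illustration of why this branch produces the rejected mapping, assuming USER holds the "%user" placeholder and the nested default rule's target queue resolves to "root.users" (values chosen to be consistent with the error message above):

{code:java}
// Mirrors the string concatenation in the converter branch quoted above;
// the variable values are assumptions chosen to match the reported error.
public class NestedRuleMappingSketch {
  public static void main(String[] args) {
    final String USER = "%user";
    final String defaultQueueName = "root.users";

    StringBuilder mapping = new StringBuilder();
    mapping.append("u:" + USER + ":")
        .append(defaultQueueName)
        .append("." + USER);

    // Prints u:%user:root.users.%user, the mapping the RM rejects as
    // "invalid or non-leaf queue [%user] and invalid parent queue [root.users]".
    System.out.println(mapping);
  }
}
{code}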



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10245) Verbose logging in Capacity Scheduler

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094435#comment-17094435
 ] 

Hadoop QA commented on YARN-10245:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
24s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 97m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25944/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10245 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001436/YARN-10245-003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux d15246b39fb3 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5e0eda5d5f6 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25944/testReport/ |
| Max. process+thread count | 831 (vs. ulimit of 5500) |
| 

[jira] [Commented] (YARN-9606) Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient

2020-04-28 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094394#comment-17094394
 ] 

Prabhu Joseph commented on YARN-9606:
-

[~BilwaST] Will review this patch.

> Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient 
> --
>
> Key: YARN-9606
> URL: https://issues.apache.org/jira/browse/YARN-9606
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9606-001.patch, YARN-9606-002.patch, 
> YARN-9606.003.patch
>
>
> The yarn logs command fails for running containers:
> {quote}
> Unable to fetch log files list
>  Exception in thread "main" java.io.IOException: 
> com.sun.jersey.api.client.ClientHandlerException: 
> javax.net.ssl.SSLHandshakeException: Error while authenticating with 
> endpoint: 
> [https://vm2:65321/ws/v1/node/containers/container_e05_1559802125016_0001_01_08/logs]
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getContainerLogFiles(LogsCLI.java:543)
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getMatchedContainerLogFiles(LogsCLI.java:1338)
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getMatchedOptionForRunningApp(LogsCLI.java:1514)
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchContainerLogs(LogsCLI.java:1052)
>  at org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:367)
>  at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:152)
>  at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:399)
>  {quote}
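The stack trace shows the HTTPS handshake failing before the log file list can be fetched. Below is a sketch of the general approach suggested by the summary, assuming the standard Hadoop SSLFactory and AuthenticatedURL APIs and not necessarily matching the attached patches: configure the connection through an SSLFactory so the client trusts the cluster's SSL settings.

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
import org.apache.hadoop.security.ssl.SSLFactory;

// Sketch only: wire an SSLFactory (which implements ConnectionConfigurator)
// into AuthenticatedURL so HTTPS endpoints are trusted via the client SSL config.
public class SslAwareLogFetchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    SSLFactory sslFactory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    sslFactory.init();
    try {
      AuthenticatedURL authUrl =
          new AuthenticatedURL(new KerberosAuthenticator(), sslFactory);
      AuthenticatedURL.Token token = new AuthenticatedURL.Token();
      URL url = new URL(args[0]);  // e.g. an https NM /ws/v1/node/containers/.../logs URL
      HttpURLConnection conn = authUrl.openConnection(url, token);
      System.out.println("HTTP response code: " + conn.getResponseCode());
    } finally {
      sslFactory.destroy();
    }
  }
}
{code}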



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10247) Application priority queue ACLs are not respected

2020-04-28 Thread Sunil G (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-10247:
---
Attachment: YARN-10247.0001.patch

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from queue path jira.
> App priority acls are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported
> priority is 4. However, I can submit an application with priority 6 as this user.
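A hypothetical sketch of the enforcement the description expects, not the CapacityScheduler's real priority-ACL classes: a submission above the configured max_priority for the matching user should be capped (or rejected, depending on policy) instead of being accepted as-is.

{code:java}
// Hypothetical sketch only; the real CapacityScheduler ACL classes are not used here.
public class PriorityAclSketch {

  // Returns the priority that should actually be granted for a submission.
  static int enforceMaxPriority(String user, int requestedPriority,
                                String aclUser, int maxPriority) {
    if (user.equals(aclUser) && requestedPriority > maxPriority) {
      return maxPriority;  // cap at the ceiling (a scheduler might reject instead)
    }
    return requestedPriority;
  }

  public static void main(String[] args) {
    // ACL from the description: [user=john group=users max_priority=4].
    // john asks for priority 6; with enforcement in place he gets at most 4.
    System.out.println(enforceMaxPriority("john", 6, "john", 4));  // prints 4
  }
}
{code}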



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10247) Application priority queue ACLs are not respected

2020-04-28 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094374#comment-17094374
 ] 

Sunil G commented on YARN-10247:


[~shuzirra] [~prabhujoseph] please help to review this change.

> Application priority queue ACLs are not respected
> -
>
> Key: YARN-10247
> URL: https://issues.apache.org/jira/browse/YARN-10247
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-10247.0001.patch
>
>
> This is a regression from queue path jira.
> App priority acls are not working correctly. 
> {code:java}
> yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
> group=users max_priority=4]
> {code}
> max_priority enforcement is not working. For user john, the maximum supported
> priority is 4. However, I can submit an application with priority 6 as this user.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094370#comment-17094370
 ] 

Hadoop QA commented on YARN-10215:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} jshint {color} | {color:blue}  0m  
0s{color} | {color:blue} jshint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
30s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 1 extant findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
21s{color} | {color:blue} branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 31s{color} | {color:orange} root: The patch generated 1 new + 21 unchanged - 
3 fixed = 22 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
20s{color} | {color:blue} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui has no 
data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
8s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 41s{color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-mapreduce-client-hs in 

[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094366#comment-17094366
 ] 

Hadoop QA commented on YARN-6553:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25945/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-6553 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001438/YARN-6553.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux 4e67f234a0e4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5e0eda5d5f6 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 

[jira] [Commented] (YARN-9606) Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient

2020-04-28 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094359#comment-17094359
 ] 

Bilwa S T commented on YARN-9606:
-

[~prabhujoseph] can you please review this when you get time?

> Set sslfactory for AuthenticatedURL() while creating LogsCLI#webServiceClient 
> --
>
> Key: YARN-9606
> URL: https://issues.apache.org/jira/browse/YARN-9606
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-9606-001.patch, YARN-9606-002.patch, 
> YARN-9606.003.patch
>
>
> The yarn logs command fails for running containers:
> {quote}
> Unable to fetch log files list
>  Exception in thread "main" java.io.IOException: 
> com.sun.jersey.api.client.ClientHandlerException: 
> javax.net.ssl.SSLHandshakeException: Error while authenticating with 
> endpoint: 
> [https://vm2:65321/ws/v1/node/containers/container_e05_1559802125016_0001_01_08/logs]
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getContainerLogFiles(LogsCLI.java:543)
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getMatchedContainerLogFiles(LogsCLI.java:1338)
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.getMatchedOptionForRunningApp(LogsCLI.java:1514)
>  at 
> org.apache.hadoop.yarn.client.cli.LogsCLI.fetchContainerLogs(LogsCLI.java:1052)
>  at org.apache.hadoop.yarn.client.cli.LogsCLI.runCommand(LogsCLI.java:367)
>  at org.apache.hadoop.yarn.client.cli.LogsCLI.run(LogsCLI.java:152)
>  at org.apache.hadoop.yarn.client.cli.LogsCLI.main(LogsCLI.java:399)
>  {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094354#comment-17094354
 ] 

Hadoop QA commented on YARN-10215:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} jshint {color} | {color:blue}  0m  
1s{color} | {color:blue} jshint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
19s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 1 extant findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
19s{color} | {color:blue} branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui 
no findbugs output file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  4s{color} | {color:orange} root: The patch generated 1 new + 21 unchanged - 
3 fixed = 22 total (was 24) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui has no 
data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
29s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
18s{color} | {color:green} 

[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Attachment: YARN-6553.003.patch

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This jira proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.
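
A rough sketch of what the replacement looks like in practice (the helper methods follow common {{MockRM}} usage and may differ between branches; this is not one of the attached patches): the test spins up a real in-process RM and registers a mock NM against it, instead of stubbing RM behaviour through {{MockResourceManagerFacade}}.

{code:java}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;

public class RouterWithMockRMExample {

  public static void main(String[] args) throws Exception {
    YarnConfiguration conf = new YarnConfiguration();
    MockRM rm = new MockRM(conf);
    rm.start();
    try {
      // Register a NodeManager so the scheduler has capacity to work with.
      MockNM nm = rm.registerNode("127.0.0.1:1234", 8 * 1024);
      nm.nodeHeartbeat(true);
      // AMRMProxy/Router test code would now point its RM client at this
      // in-process RM rather than at MockResourceManagerFacade.
    } finally {
      rm.stop();
    }
  }
}
{code}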



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Attachment: (was: YARN-6553.003.patch)

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This jira proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10196) destroying app leaks zookeeper connection

2020-04-28 Thread kyungwan nam (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094308#comment-17094308
 ] 

kyungwan nam commented on YARN-10196:
-

Hi [~prabhujoseph], this definitely seems like a bug.
Can you please take a look at this?
The patch works well in my cluster.
Thanks~

> destroying app leaks zookeeper connection
> -
>
> Key: YARN-10196
> URL: https://issues.apache.org/jira/browse/YARN-10196
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-10196.001.patch, YARN-10196.002.patch
>
>
> When destroying an app, the curatorClient in ServiceClient is started, but it 
> is never closed.
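
A minimal sketch of the close-on-exit pattern this points at (the class and method names below are illustrative, not the actual {{ServiceClient}} code): whatever Curator client is started for the destroy path must be closed when the operation finishes, otherwise every destroy leaks a ZooKeeper connection.

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryNTimes;
import org.apache.curator.utils.CloseableUtils;

public class DestroyWithCuratorCleanup {

  public void destroyServicePaths(String zkConnectString, String servicePath)
      throws Exception {
    CuratorFramework curator = CuratorFrameworkFactory.newClient(
        zkConnectString, new RetryNTimes(3, 1000));
    curator.start();
    try {
      // Remove the service's registry entries, as destroy is expected to do.
      if (curator.checkExists().forPath(servicePath) != null) {
        curator.delete().deletingChildrenIfNeeded().forPath(servicePath);
      }
    } finally {
      // Without this close, each destroy call leaks a ZooKeeper connection.
      CloseableUtils.closeQuietly(curator);
    }
  }
}
{code}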



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10196) destroying app leaks zookeeper connection

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094282#comment-17094282
 ] 

Hadoop QA commented on YARN-10196:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
56s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
38s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.service.client.ServiceClient.curatorClient; locked 57% 
of time. Unsynchronized access at ServiceClient.java:[line 167] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25941/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-10196 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001420/YARN-10196.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 10fe57f6eb44 

[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094278#comment-17094278
 ] 

Hadoop QA commented on YARN-6553:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
59s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25940/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-6553 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001418/YARN-6553.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux ae7908a6f2f0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5e0eda5d5f6 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| 

[jira] [Comment Edited] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094242#comment-17094242
 ] 

Andras Gyori edited comment on YARN-10215 at 4/28/20, 8:03 AM:
---

Updated the patch to return the redirected url in a 200 response rather than in 
a 206 response. 


was (Author: gandras):
Updated the patch to return the redirected url in a 202 response rather than in 
a 206 response. 

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (on the node where the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when the redirect is made. The browser then blocks access to the 
> NodeManager's web UI, because the CORS header configured for the NM does not 
> allow the null Origin of the redirected request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach we can expose another endpoint which only 
> returns the URL of the NodeManager, and call that URL directly from the UIv2 
> in order to receive the log. This adds a bit of complexity, but it will enable 
> users to keep the CORS-protected setup.
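
A rough sketch of such an endpoint as a JAX-RS resource (the path, class name, and JSON shape below are invented for illustration and are not taken from the attached patches): the server returns the NodeManager log URL in an ordinary 200 response body instead of redirecting, so the UIv2 issues the follow-up request itself with its Origin header intact.

{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/ws/v1/history")
public class ContainerLogDirectUrlResource {

  @GET
  @Path("/containers/{containerid}/logs/{filename}/location")
  @Produces(MediaType.APPLICATION_JSON)
  public Response getLogFileLocation(@PathParam("containerid") String containerId,
                                     @PathParam("filename") String fileName) {
    // Look up the NM hosting the (still running) container; stubbed here.
    String nmWebAddress = resolveNodeManagerWebAddress(containerId);
    String directUrl = String.format("%s/ws/v1/node/containers/%s/logs/%s",
        nmWebAddress, containerId, fileName);
    // Plain 200 with the URL in the body: the browser never performs a
    // cross-origin redirect, so CORS is negotiated directly with the NM.
    return Response.ok("{\"location\":\"" + directUrl + "\"}").build();
  }

  private String resolveNodeManagerWebAddress(String containerId) {
    // Placeholder: a real implementation would query the RM/AHS for the node.
    return "https://nm-host:8044";
  }
}
{code}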



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10215:

Attachment: (was: YARN-10025.004.patch)

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (on the node where the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when the redirect is made. The browser then blocks access to the 
> NodeManager's web UI, because the CORS header configured for the NM does not 
> allow the null Origin of the redirected request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach we can expose another endpoint which only 
> returns the URL of the NodeManager, and call that URL directly from the UIv2 
> in order to receive the log. This adds a bit of complexity, but it will enable 
> users to keep the CORS-protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10215:

Attachment: YARN-10025.004.patch

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (on the node where the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when the redirect is made. The browser then blocks access to the 
> NodeManager's web UI, because the CORS header configured for the NM does not 
> allow the null Origin of the redirected request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach we can expose another endpoint which only 
> returns the URL of the NodeManager, and call that URL directly from the UIv2 
> in order to receive the log. This adds a bit of complexity, but it will enable 
> users to keep the CORS-protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094242#comment-17094242
 ] 

Andras Gyori commented on YARN-10215:
-

Updated the patch to return the redirected url in a 202 response rather than a 
206. response. 

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (on the node where the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when the redirect is made. The browser then blocks access to the 
> NodeManager's web UI, because the CORS header configured for the NM does not 
> allow the null Origin of the redirected request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach we can expose another endpoint which only 
> returns the URL of the NodeManager, and call that URL directly from the UIv2 
> in order to receive the log. This adds a bit of complexity, but it will enable 
> users to keep the CORS-protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094242#comment-17094242
 ] 

Andras Gyori edited comment on YARN-10215 at 4/28/20, 7:35 AM:
---

Updated the patch to return the redirected url in a 202 response rather than in 
a 206 response. 


was (Author: gandras):
Updated the patch to return the redirected url in a 202 response rather than a 
206. response. 

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (on the node where the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when the redirect is made. The browser then blocks access to the 
> NodeManager's web UI, because the CORS header configured for the NM does not 
> allow the null Origin of the redirected request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach we can expose another endpoint which only 
> returns the URL of the NodeManager, and call that URL directly from the UIv2 
> in order to receive the log. This adds a bit of complexity, but it will enable 
> users to keep the CORS-protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10215) Endpoint for obtaining direct URL for the logs

2020-04-28 Thread Andras Gyori (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Gyori updated YARN-10215:

Attachment: YARN-10025.004.patch

> Endpoint for obtaining direct URL for the logs
> --
>
> Key: YARN-10215
> URL: https://issues.apache.org/jira/browse/YARN-10215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Andras Gyori
>Priority: Major
> Attachments: YARN-10025.001.patch, YARN-10025.002.patch, 
> YARN-10025.003.patch, YARN-10025.004.patch
>
>
> If CORS protected UIs are set up, there is an issue when the browser tries to 
> access the logs of a running container in the RM web UIv2.
> Assuming ATS is not up, the browser follows the following call chain:
> - Tries to access ATS, it fails, falls back to JHS
> - From the RM the browser receives basic app info, so we know that the 
> application is running
> - From the JHS we get the list of containers and their log files.
> - When we try to access a specific log file, the JHS redirects the request to 
> the NM's UI (on the node where the container is running). This redirect is 
> performed by the browser automatically. In this setup the host is considered 
> protected information, so the browser omits the "Origin" field from the 
> request when the redirect is made. The browser then blocks access to the 
> NodeManager's web UI, because the CORS header configured for the NM does not 
> allow the null Origin of the redirected request.
> - Finally, a "Logs are unavailable" message is shown in the RM web UIv2 due to 
> the CORS violation.
> We should fix this. As an approach we can expose another endpoint which only 
> returns the URL of the NodeManager, and call that URL directly from the UIv2 
> in order to receive the log. This adds a bit of complexity, but it will enable 
> users to keep the CORS-protected setup.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10247) Application priority queue ACLs are not respected

2020-04-28 Thread Sunil G (Jira)
Sunil G created YARN-10247:
--

 Summary: Application priority queue ACLs are not respected
 Key: YARN-10247
 URL: https://issues.apache.org/jira/browse/YARN-10247
 Project: Hadoop YARN
  Issue Type: Task
  Components: capacity scheduler
Reporter: Sunil G
Assignee: Sunil G


This is a regression from queue path jira.

App priority ACLs are not working correctly. 
{code:java}
yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
group=users max_priority=4]
{code}
max_priority enforcement is not working. For user john, the maximum supported 
priority is 4. However, I can still submit an application with priority 6 as this user.
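
A minimal, self-contained sketch of the enforcement this configuration implies (the class below is hypothetical and is not the CapacityScheduler implementation): a submission above the per-user max_priority for the queue should be rejected rather than accepted.

{code:java}
import java.util.Collections;
import java.util.Map;

/** Hypothetical checker mirroring acl_application_max_priority semantics. */
public class AppPriorityAclCheck {

  private final Map<String, Integer> maxPriorityByUser;

  public AppPriorityAclCheck(Map<String, Integer> maxPriorityByUser) {
    this.maxPriorityByUser = maxPriorityByUser;
  }

  /** Rejects a submission whose priority exceeds the user's configured limit. */
  public int checkAndGetPriority(String user, int requestedPriority) {
    Integer max = maxPriorityByUser.get(user);
    if (max != null && requestedPriority > max) {
      throw new IllegalArgumentException("User " + user
          + " may not submit at priority " + requestedPriority
          + "; maximum allowed is " + max);
    }
    return requestedPriority;
  }

  public static void main(String[] args) {
    // Mirrors [user=john group=users max_priority=4] from the config above.
    AppPriorityAclCheck check =
        new AppPriorityAclCheck(Collections.singletonMap("john", 4));
    System.out.println(check.checkAndGetPriority("john", 3)); // allowed
    check.checkAndGetPriority("john", 6); // should be rejected, per this report
  }
}
{code}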



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10196) destroying app leaks zookeeper connection

2020-04-28 Thread kyungwan nam (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-10196:

Attachment: YARN-10196.002.patch

> destroying app leaks zookeeper connection
> -
>
> Key: YARN-10196
> URL: https://issues.apache.org/jira/browse/YARN-10196
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-10196.001.patch, YARN-10196.002.patch
>
>
> When destroying an app, the curatorClient in ServiceClient is started, but it 
> is never closed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-6553:

Attachment: YARN-6553.003.patch

> Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests
> 
>
> Key: YARN-6553
> URL: https://issues.apache.org/jira/browse/YARN-6553
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Bilwa S T
>Priority: Major
> Attachments: YARN-6553.001.patch, YARN-6553.002.patch, 
> YARN-6553.003.patch
>
>
> Currently the AMRMProxy and Router tests use the 
> {{MockResourceManagerFacade}}. This jira proposes replacing it with 
> {{MockRM}}, as is done in the majority of the tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6553) Replace MockResourceManagerFacade with MockRM for AMRMProxy/Router tests

2020-04-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17094201#comment-17094201
 ] 

Hadoop QA commented on YARN-6553:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router: 
The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
27s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-YARN-Build/25939/artifact/out/Dockerfile
 |
| JIRA Issue | YARN-6553 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13001406/YARN-6553.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml findbugs checkstyle |
| uname | Linux 7ef83453942d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5e0eda5d5f6 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
|