[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: (was: YARN-10042.001.patch)

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log, yarn_csi_tests_aarch64_grpc_1.26.0.log, 
> yarn_csi_tests_x86_64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and the 1.26.0 release is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It would be better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.
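The dependency bump described above would typically be a small Maven change. The fragment below is a hypothetical sketch only, assuming the project centralizes the gRPC version in a single property; Hadoop's actual pom layout may differ.

```xml
<!-- Hypothetical sketch: assumes a single shared version property for the
     io.grpc artifacts; not the actual Hadoop pom structure. -->
<properties>
  <grpc.version>1.26.0</grpc.version>
</properties>

<dependency>
  <groupId>io.grpc</groupId>
  <artifactId>grpc-core</artifactId>
  <version>${grpc.version}</version>
</dependency>
```

With such a property in place, upgrading all grpc-xxx artifacts together is a one-line change.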



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: (was: YARN-10042.001.patch)

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: YARN-10042.001.patch, 
> hadoop_build_aarch64_grpc_1.26.0.log, hadoop_build_x86_64_grpc_1.26.0.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and the 1.26.0 release is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It would be better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: YARN-10042.001.patch

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: YARN-10042.001.patch, YARN-10042.001.patch, 
> hadoop_build_aarch64_grpc_1.26.0.log, hadoop_build_x86_64_grpc_1.26.0.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and the 1.26.0 release is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It would be better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.






[jira] [Updated] (YARN-10045) hostResolver should take DNS server's availability into account

2019-12-18 Thread chaoli (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chaoli updated YARN-10045:
--
Description: 
As we all know, the hostResolver in Hadoop is implemented by 
*SecurityUtil.getByName*.

There are two implementations of hostResolver: *QualifiedHostResolver* and 
*StandardHostResolver*. Neither of them provides a circuit breaker for the case 
where the DNS server is unavailable.

My question is: how can I fail over quickly when the DNS server is unavailable, 
given that a DNS server may not always work well? Alternatively, can the 
NodeManager simply register using an IP address to avoid DNS problems, and what 
are the shortcomings of using an IP instead of a hostname?

  was:
As we all know, the hostResolver in Hadoop is implemented by 
SecurityUtil.getByName.

There are two implementations of hostResolver: QualifiedHostResolver and 
StandardHostResolver. Neither of them provides a circuit breaker for the case 
where the DNS server is unavailable.

My question is: how can I fail over quickly when the DNS server is unavailable, 
given that a DNS server may not always work well? Alternatively, can the 
NodeManager simply register using an IP address to avoid DNS problems, and what 
are the shortcomings of using an IP instead of a hostname?


> hostResolver should take DNS server's availability into account
> ---
>
> Key: YARN-10045
> URL: https://issues.apache.org/jira/browse/YARN-10045
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: chaoli
>Priority: Major
>
> As we all know, the hostResolver in Hadoop is implemented by 
> *SecurityUtil.getByName*.
> There are two implementations of hostResolver: *QualifiedHostResolver* and 
> *StandardHostResolver*. Neither of them provides a circuit breaker for the 
> case where the DNS server is unavailable.
> My question is: how can I fail over quickly when the DNS server is 
> unavailable, given that a DNS server may not always work well? Alternatively, 
> can the NodeManager simply register using an IP address to avoid DNS problems, 
> and what are the shortcomings of using an IP instead of a hostname?
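The fail-fast behaviour the issue asks for is usually built as a circuit breaker around the resolver. The sketch below is illustrative only and not Hadoop code; the class name, threshold, and cool-down values are assumptions chosen for the example.

```java
/**
 * Hypothetical sketch (not Hadoop's SecurityUtil): a minimal circuit
 * breaker that a host resolver could consult before attempting a DNS
 * lookup, so that a dead DNS server causes fast failures instead of
 * repeated slow timeouts.
 */
public class ResolverCircuitBreaker {
  private final int failureThreshold;  // consecutive failures before tripping
  private final long openMillis;       // how long to stay open (fail fast)
  private int consecutiveFailures = 0;
  private long openedAt = -1;          // -1 means the breaker is closed

  public ResolverCircuitBreaker(int failureThreshold, long openMillis) {
    this.failureThreshold = failureThreshold;
    this.openMillis = openMillis;
  }

  /** While open, callers should skip the lookup and fail fast. */
  public boolean isOpen(long nowMillis) {
    return openedAt >= 0 && (nowMillis - openedAt) < openMillis;
  }

  /** Record a failed lookup; trips the breaker at the threshold. */
  public void recordFailure(long nowMillis) {
    if (++consecutiveFailures >= failureThreshold) {
      openedAt = nowMillis;
    }
  }

  /** Record a successful lookup; resets and closes the breaker. */
  public void recordSuccess() {
    consecutiveFailures = 0;
    openedAt = -1;
  }
}
```

A resolver wrapper would call `isOpen` before each lookup and `recordFailure`/`recordSuccess` after it; after `openMillis` elapses the next lookup is attempted again (a simple half-open probe).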






[jira] [Updated] (YARN-10045) hostResolver should take DNS server's availability into account

2019-12-18 Thread chaoli (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chaoli updated YARN-10045:
--
Description: 
As we all know, the hostResolver in Hadoop is implemented by 
SecurityUtil.getByName.

There are two implementations of hostResolver: QualifiedHostResolver and 
StandardHostResolver. Neither of them provides a circuit breaker for the case 
where the DNS server is unavailable.

My question is: how can I fail over quickly when the DNS server is unavailable, 
given that a DNS server may not always work well? Alternatively, can the 
NodeManager simply register using an IP address to avoid DNS problems, and what 
are the shortcomings of using an IP instead of a hostname?
Summary: hostResolver should take DNS server's availability into 
account  (was: Host Resolve should take DNS server )

> hostResolver should take DNS server's availability into account
> ---
>
> Key: YARN-10045
> URL: https://issues.apache.org/jira/browse/YARN-10045
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: chaoli
>Priority: Major
>
> As we all know, the hostResolver in Hadoop is implemented by 
> SecurityUtil.getByName.
> There are two implementations of hostResolver: QualifiedHostResolver and 
> StandardHostResolver. Neither of them provides a circuit breaker for the 
> case where the DNS server is unavailable.
> My question is: how can I fail over quickly when the DNS server is 
> unavailable, given that a DNS server may not always work well? Alternatively, 
> can the NodeManager simply register using an IP address to avoid DNS problems, 
> and what are the shortcomings of using an IP instead of a hostname?






[jira] [Created] (YARN-10045) Host Resolve should take DNS server

2019-12-18 Thread chaoli (Jira)
chaoli created YARN-10045:
-

 Summary: Host Resolve should take DNS server 
 Key: YARN-10045
 URL: https://issues.apache.org/jira/browse/YARN-10045
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: chaoli









[jira] [Commented] (YARN-10038) [UI] Finish Time is not correctly parsed in the RM Apps page

2019-12-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999630#comment-16999630
 ] 

Hadoop QA commented on YARN-10038:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: 
The patch generated 0 new + 56 unchanged - 2 fixed = 56 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
49s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 
22s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | YARN-10038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989143/YARN-10038.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0af5981f119b 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7b93575 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (YARN-10043) FairOrderingPolicy Improvements

2019-12-18 Thread Wilfred Spiegelenburg (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999627#comment-16999627
 ] 

Wilfred Spiegelenburg commented on YARN-10043:
--

Improvements are always welcome, especially if they help bring us to a single 
scheduler that does proper fair and capacity scheduling.
Giving some more specifics on what you are thinking about would help.

> FairOrderingPolicy Improvements
> ---
>
> Key: YARN-10043
> URL: https://issues.apache.org/jira/browse/YARN-10043
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Manikandan R
>Assignee: Manikandan R
>Priority: Major
>
> FairOrderingPolicy can be improved by adopting the relevant approaches 
> implemented in the FairSharePolicy of the Fair Scheduler. This improvement is 
> significant in the FS-to-CS migration context.
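The core idea behind fair ordering, in both schedulers, is to schedule the entity that is furthest below its fair share. The sketch below is an illustration of that comparator idea only; the class and field names are assumptions, not CapacityScheduler or FairScheduler code.

```java
import java.util.Comparator;

/**
 * Illustrative sketch: order schedulable entities by their
 * usage-to-weight ratio, so the most under-served entity is
 * scheduled first. Not actual YARN scheduler code.
 */
public class FairShareOrdering {
  /** Minimal stand-in for a schedulable app or queue. */
  public static final class Entity {
    final long used;      // currently used resource (one dimension)
    final double weight;  // configured fair-share weight

    public Entity(long used, double weight) {
      this.used = used;
      this.weight = weight;
    }
  }

  /** Lowest used/weight ratio sorts first, i.e. gets resources first. */
  public static final Comparator<Entity> FAIR =
      Comparator.comparingDouble(e -> e.used / e.weight);
}
```

FairSharePolicy in the Fair Scheduler layers further rules on top of this ratio (min-share deficits, tie-breaking by start time), which are the kinds of "relevant approaches" the issue proposes porting.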






[jira] [Updated] (YARN-10038) [UI] Finish Time is not correctly parsed in the RM Apps page

2019-12-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-10038:
---
Attachment: YARN-10038.003.patch

> [UI] Finish Time is not correctly parsed in the RM Apps page
> 
>
> Key: YARN-10038
> URL: https://issues.apache.org/jira/browse/YARN-10038
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: YARN-10038.000.patch, YARN-10038.001.patch, 
> YARN-10038.002.patch, YARN-10038.003.patch, image-2019-12-17-11-08-22-026.png
>
>
> The Finish Time is shown as Unix time (millis since 1970) instead of as a 
> date:
>  !image-2019-12-17-11-08-22-026.png! 
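The symptom is a raw epoch-milliseconds value rendered where a formatted date belongs. The snippet below is a sketch of the conversion only, not the patched RM web UI code; the class name and date pattern are assumptions.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

/**
 * Sketch of the conversion the UI needs: epoch millis -> readable date.
 * The RM web UI does its own rendering; this is illustrative only.
 */
public class FinishTimeFormat {
  private static final DateTimeFormatter FMT =
      DateTimeFormatter.ofPattern("EEE MMM dd HH:mm:ss yyyy", Locale.US)
          .withZone(ZoneOffset.UTC);

  /** Format a finish time given as milliseconds since the epoch. */
  public static String format(long finishTimeMillis) {
    return FMT.format(Instant.ofEpochMilli(finishTimeMillis));
  }
}
```

Without such a conversion step, the page shows the bare long (e.g. `1576608502026`) in the Finish Time column.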






[jira] [Commented] (YARN-8292) Fix the dominant resource preemption cannot happen when some of the resource vector becomes negative

2019-12-18 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999525#comment-16999525
 ] 

Jonathan Hung commented on YARN-8292:
-

Thanks [~epayne] for the update. Should we just commit YARN-10033 to 
branch-2.10 to address the issue you fixed between 
[^YARN-8292.branch-2.010.patch] and [^YARN-8292.branch-2.10.011.patch]? Then we 
can commit [^YARN-8292.branch-2.010.patch] to branch-2.10.

> Fix the dominant resource preemption cannot happen when some of the resource 
> vector becomes negative
> 
>
> Key: YARN-8292
> URL: https://issues.apache.org/jira/browse/YARN-8292
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Wangda Tan
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8292.001.patch, YARN-8292.002.patch, 
> YARN-8292.003.patch, YARN-8292.004.patch, YARN-8292.005.patch, 
> YARN-8292.006.patch, YARN-8292.007.patch, YARN-8292.008.patch, 
> YARN-8292.009.patch, YARN-8292.branch-2.009.patch, 
> YARN-8292.branch-2.010.patch, YARN-8292.branch-2.10.011.patch
>
>
> This is an example of the problem: 
>   
> {code}
> //   guaranteed,  max,used,   pending
> "root(=[30:18:6  30:18:6 12:12:6 1:1:1]);" + //root
> "-a(=[10:6:2 10:6:2  6:6:3   0:0:0]);" + // a
> "-b(=[10:6:2 10:6:2  6:6:3   0:0:0]);" + // b
> "-c(=[10:6:2 10:6:2  0:0:0   1:1:1])"; // c
> {code}
> There are 3 resource types. The total resource of the cluster is 30:18:6.
> For both a and b, there are 3 containers running, each of size 2:2:1.
> Queue c uses 0 resources and has 1:1:1 pending.
> Under the existing logic, preemption cannot happen.
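The arithmetic behind the report can be made concrete. The sketch below is assumed logic for illustration, not the scheduler's preemption code: with multiple resource types, "guaranteed minus used" can go negative in some components, and a naive "any component non-positive" check then concludes the queue is over its guarantee and nothing should be preempted, even though queue c is starved.

```java
/**
 * Illustration of the multi-resource corner case (assumed logic,
 * not CapacityScheduler preemption code).
 */
public class ResourceVectors {
  /** Component-wise guaranteed - used. */
  public static int[] subtract(int[] guaranteed, int[] used) {
    int[] out = new int[guaranteed.length];
    for (int i = 0; i < guaranteed.length; i++) {
      out[i] = guaranteed[i] - used[i];
    }
    return out;
  }

  /** Naive check: flags the vector if any single component is <= 0. */
  public static boolean anyNonPositive(int[] v) {
    for (int x : v) {
      if (x <= 0) {
        return true;
      }
    }
    return false;
  }
}
```

For queue a in the example, guaranteed 10:6:2 minus used 6:6:3 gives 4:0:-1: the third component is negative, so a component-wise check sees the queue as unpreemptable-from even though it holds 4 spare units of the first resource.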






[jira] [Commented] (YARN-9894) CapacitySchedulerPerf test for measuring hundreds of apps in a large number of queues.

2019-12-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999524#comment-16999524
 ] 

Hudson commented on YARN-9894:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17776 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17776/])
YARN-9894. CapacitySchedulerPerf test for measuring hundreds of apps in (jhung: 
rev 7b93575b92e8bad889c1ef15e0baaade6de6de4d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerPerf.java


> CapacitySchedulerPerf test for measuring hundreds of apps in a large number 
> of queues.
> --
>
> Key: YARN-9894
> URL: https://issues.apache.org/jira/browse/YARN-9894
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, test
>Affects Versions: 2.9.2, 2.8.5, 3.2.1, 3.1.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-9894.001.patch, YARN-9894.002.patch
>
>
> I have developed a unit test based on the existing TestCapacitySchedulerPerf 
> tests that will measure the performance of a configurable number of apps in a 
> configurable number of queues. It will also test the performance of a cluster 
> that has many queues but only a portion of them are active.
> {code:title=For example:}
> $ mvn test 
> -Dtest=TestCapacitySchedulerPerf#testUserLimitThroughputWithManyQueues \
>   -DRunCapacitySchedulerPerfTests=true \
>   -DNumberOfQueues=100 \
>   -DNumberOfApplications=200 \
>   -DPercentActiveQueues=100
> {code}
> - Parameters:
> -- RunCapacitySchedulerPerfTests=true:
> Needed in order to trigger the test
> -- NumberOfQueues
> Configurable number of queues
> -- NumberOfApplications
> Total number of apps to run in the whole cluster, distributed evenly across 
> all queues
> -- PercentActiveQueues
> Percentage of the queues that contain active applications






[jira] [Commented] (YARN-9894) CapacitySchedulerPerf test for measuring hundreds of apps in a large number of queues.

2019-12-18 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999523#comment-16999523
 ] 

Eric Payne commented on YARN-9894:
--

Thanks [~jhung]!

> CapacitySchedulerPerf test for measuring hundreds of apps in a large number 
> of queues.
> --
>
> Key: YARN-9894
> URL: https://issues.apache.org/jira/browse/YARN-9894
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, test
>Affects Versions: 2.9.2, 2.8.5, 3.2.1, 3.1.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-9894.001.patch, YARN-9894.002.patch
>
>
> I have developed a unit test based on the existing TestCapacitySchedulerPerf 
> tests that will measure the performance of a configurable number of apps in a 
> configurable number of queues. It will also test the performance of a cluster 
> that has many queues but only a portion of them are active.
> {code:title=For example:}
> $ mvn test 
> -Dtest=TestCapacitySchedulerPerf#testUserLimitThroughputWithManyQueues \
>   -DRunCapacitySchedulerPerfTests=true \
>   -DNumberOfQueues=100 \
>   -DNumberOfApplications=200 \
>   -DPercentActiveQueues=100
> {code}
> - Parameters:
> -- RunCapacitySchedulerPerfTests=true:
> Needed in order to trigger the test
> -- NumberOfQueues
> Configurable number of queues
> -- NumberOfApplications
> Total number of apps to run in the whole cluster, distributed evenly across 
> all queues
> -- PercentActiveQueues
> Percentage of the queues that contain active applications






[jira] [Commented] (YARN-10038) [UI] Finish Time is not correctly parsed in the RM Apps page

2019-12-18 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999515#comment-16999515
 ] 

Hadoop QA commented on YARN-10038:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 22 new 
+ 56 unchanged - 2 fixed = 78 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 13s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  org.apache.hadoop.yarn.server.resourcemanager.webapp.RMAppsBlock.COLUMNS 
should be package protected  At RMAppsBlock.java: At RMAppsBlock.java:[line 55] 
|
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:e573ea49085 |
| JIRA Issue | YARN-10038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989133/YARN-10038.002.patch |
| 

[jira] [Commented] (YARN-10039) Allow disabling app submission from REST endpoints

2019-12-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999443#comment-16999443
 ] 

Hudson commented on YARN-10039:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17774 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17774/])
YARN-10039. Allow disabling app submission from REST endpoints (jhung: rev 
fddc3d55c3e309936216b8c61944e11350e2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Allow disabling app submission from REST endpoints
> --
>
> Key: YARN-10039
> URL: https://issues.apache.org/jira/browse/YARN-10039
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-10039.001.patch
>
>
> Introduce a configuration which allows disabling /apps/new-application and 
> /apps POST endpoints. 
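Such a switch is typically surfaced as a boolean property in yarn-site.xml. The fragment below is illustrative only: the property key shown is a placeholder assumption, and the actual key is defined by the patch's yarn-default.xml change.

```xml
<!-- Illustrative only: placeholder key, not the actual property name
     introduced by YARN-10039 (see its yarn-default.xml edit). -->
<property>
  <name>yarn.resourcemanager.webapp.rest-app-submission.enabled</name>
  <value>false</value>
</property>
```

With the flag set to false, the RM would reject POSTs to /apps/new-application and /apps while leaving read-only REST endpoints untouched.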






[jira] [Commented] (YARN-10038) [UI] Finish Time is not correctly parsed in the RM Apps page

2019-12-18 Thread Giovanni Matteo Fumarola (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999442#comment-16999442
 ] 

Giovanni Matteo Fumarola commented on YARN-10038:
-

Thanks [~elgoiri] for working on this and adding a unit test that will prevent 
this from happening again.

I do not see any issue with [^YARN-10038.002.patch] . We can commit this after 
Yetus results.

> [UI] Finish Time is not correctly parsed in the RM Apps page
> 
>
> Key: YARN-10038
> URL: https://issues.apache.org/jira/browse/YARN-10038
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: YARN-10038.000.patch, YARN-10038.001.patch, 
> YARN-10038.002.patch, image-2019-12-17-11-08-22-026.png
>
>
> The Finish Time is shown as Unix time (millis since 1970) instead of as a 
> date:
>  !image-2019-12-17-11-08-22-026.png! 






[jira] [Commented] (YARN-10038) [UI] Finish Time is not correctly parsed in the RM Apps page

2019-12-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999433#comment-16999433
 ] 

Íñigo Goiri commented on YARN-10038:


I added a unit test in [^YARN-10038.002.patch] trying to prevent this from 
happening again.
I had to do some refactoring and define the columns a little better.
Not the best approach ever, but it is better than breaking this all the time.
A similar approach should be used for other columns (e.g., the node page).

> [UI] Finish Time is not correctly parsed in the RM Apps page
> 
>
> Key: YARN-10038
> URL: https://issues.apache.org/jira/browse/YARN-10038
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: YARN-10038.000.patch, YARN-10038.001.patch, 
> YARN-10038.002.patch, image-2019-12-17-11-08-22-026.png
>
>
> The Finish Time shows as the unix time (millis since 1970) instead of as a 
> date:
>  !image-2019-12-17-11-08-22-026.png! 






[jira] [Updated] (YARN-10038) [UI] Finish Time is not correctly parsed in the RM Apps page

2019-12-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/YARN-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated YARN-10038:
---
Attachment: YARN-10038.002.patch

> [UI] Finish Time is not correctly parsed in the RM Apps page
> 
>
> Key: YARN-10038
> URL: https://issues.apache.org/jira/browse/YARN-10038
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: YARN-10038.000.patch, YARN-10038.001.patch, 
> YARN-10038.002.patch, image-2019-12-17-11-08-22-026.png
>
>
> The Finish Time shows as the unix time (millis since 1970) instead of as a 
> date:
>  !image-2019-12-17-11-08-22-026.png! 






[jira] [Updated] (YARN-9998) Code cleanup in LeveldbConfigurationStore

2019-12-18 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9998:
-
Description: 
Many things can be improved:
* Field compactionTimer could be a local variable
* Field versiondb should be camelcase
* initDatabase is a very long method: Initialize db / versionDb should be in 
separate methods, split this method into smaller chunks
* Remove TODOs
* Remove duplicated code block in LeveldbConfigurationStore.CompactionTimerTask
* Any other cleanup

  was:
Many things can be improved:
* Field compactionTimer could be a local variable
* Field versiondb should be camelcase
* initDatabase is a very long method: Initialize db / versionDb should be in 
separate methods, split this method into smaller chunks
* Remove TODOs
* Remove duplicated code block in LeveldbConfigurationStore.CompactionTimerTask
* Any other cleanup


> Code cleanup in LeveldbConfigurationStore
> -
>
> Key: YARN-9998
> URL: https://issues.apache.org/jira/browse/YARN-9998
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
>
> Many things can be improved:
> * Field compactionTimer could be a local variable
> * Field versiondb should be camelcase
> * initDatabase is a very long method: Initialize db / versionDb should be in 
> separate methods, split this method into smaller chunks
> * Remove TODOs
> * Remove duplicated code block in 
> LeveldbConfigurationStore.CompactionTimerTask
> * Any other cleanup
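As a loose illustration of the "split initDatabase into smaller chunks" item, a standalone sketch (the maps, field names, and helper names here are invented stand-ins, not the actual LeveldbConfigurationStore members):

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch only: the maps stand in for LevelDB handles, and the
// method/field names are invented, not the actual LeveldbConfigurationStore members.
public class InitSplitSketch {
    private Map<String, String> confDb;
    private Map<String, String> versionDb; // camel-cased, per the cleanup list

    // initDatabase only orchestrates; each store gets its own small helper.
    public void initDatabase() {
        confDb = openStore("conf-store");
        versionDb = openStore("conf-version-store");
    }

    private Map<String, String> openStore(String name) {
        Map<String, String> store = new HashMap<>();
        store.put("store.name", name); // placeholder for the real leveldb open call
        return store;
    }

    public String versionStoreName() {
        return versionDb.get("store.name");
    }
}
```

Each helper is then small enough to read and test on its own, which is the point of the cleanup item.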






[jira] [Comment Edited] (YARN-10035) Add ability to filter the Cluster Applications API request by name

2019-12-18 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999423#comment-16999423
 ] 

Szilard Nemeth edited comment on YARN-10035 at 12/18/19 6:45 PM:
-

Hi [~adam.antal]!
Thanks for this patch.
In WebServices.getApps, you added an if: 
{code:java}
if (nameQuery != null && nameQuery.equals(appReport.getName())) {
  continue;
}
{code}

Shouldn't the second part of the statement be:

{code:java}
!nameQuery.equals(appReport.getName())
{code}
Actually, the continue statement executes when the app name in the query
matches the name from the appReport, so apps with the provided name are skipped.
Am I misunderstanding something?
Otherwise, the patch looks good.
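A standalone sketch of the filter semantics being suggested here (this is an illustrative rewrite with invented names, not the actual WebServices.getApps code): keep an app only when the query is unset or matches its name.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hedged standalone sketch of the intended filter: skip an app when the
// query is set and does NOT match its name. AppReport is a minimal stand-in
// for ApplicationReport#getName().
public class NameFilterSketch {
    public record AppReport(String name) {}

    public static List<AppReport> filterByName(List<AppReport> apps, String nameQuery) {
        return apps.stream()
            // Loop form: if (nameQuery != null && !nameQuery.equals(app.name())) continue;
            .filter(app -> nameQuery == null || nameQuery.equals(app.name()))
            .collect(Collectors.toList());
    }
}
```

With this condition, a null query keeps every app, a matching query keeps only the matching apps, which is the behavior the patch appears to be after.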



was (Author: snemeth):
Hi [~adam.antal]!
Thanks for this patch.
In WebServices.getApps, you added an if: 

{code:java}
  if (nameQuery != null && nameQuery.equals(appReport.getName())) {
    continue;
  }
{code}

Shouldn't the second part of the statement be:

{code:java}
!nameQuery.equals(appReport.getName())
{code}
Actually, the continue statement executes when the app name in the query
matches the name from the appReport, so apps with the provided name are skipped.
Am I misunderstanding something?
Otherwise, the patch looks good.


> Add ability to filter the Cluster Applications API request by name
> --
>
> Key: YARN-10035
> URL: https://issues.apache.org/jira/browse/YARN-10035
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10035.001.patch, YARN-10035.002.patch
>
>
> According to the 
> [documentation|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html]
>  we don't support filtering by name in the Cluster Applications API request.
> Usually application tags are a perfect way for tracking applications, but for 
> MR applications the older CLIs usually don't support providing app tags, 
> while specifying the name of the job is possible.






[jira] [Updated] (YARN-10044) ResourceManager NPE - Error in handling event type NODE_UPDATE to the Event Dispatcher

2019-12-18 Thread Jon Bringhurst (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Bringhurst updated YARN-10044:
--
Issue Type: Bug  (was: Improvement)

> ResourceManager NPE - Error in handling event type NODE_UPDATE to the Event 
> Dispatcher
> --
>
> Key: YARN-10044
> URL: https://issues.apache.org/jira/browse/YARN-10044
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Jon Bringhurst
>Priority: Major
>
> {noformat}
> 2019-12-18 00:46:42,577 [INFO] [IPC Server handler 48 on 8030] 
> resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
> USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
> TARGET=SchedulerApp RESULT=SUCCESS  APPID=
> application_1575937033226_0426
> CONTAINERID=container_e18_1575937033226_0426_01_000389  
> RESOURCE=
> 2019-12-18 00:46:42,577 [INFO] [IPC Server handler 48 on 8030] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:486) - 
> container_e18_1575937033226_0426_01_000392 Container Transitioned from 
> ACQUIRED to RELEASED
> 2019-12-18 00:46:42,578 [INFO] [IPC Server handler 48 on 8030] 
> resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
> USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
> TARGET=SchedulerApp RESULT=SUCCESS  APPID=
> application_1575937033226_0426
> CONTAINERID=container_e18_1575937033226_0426_01_000392  
> RESOURCE=
> 2019-12-18 00:46:42,578 [INFO] [SchedulerEventDispatcher:Event Processor] 
> allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(AbstractContainerAllocator.java:126)
>  - assignedContainer application attempt=appattempt_1575937033226
> _0426_01 container=null 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@73179037
>  clusterResource= type=OFF_SWITCH 
> requestedPartition=concourse
> 2019-12-18 00:46:42,578 [INFO] [IPC Server handler 48 on 8030] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:486) - 
> container_e18_1575937033226_0426_01_000393 Container Transitioned from 
> ACQUIRED to RELEASED
> 2019-12-18 00:46:42,578 [INFO] [SchedulerEventDispatcher:Event Processor] 
> capacity.ParentQueue.assignContainers(ParentQueue.java:616) - 
> assignedContainer queue=root usedCapacity=0.68548673 
> absoluteUsedCapacity=0.68548673 used= ores:11062> cluster=
> 2019-12-18 00:46:42,578 [INFO] [IPC Server handler 48 on 8030] 
> resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
> USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
> TARGET=SchedulerApp RESULT=SUCCESS  APPID=
> application_1575937033226_0426
> CONTAINERID=container_e18_1575937033226_0426_01_000393  
> RESOURCE=
> 2019-12-18 00:46:42,579 [INFO] [IPC Server handler 48 on 8030] 
> rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:486) - 
> container_e18_1575937033226_0426_01_000394 Container Transitioned from 
> ACQUIRED to RELEASED
> 2019-12-18 00:46:42,579 [INFO] [IPC Server handler 48 on 8030] 
> resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
> USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
> TARGET=SchedulerApp RESULT=SUCCESS  APPID=
> application_1575937033226_0426
> CONTAINERID=container_e18_1575937033226_0426_01_000394  
> RESOURCE=
> 2019-12-18 00:46:42,579 [INFO] [IPC Server handler 48 on 8030] 
> scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:250)
>  - checking for deactivate of application :application_1575937033226_0426
> 2019-12-18 00:46:42,580 [FATAL] [SchedulerEventDispatcher:Event Processor] 
> event.EventDispatcher$EventProcessor.run(EventDispatcher.java:75) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:533)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2563)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2429)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1359)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1348)
> at 
> 

[jira] [Comment Edited] (YARN-10035) Add ability to filter the Cluster Applications API request by name

2019-12-18 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999423#comment-16999423
 ] 

Szilard Nemeth edited comment on YARN-10035 at 12/18/19 6:45 PM:
-

Hi [~adam.antal]!
Thanks for this patch.
In WebServices.getApps, you added an if: 

{code:java}
  if (nameQuery != null && nameQuery.equals(appReport.getName())) {
    continue;
  }
{code}

Shouldn't the second part of the statement be:

{code:java}
!nameQuery.equals(appReport.getName())
{code}
Actually, the continue statement executes when the app name in the query
matches the name from the appReport, so apps with the provided name are skipped.
Am I misunderstanding something?
Otherwise, the patch looks good.



was (Author: snemeth):
Hi [~adam.antal]!
Thanks for this patch.
In WebServices.getApps, you added an if: 

{code:java}

  if (nameQuery != null && nameQuery.equals(appReport.getName())) {
    continue;
  }
{code}

Shouldn't the second part of the statement be:

{code:java}
!nameQuery.equals(appReport.getName())
{code}
Actually, the continue statement executes when the app name in the query
matches the name from the appReport, so apps with the provided name are skipped.
Am I misunderstanding something?
Otherwise, the patch looks good.


> Add ability to filter the Cluster Applications API request by name
> --
>
> Key: YARN-10035
> URL: https://issues.apache.org/jira/browse/YARN-10035
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10035.001.patch, YARN-10035.002.patch
>
>
> According to the 
> [documentation|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html]
>  we don't support filtering by name in the Cluster Applications API request.
> Usually application tags are a perfect way for tracking applications, but for 
> MR applications the older CLIs usually don't support providing app tags, 
> while specifying the name of the job is possible.






[jira] [Commented] (YARN-10035) Add ability to filter the Cluster Applications API request by name

2019-12-18 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999423#comment-16999423
 ] 

Szilard Nemeth commented on YARN-10035:
---

Hi [~adam.antal]!
Thanks for this patch.
In WebServices.getApps, you added an if: 

{code:java}

  if (nameQuery != null && nameQuery.equals(appReport.getName())) {
    continue;
  }
{code}

Shouldn't the second part of the statement be:

{code:java}
!nameQuery.equals(appReport.getName())
{code}
Actually, the continue statement executes when the app name in the query
matches the name from the appReport, so apps with the provided name are skipped.
Am I misunderstanding something?
Otherwise, the patch looks good.


> Add ability to filter the Cluster Applications API request by name
> --
>
> Key: YARN-10035
> URL: https://issues.apache.org/jira/browse/YARN-10035
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10035.001.patch, YARN-10035.002.patch
>
>
> According to the 
> [documentation|https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html]
>  we don't support filtering by name in the Cluster Applications API request.
> Usually application tags are a perfect way for tracking applications, but for 
> MR applications the older CLIs usually don't support providing app tags, 
> while specifying the name of the job is possible.






[jira] [Created] (YARN-10044) ResourceManager NPE - Error in handling event type NODE_UPDATE to the Event Dispatcher

2019-12-18 Thread Jon Bringhurst (Jira)
Jon Bringhurst created YARN-10044:
-

 Summary: ResourceManager NPE - Error in handling event type 
NODE_UPDATE to the Event Dispatcher
 Key: YARN-10044
 URL: https://issues.apache.org/jira/browse/YARN-10044
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.9.2
Reporter: Jon Bringhurst


{noformat}
2019-12-18 00:46:42,577 [INFO] [IPC Server handler 48 on 8030] 
resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
TARGET=SchedulerApp RESULT=SUCCESS  APPID=
application_1575937033226_0426
CONTAINERID=container_e18_1575937033226_0426_01_000389  RESOURCE=
2019-12-18 00:46:42,577 [INFO] [IPC Server handler 48 on 8030] 
rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:486) - 
container_e18_1575937033226_0426_01_000392 Container Transitioned from ACQUIRED 
to RELEASED
2019-12-18 00:46:42,578 [INFO] [IPC Server handler 48 on 8030] 
resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
TARGET=SchedulerApp RESULT=SUCCESS  APPID=
application_1575937033226_0426
CONTAINERID=container_e18_1575937033226_0426_01_000392  RESOURCE=
2019-12-18 00:46:42,578 [INFO] [SchedulerEventDispatcher:Event Processor] 
allocator.AbstractContainerAllocator.getCSAssignmentFromAllocateResult(AbstractContainerAllocator.java:126)
 - assignedContainer application attempt=appattempt_1575937033226
_0426_01 container=null 
queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@73179037
 clusterResource= type=OFF_SWITCH 
requestedPartition=concourse
2019-12-18 00:46:42,578 [INFO] [IPC Server handler 48 on 8030] 
rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:486) - 
container_e18_1575937033226_0426_01_000393 Container Transitioned from ACQUIRED 
to RELEASED
2019-12-18 00:46:42,578 [INFO] [SchedulerEventDispatcher:Event Processor] 
capacity.ParentQueue.assignContainers(ParentQueue.java:616) - assignedContainer 
queue=root usedCapacity=0.68548673 absoluteUsedCapacity=0.68548673 
used= cluster=
2019-12-18 00:46:42,578 [INFO] [IPC Server handler 48 on 8030] 
resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
TARGET=SchedulerApp RESULT=SUCCESS  APPID=
application_1575937033226_0426
CONTAINERID=container_e18_1575937033226_0426_01_000393  RESOURCE=
2019-12-18 00:46:42,579 [INFO] [IPC Server handler 48 on 8030] 
rmcontainer.RMContainerImpl.handle(RMContainerImpl.java:486) - 
container_e18_1575937033226_0426_01_000394 Container Transitioned from ACQUIRED 
to RELEASED
2019-12-18 00:46:42,579 [INFO] [IPC Server handler 48 on 8030] 
resourcemanager.RMAuditLogger.logSuccess(RMAuditLogger.java:200) - 
USER=vapp5003 IP=10.186.103.102   OPERATION=AM Released Container 
TARGET=SchedulerApp RESULT=SUCCESS  APPID=
application_1575937033226_0426
CONTAINERID=container_e18_1575937033226_0426_01_000394  RESOURCE=
2019-12-18 00:46:42,579 [INFO] [IPC Server handler 48 on 8030] 
scheduler.AppSchedulingInfo.updatePendingResources(AppSchedulingInfo.java:250) 
- checking for deactivate of application :application_1575937033226_0426
2019-12-18 00:46:42,580 [FATAL] [SchedulerEventDispatcher:Event Processor] 
event.EventDispatcher$EventProcessor.run(EventDispatcher.java:75) - Error in 
handling event type NODE_UPDATE to the Event Dispatcher
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:448)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:533)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2563)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2429)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1359)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1348)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1437)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1208)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:1070)
at 

[jira] [Commented] (YARN-10039) Allow disabling app submission from REST endpoints

2019-12-18 Thread Haibo Chen (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999400#comment-16999400
 ] 

Haibo Chen commented on YARN-10039:
---

I see. +1 on the patch.

> Allow disabling app submission from REST endpoints
> --
>
> Key: YARN-10039
> URL: https://issues.apache.org/jira/browse/YARN-10039
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-10039.001.patch
>
>
> Introduce a configuration which allows disabling /apps/new-application and 
> /apps POST endpoints. 
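A hedged sketch of what such a guard could look like (the property key, class, and handler shape below are invented for illustration and are not the actual patch):

```java
import java.util.Properties;

// Illustrative only: a boolean flag consulted by the REST submission endpoints.
// The key name is hypothetical; the real patch defines its own configuration name.
public class SubmissionGuard {
    public static final String KEY = "yarn.webapp.enable-rest-app-submissions";

    public static boolean submissionAllowed(Properties conf) {
        // Default true preserves existing behavior; operators opt in to disabling.
        return Boolean.parseBoolean(conf.getProperty(KEY, "true"));
    }

    // A POST handler would consult the flag and reject with 403 when disabled.
    public static int handleAppsPost(Properties conf) {
        return submissionAllowed(conf) ? 202 : 403;
    }
}
```

The design choice worth noting is the default: leaving submission enabled unless explicitly disabled keeps the change backward compatible for existing clusters.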






[jira] [Commented] (YARN-10043) FairOrderingPolicy Improvements

2019-12-18 Thread Manikandan R (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999360#comment-16999360
 ] 

Manikandan R commented on YARN-10043:
-

Will work on the specifics and post updates for feedback.

[~sunilg] [~leftnoteasy] [~wilfreds] Would it add value to the system? What do 
you think?

> FairOrderingPolicy Improvements
> ---
>
> Key: YARN-10043
> URL: https://issues.apache.org/jira/browse/YARN-10043
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Manikandan R
>Assignee: Manikandan R
>Priority: Major
>
> FairOrderingPolicy can be improved by adopting some of the relevant 
> approaches implemented in the FairSharePolicy of the FairScheduler (FS). This 
> improvement is significant in the FS-to-CS migration context.






[jira] [Updated] (YARN-10043) FairOrderingPolicy Improvements

2019-12-18 Thread Manikandan R (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-10043:

Parent: YARN-9698
Issue Type: Sub-task  (was: Bug)

> FairOrderingPolicy Improvements
> ---
>
> Key: YARN-10043
> URL: https://issues.apache.org/jira/browse/YARN-10043
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Manikandan R
>Assignee: Manikandan R
>Priority: Major
>
> FairOrderingPolicy can be improved by adopting some of the relevant 
> approaches implemented in the FairSharePolicy of the FairScheduler (FS). This 
> improvement is significant in the FS-to-CS migration context.






[jira] [Created] (YARN-10043) FairOrderingPolicy Improvements

2019-12-18 Thread Manikandan R (Jira)
Manikandan R created YARN-10043:
---

 Summary: FairOrderingPolicy Improvements
 Key: YARN-10043
 URL: https://issues.apache.org/jira/browse/YARN-10043
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Manikandan R
Assignee: Manikandan R


FairOrderingPolicy can be improved by adopting some of the relevant approaches 
implemented in the FairSharePolicy of the FairScheduler (FS). This improvement 
is significant in the FS-to-CS migration context.
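For context, a fair-share ordering policy schedules the least-served application first, comparing applications by their resource usage normalized by weight. A minimal standalone sketch of that idea (the record and method names are illustrative, not the actual Capacity Scheduler classes):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of fair-share ordering: least weighted usage first.
// App is an invented stand-in for a schedulable application entity.
public class FairOrderingSketch {
    public record App(String id, long usedMemoryMb, double weight) {
        // Usage normalized by weight: heavier-weighted apps tolerate more usage
        // before losing priority.
        double weightedUsage() {
            return usedMemoryMb / weight;
        }
    }

    public static List<App> order(List<App> apps) {
        List<App> sorted = new ArrayList<>(apps);
        sorted.sort(Comparator.comparingDouble(App::weightedUsage));
        return sorted;
    }
}
```

FairSharePolicy in the FairScheduler layers further tie-breakers (e.g. demand and start time) on top of this core comparison, which is the kind of refinement this ticket proposes porting.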






[jira] [Commented] (YARN-10033) TestProportionalCapacityPreemptionPolicy not initializing vcores for effective max resources

2019-12-18 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16999203#comment-16999203
 ] 

Eric Payne commented on YARN-10033:
---

Thanks [~ebadger]!

> TestProportionalCapacityPreemptionPolicy not initializing vcores for 
> effective max resources
> 
>
> Key: YARN-10033
> URL: https://issues.apache.org/jira/browse/YARN-10033
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, test
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4
>
> Attachments: YARN-10033.001.patch, YARN-10033.002.patch, 
> YARN-10033.003.patch
>
>
> TestProportionalCapacityPreemptionPolicy#testPreemptionWithVCoreResource is 
> preempting more containers than would happen on a real cluster.
> This is because the process for mocking CS queues in 
> {{TestProportionalCapacityPreemptionPolicy}} fails to take into consideration 
> vcores when mocking effective max resources.
> This causes miscalculations for how many vcores to preempt when the DRF is 
> being used in the test:
> {code:title=TempQueuePerPartition#offer}
> Resource absMaxCapIdealAssignedDelta = Resources.componentwiseMax(
> Resources.subtract(getMax(), idealAssigned),
> Resource.newInstance(0, 0));
> {code}
> In the above code, the preemption policy is offering resources to an 
> underserved queue. {{getMax()}} will use the effective max resource if it 
> exists. Since this test is mocking effective max resources, it will return 
> that value. However, since the mock doesn't include vcores, the test treats 
> memory as the dominant resource and awards too many preempted containers to 
> the underserved queue.






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: yarn_csi_tests_x86_64_grpc_1.26.0.log

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log, yarn_csi_tests_aarch64_grpc_1.26.0.log, 
> yarn_csi_tests_x86_64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and release 1.26.0 is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It is better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: (was: yarn_csi_tests_x86_64_grpc_1.26.0.log)

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log, yarn_csi_tests_aarch64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and release 1.26.0 is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It is better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: hadoop_build_x86_64_grpc_1.26.0.log

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log, yarn_csi_tests_aarch64_grpc_1.26.0.log, 
> yarn_csi_tests_x86_64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and release 1.26.0 is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It is better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: (was: hadoop_build_x86_64_grpc_1.26.0.log.log)

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log
>
>
> Currently, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
> grpc-protobuf-lite, grpc-stub and protoc-gen-grpc-java at version 1.15.1, but 
> "protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
> repo now supports aarch64, and release 1.26.0 is available in Maven Central.
> See:
> [https://github.com/grpc/grpc-java/pull/6496]
> [https://search.maven.org/search?q=g:io.grpc]
> It is better to upgrade the grpc-xxx dependencies to version 1.26.0. Both 
> x86_64 and aarch64 servers build OK according to my testing; please see the 
> attachments: the build log on aarch64, the build log on x86_64, the YARN CSI 
> test log on aarch64, and the YARN CSI test log on x86_64.






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Description: 
For now, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
grpc-protobuf-lite, grpc-stub, and protoc-gen-grpc-java at version 1.15.1, but 
"protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
repo now supports aarch64 and has released 1.26.0 to Maven Central.

see:

[https://github.com/grpc/grpc-java/pull/6496]

[https://search.maven.org/search?q=g:io.grpc]

It would be better to upgrade the grpc-xxx dependencies to 1.26.0. Both x86_64 
and aarch64 servers build OK according to my testing; please see the attached 
logs: building on aarch64, building on x86_64, the YARN CSI tests on aarch64, 
and the YARN CSI tests on x86_64.

  was:
For now, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
grpc-protobuf-lite, grpc-stub, and protoc-gen-grpc-java at version 1.15.1, but 
"protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
repo now supports aarch64 and has released 1.26.0 to Maven Central.

see:

[https://github.com/grpc/grpc-java/pull/6496]

[https://search.maven.org/search?q=g:io.grpc]

It would be better to upgrade the grpc-xxx dependencies to 1.26.0. Both x86_64 
and aarch64 servers build OK according to my testing.


> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: (was: yarn_csi_tests_aarch64_grpc_1.26.0.log)

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: yarn_csi_tests_aarch64_grpc_1.26.0.log

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: hadoop_build_aarch64_grpc_1.26.0.log

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_aarch64_grpc_1.26.0.log, 
> hadoop_build_x86_64_grpc_1.26.0.log.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: yarn_csi_tests_x86_64_grpc_1.26.0.log
hadoop_build_x86_64_grpc_1.26.0.log.log
yarn_csi_tests_aarch64_grpc_1.26.0.log
hadoop_build_aarch64_grpc_1.26.0.log

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_x86_64_grpc_1.26.0.log.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Attachment: (was: hadoop_build_aarch64_grpc_1.26.0.log)

> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
> Attachments: hadoop_build_x86_64_grpc_1.26.0.log.log, 
> yarn_csi_tests_aarch64_grpc_1.26.0.log, yarn_csi_tests_x86_64_grpc_1.26.0.log






[jira] [Commented] (YARN-10037) Upgrade build tools for YARN Web UI v2

2019-12-18 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16999002#comment-16999002
 ] 

Masatake Iwasaki commented on YARN-10037:
-

I tried several versions of Node.js. The highest version to which we can easily 
update is v8. Though [v8 will be EoL on 
2019-12-13|https://nodejs.org/en/about/releases/], how about updating to 
Node.js v8.17.0 (the latest v8 release) and Yarn 1.21.1 (latest) as a first aid 
here? I attached 001 for this without clicking "Submit Patch", since it will 
not invoke the {{-Pyarn-ui}} build.
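For context, the YARN UI v2 build drives Node.js and Yarn through the frontend-maven-plugin, so a bump like the one proposed above usually amounts to changing two version values in the pom. A hedged sketch follows; the plugin coordinates are real, but the exact property names and layout in Hadoop's hadoop-yarn-ui pom may differ:

```
<!-- Sketch: bump the Node.js/Yarn versions used by frontend-maven-plugin.
     This layout is illustrative; check the hadoop-yarn-ui pom for the
     actual configuration keys before applying. -->
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <configuration>
    <nodeVersion>v8.17.0</nodeVersion>
    <yarnVersion>v1.21.1</yarnVersion>
  </configuration>
</plugin>
```

Rebuilding with {{-Pyarn-ui}} would then download and use the pinned tool versions instead of the vulnerable ones.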

> Upgrade build tools for YARN Web UI v2
> --
>
> Key: YARN-10037
> URL: https://issues.apache.org/jira/browse/YARN-10037
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, security, yarn-ui-v2
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: YARN-10037.001.patch
>
>
> The versions of the build tools are too old and have some vulnerabilities. 
> Update.
> * node: 5.12.0 (latest: 12.13.1 LTS)
> * yarn: 0.21.3 (latest: 1.12.1)






[jira] [Updated] (YARN-10037) Upgrade build tools for YARN Web UI v2

2019-12-18 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated YARN-10037:

Attachment: YARN-10037.001.patch

> Upgrade build tools for YARN Web UI v2
> --
>
> Key: YARN-10037
> URL: https://issues.apache.org/jira/browse/YARN-10037
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, security, yarn-ui-v2
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: YARN-10037.001.patch






[jira] [Commented] (YARN-10041) Should not use AbstractPath to create unix domain socket

2019-12-18 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16998975#comment-16998975
 ] 

Zhankun Tang commented on YARN-10041:
-

[~bzhaoopenstack], thanks for catching this. Would you like to provide a patch 
for this?

> Should not use AbstractPath to create unix domain socket
> 
>
> Key: YARN-10041
> URL: https://issues.apache.org/jira/browse/YARN-10041
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
> Environment: X86/ARM
> OS: ubuntu 1804
> java: java8
>Reporter: zhao bo
>Priority: Major
>
> This issue was hit in a very coincidental scenario, which happened when we 
> tested on ARM.
> The test case is:
> org.apache.hadoop.yarn.csi.client.TestCsiClient.testIdentityService
> The steps:
> If we put the Hadoop source code in a very deep directory path, this case 
> passes on the first run but always fails on the following tries.
> The official Jenkins doesn't cover this, because it runs in a Docker 
> container and runs the test only once, so it always looks like it passes.
> The key point is that the UNIX domain socket path exceeds the UNIX_PATH_MAX 
> limit (108 bytes). Please see [1].
> This issue is very difficult to locate, as binding always fails when we run 
> the test.
> Also, the Hadoop code on the trunk branch uses the absolute path to create 
> the UNIX domain socket file; the source code is [2]. That cannot prevent 
> hitting this issue. It would be good to provide a second way to set the 
> socket path to '/tmp' or any other location when running this test.
> [1] 
> [https://serverfault.com/questions/641347/check-if-a-path-exceeds-maximum-for-unix-domain-socket]
> [2] 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/test/java/org/apache/hadoop/yarn/csi/client/TestCsiClient.java#L48]






[jira] [Updated] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated YARN-10042:

Description: 
For now, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
grpc-protobuf-lite, grpc-stub, and protoc-gen-grpc-java at version 1.15.1, but 
"protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
repo now supports aarch64 and has released 1.26.0 to Maven Central.

see:

[https://github.com/grpc/grpc-java/pull/6496]

[https://search.maven.org/search?q=g:io.grpc]

It would be better to upgrade the grpc-xxx dependencies to 1.26.0. Both x86_64 
and aarch64 servers build OK according to my testing.

  was:
For now, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
grpc-protobuf-lite, grpc-stub, and protoc-gen-grpc-java at version 1.15.1, but 
"protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
repo now supports aarch64 and has released 1.26.0 to Maven Central.

see:

[https://github.com/grpc/grpc-java/pull/6496]

[https://search.maven.org/search?q=g:io.grpc]

It would be better to upgrade the grpc-xxx dependencies to 1.26.0. Both x86_64 
and aarch64 servers build OK according to my testing.


> Upgrade grpc-xxx dependencies to 1.26.0
> --
>
> Key: YARN-10042
> URL: https://issues.apache.org/jira/browse/YARN-10042
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major






[jira] [Created] (YARN-10042) Upgrade grpc-xxx dependencies to 1.26.0

2019-12-18 Thread liusheng (Jira)
liusheng created YARN-10042:
---

 Summary: Upgrade grpc-xxx dependencies to 1.26.0
 Key: YARN-10042
 URL: https://issues.apache.org/jira/browse/YARN-10042
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: liusheng


For now, Hadoop YARN uses grpc-context, grpc-core, grpc-netty, grpc-protobuf, 
grpc-protobuf-lite, grpc-stub, and protoc-gen-grpc-java at version 1.15.1, but 
"protoc-gen-grpc-java" is not supported on the aarch64 platform. The grpc-java 
repo now supports aarch64 and has released 1.26.0 to Maven Central.

see:

[https://github.com/grpc/grpc-java/pull/6496]

[https://search.maven.org/search?q=g:io.grpc]

It would be better to upgrade the grpc-xxx dependencies to 1.26.0. Both x86_64 
and aarch64 servers build OK according to my testing.
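A version bump like this is typically a one-property change when the grpc artifacts are managed centrally. A minimal Maven sketch, assuming a single version property and a dependencyManagement section; the property name `grpc.version` and this layout are illustrative, not necessarily how Hadoop's poms are actually organized:

```
<!-- Sketch: pin all io.grpc artifacts to one version via a property.
     Property name and layout are illustrative; Hadoop's poms may differ. -->
<properties>
  <grpc.version>1.26.0</grpc.version>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-core</artifactId>
      <version>${grpc.version}</version>
    </dependency>
    <dependency>
      <groupId>io.grpc</groupId>
      <artifactId>grpc-netty</artifactId>
      <version>${grpc.version}</version>
    </dependency>
    <!-- likewise for grpc-context, grpc-protobuf, grpc-protobuf-lite,
         grpc-stub, and the protoc-gen-grpc-java protoc plugin -->
  </dependencies>
</dependencyManagement>
```

After the bump, `mvn dependency:tree | grep io.grpc` can confirm that every grpc artifact resolves to 1.26.0 on both architectures.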






[jira] [Created] (YARN-10041) Should not use AbstractPath to create unix domain socket

2019-12-18 Thread zhao bo (Jira)
zhao bo created YARN-10041:
--

 Summary: Should not use AbstractPath to create unix domain socket
 Key: YARN-10041
 URL: https://issues.apache.org/jira/browse/YARN-10041
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
 Environment: X86/ARM

OS: ubuntu 1804

java: java8
Reporter: zhao bo


This issue was hit in a very coincidental scenario, which happened when we 
tested on ARM.

The test case is:

org.apache.hadoop.yarn.csi.client.TestCsiClient.testIdentityService

The steps:

If we put the Hadoop source code in a very deep directory path, this case 
passes on the first run but always fails on the following tries.

The official Jenkins doesn't cover this, because it runs in a Docker container 
and runs the test only once, so it always looks like it passes.

The key point is that the UNIX domain socket path exceeds the UNIX_PATH_MAX 
limit (108 bytes). Please see [1].

This issue is very difficult to locate, as binding always fails when we run 
the test.

Also, the Hadoop code on the trunk branch uses the absolute path to create the 
UNIX domain socket file; the source code is [2]. That cannot prevent hitting 
this issue. It would be good to provide a second way to set the socket path to 
'/tmp' or any other location when running this test.

[1] 
[https://serverfault.com/questions/641347/check-if-a-path-exceeds-maximum-for-unix-domain-socket]

[2] 
[https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/src/test/java/org/apache/hadoop/yarn/csi/client/TestCsiClient.java#L48]
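The 108-byte limit can be checked up front before binding. Below is a minimal, self-contained sketch of the workaround the reporter suggests; the class and method names are illustrative, not Hadoop's actual test code. It measures a candidate socket path against UNIX_PATH_MAX and falls back to a freshly created short temp directory (java.io.tmpdir, usually /tmp) when the path is too long:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class SocketPathCheck {
    // Linux sun_path limit, including the terminating NUL byte.
    static final int UNIX_PATH_MAX = 108;

    // True if the path (plus its NUL terminator) fits in sun_path.
    static boolean fitsUnixPathMax(String path) {
        return path.getBytes(StandardCharsets.UTF_8).length + 1 <= UNIX_PATH_MAX;
    }

    // Use the preferred path if it fits; otherwise fall back to a short
    // temp directory so bind() does not fail on a deep source checkout.
    static String chooseSocketPath(String preferred) throws IOException {
        if (fitsUnixPathMax(preferred)) {
            return preferred;
        }
        return Files.createTempDirectory("csi-").resolve("csi.sock").toString();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a deeply nested build directory (Java 8 compatible).
        StringBuilder deep = new StringBuilder();
        for (int i = 0; i < 20; i++) {
            deep.append("/very/deep/source/dir");
        }
        String longPath = deep.append("/csi.sock").toString();
        System.out.println("long path fits: " + fitsUnixPathMax(longPath));
        System.out.println("fallback fits: "
            + fitsUnixPathMax(chooseSocketPath(longPath)));
    }
}
```

This is exactly why the test passes once (a short path or fresh state) but fails on reruns in a deep checkout: the bind error surfaces only when the computed path exceeds the limit.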


