[jira] [Commented] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16716067#comment-16716067
 ] 

Hadoop QA commented on YARN-9033:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951298/YARN-9033-trunk.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8639a6c36685 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3ff8580 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22832/testReport/ |
| Max. process+thread count | 422 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Updated] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9033:
---
Attachment: YARN-9033-trunk.002.patch

> ResourceHandlerChain#bootstrap is invoked twice during NM start if 
> LinuxContainerExecutor enabled
> -
>
> Key: YARN-9033
> URL: https://issues.apache.org/jira/browse/YARN-9033
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-9033-trunk.001.patch, YARN-9033-trunk.002.patch
>
>
> ResourceHandlerChain#bootstrap is now always invoked in the NM's 
> ContainerScheduler#serviceInit (a call introduced by YARN-7715).
> So if LCE is enabled, ResourceHandlerChain#bootstrap is invoked first by the 
> executor and then invoked again in ContainerScheduler#serviceInit.
> But the "updateContainer" invocation from YARN-7715 depends on the 
> container's cgroups path being created in the "preStart" method, which only 
> happens when "LinuxContainerExecutor" is used. So the bootstrap of 
> ResourceHandlerChain shouldn't happen in ContainerScheduler#serviceInit.
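
Independent of the actual patch, the failure mode above is a classic 
double-initialization bug. A minimal, self-contained sketch of one defensive 
technique, an idempotent bootstrap guard, is below; all class and method names 
are hypothetical and are not taken from the YARN-9033 patch:

{code:java}
// Hypothetical sketch: make bootstrap a no-op after the first call, so a
// second invocation (e.g. from ContainerScheduler#serviceInit) is harmless.
import java.util.concurrent.atomic.AtomicBoolean;

public class ResourceHandlerChainStub {
  private final AtomicBoolean bootstrapped = new AtomicBoolean(false);

  /** Runs the real bootstrap logic only on the first invocation. */
  public boolean bootstrap() {
    if (!bootstrapped.compareAndSet(false, true)) {
      System.out.println("bootstrap skipped: already initialized");
      return false;
    }
    System.out.println("bootstrap executed");
    return true;
  }

  public static void main(String[] args) {
    ResourceHandlerChainStub chain = new ResourceHandlerChainStub();
    chain.bootstrap(); // first call, e.g. from LinuxContainerExecutor init
    chain.bootstrap(); // second call, e.g. from serviceInit -> skipped
  }
}
{code}

The description argues for the other route: avoid issuing the second call from 
ContainerScheduler#serviceInit at all, rather than tolerating it.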



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9106) Add option to graceful decommission to not wait for applications

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715964#comment-16715964
 ] 

Hadoop QA commented on YARN-9106:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 20s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 34 new + 215 unchanged - 0 fixed = 249 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 33s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9106 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951275/YARN-9106.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  

[jira] [Commented] (YARN-9107) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715930#comment-16715930
 ] 

Hadoop QA commented on YARN-9107:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
18s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-client-check-invariants in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-client-check-test-invariants in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951287/YARN-9107.v1.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shadedclient  
shellcheck  shelldocs  |
| uname | Linux 17d480518900 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3ff8580 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22831/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22831/testReport/ |
| Max. process+thread count | 448 (vs. ulimit of 1) |
| modules | C: hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-check-test-invariants U: 
hadoop-client-modules |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22831/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: YARN-9107
> URL: https://issues.apache.org/jira/browse/YARN-9107
> Project: Hadoop YARN
>  Issue Type: Bug
>   

[jira] [Commented] (YARN-9051) Integrate multiple CustomResourceTypesConfigurationProvider implementations into one

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715882#comment-16715882
 ] 

Hadoop QA commented on YARN-9051:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
44s{color} | {color:green} root: The patch generated 0 new + 281 unchanged - 2 
fixed = 281 total (was 283) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m  0s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
43s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
13s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}123m 48s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
49s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}368m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl |
|   | hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9051 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-9107) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-10 Thread Brian Grunkemeyer (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Grunkemeyer updated YARN-9107:

Description: 
Building Hadoop fails on Windows due to a few shell scripts that make invalid 
assumptions:

1) Colon shouldn't be used to separate multiple paths in command line 
parameters. Colons occur in Windows paths.

2) Shell scripts that rely on running external tools need to deal with carriage 
return - line feed differences (lines ending in \r\n, not just \n)

  was:
Building Hadoop fails on Windows due to a few shell scripts that make invalid 
assumptions:

1) Colon shouldn't be used to separate file names in command line parameters. 
Colons occur in Windows path names.

2) Shell scripts that rely on running external tools need to deal with carriage 
return - line feed differences (lines ending in \r\n, not just \n)


> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: YARN-9107
> URL: https://issues.apache.org/jira/browse/YARN-9107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, newbie, windows
> Attachments: YARN-9107.v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)
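
As an aside, both pitfalls have direct analogues outside bash. A small Java 
illustration of the portable alternatives follows; this is generic advice, not 
code from the YARN-9107 patch:

{code:java}
// Demonstrates (1) splitting path lists on the platform separator instead of
// a hardcoded ':', and (2) tolerating \r line endings from external tools.
import java.io.File;

public class PortabilityDemo {
  public static void main(String[] args) {
    // File.pathSeparator is ";" on Windows and ":" on Unix, so entries such
    // as C:\lib\foo.jar are not split apart on their drive-letter colon.
    String classpath = System.getProperty("java.class.path");
    for (String entry : classpath.split(File.pathSeparator)) {
      System.out.println(entry);
    }

    // Strip a trailing carriage return so "foo\r" compares equal to "foo".
    String line = "hadoop-client-api.jar\r";
    String normalized = line.replaceAll("\r$", "");
    System.out.println(normalized.equals("hadoop-client-api.jar")); // true
  }
}
{code}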



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9107) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-10 Thread Brian Grunkemeyer (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Grunkemeyer updated YARN-9107:

Summary: Jar validation bash scripts don't work on Windows due to platform 
differences (colons in paths, \r\n)  (was: Bash scripts don't work on Windows 
due to platform differences (colons in paths, \r\n))

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: YARN-9107
> URL: https://issues.apache.org/jira/browse/YARN-9107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, newbie, windows
> Attachments: YARN-9107.v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate file names in command line parameters. 
> Colons occur in Windows path names.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9107) Bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-10 Thread Brian Grunkemeyer (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brian Grunkemeyer updated YARN-9107:

Attachment: YARN-9107.v1.patch

> Bash scripts don't work on Windows due to platform differences (colons in 
> paths, \r\n)
> --
>
> Key: YARN-9107
> URL: https://issues.apache.org/jira/browse/YARN-9107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, newbie, windows
> Attachments: YARN-9107.v1.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate file names in command line parameters. 
> Colons occur in Windows path names.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9107) Bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-10 Thread Brian Grunkemeyer (JIRA)
Brian Grunkemeyer created YARN-9107:
---

 Summary: Bash scripts don't work on Windows due to platform 
differences (colons in paths, \r\n)
 Key: YARN-9107
 URL: https://issues.apache.org/jira/browse/YARN-9107
 Project: Hadoop YARN
  Issue Type: Bug
  Components: build
Affects Versions: 3.2.0, 3.3.0
 Environment: Windows 10

Visual Studio 2017
Reporter: Brian Grunkemeyer


Building Hadoop fails on Windows due to a few shell scripts that make invalid 
assumptions:

1) Colon shouldn't be used to separate file names in command line parameters. 
Colons occur in Windows path names.

2) Shell scripts that rely on running external tools need to deal with carriage 
return - line feed differences (lines ending in \r\n, not just \n)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9089) Add Terminal Link to Service component instance page for UI2

2018-12-10 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715796#comment-16715796
 ] 

Eric Yang commented on YARN-9089:
-

[~akhilpb] Thank you for the review.  Agreed that models are not meant for code 
execution.  The code will be revised.  Is the intent to use loader.js to set 
the ENV.[namespace]=[ajax] value for YARN configuration?  Is there a convention 
to follow?  An example might help with the coding style.  Thanks.

{quote}Do we need this code? Since you are adding userInfo to 
yarn-component-instance/info route, it should be available in info.hbs page in 
model object. I am not sure passing params to outlet works or not in 
ember.{quote}

You are right.  The parameter passing can be removed.  Thanks.

> Add Terminal Link to Service component instance page for UI2
> 
>
> Key: YARN-9089
> URL: https://issues.apache.org/jira/browse/YARN-9089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: 3.3.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9089.001.patch
>
>
> In UI2, Service > Component > Component Instance uses the Timeline server to 
> aggregate information about a Service component instance.  The Timeline server 
> does not have full information such as the port number of the node manager or 
> the web protocol used by the node manager.  Some changes are required to 
> aggregate node manager information into the Timeline server in order to 
> compute the Terminal link.  To reduce the scope of YARN-8914, it is better to 
> file this as a separate task.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715702#comment-16715702
 ] 

Hadoop QA commented on YARN-9084:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  0s{color} 
| {color:red} hadoop-yarn-services-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.TestYarnNativeServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951265/YARN-9084.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a7c8e417c04b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 80e59e7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22829/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
| 

[jira] [Created] (YARN-9106) Add option to graceful decommission to not wait for applications

2018-12-10 Thread Mikayla Konst (JIRA)
Mikayla Konst created YARN-9106:
---

 Summary: Add option to graceful decommission to not wait for 
applications
 Key: YARN-9106
 URL: https://issues.apache.org/jira/browse/YARN-9106
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Mikayla Konst


Add property 
yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications.

If true (the default), the resource manager waits for all containers, as well 
as all applications associated with those containers, to finish before 
gracefully decommissioning a node.

If false, the resource manager waits only for containers, not applications, to 
finish. For map-only jobs, or other jobs in which mappers do not need to serve 
shuffle data, this allows nodes to be decommissioned as soon as their 
containers finish rather than when the whole job is done.

Add property 
yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-app-masters.

If false, then while the resource manager waits for all containers on a node to 
finish during graceful decommission, it will not wait for app master containers 
to finish. Defaults to true. This property should only be set to false if app 
master failure is recoverable.
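
For concreteness, a sketch of how RM-side code could read the two proposed 
keys; the key names come from this proposal and are not in any released 
yarn-default.xml, and the defaults shown follow the description above:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DecommissionConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Proposed key: wait for applications as well as containers (default true).
    boolean waitForApplications = conf.getBoolean(
        "yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-applications",
        true);
    // Proposed key: also wait for app master containers (default true).
    boolean waitForAppMasters = conf.getBoolean(
        "yarn.resourcemanager.decommissioning-nodes-watcher.wait-for-app-masters",
        true);
    System.out.println("wait-for-applications = " + waitForApplications);
    System.out.println("wait-for-app-masters  = " + waitForAppMasters);
  }
}
{code}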



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715607#comment-16715607
 ] 

Hadoop QA commented on YARN-9075:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 56s{color} | {color:orange} root: The patch generated 12 new + 434 unchanged 
- 17 fixed = 446 total (was 451) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
54s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-mapreduce-client-shuffle in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951248/YARN-9075.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Updated] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9084:

Attachment: YARN-9084.001.patch

> Service Upgrade: With default readiness check, the status of upgrade is 
> reported to be successful prematurely
> -
>
> Key: YARN-9084
> URL: https://issues.apache.org/jira/browse/YARN-9084
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-9084.001.patch
>
>
> With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
> the YARN registry before upgrade. However, it is observed that after the 
> container is launched again as part of reinit, the ContainerStatus received 
> from the NM has an IP and host even though the container fails as soon as it 
> is launched.
> On the YARN Service side, this results in the component instance transitioning 
> to the READY state because the readiness check looks only for the presence of 
> an IP address.
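
To make the failure mode concrete, here is a tiny hypothetical readiness 
predicate (not the YARN-9084 patch; all names invented) that avoids the 
premature READY transition by requiring more than a non-null IP:

{code:java}
// A readiness check that ignores a stale IP: the IP must have been observed
// after the relaunch, and the container must still be running.
public class ReadinessCheckDemo {
  enum ContainerState { RUNNING, FAILED }

  static boolean isReady(String ip, ContainerState state,
                         long launchTimeMillis, long ipObservedMillis) {
    return ip != null
        && state == ContainerState.RUNNING
        && ipObservedMillis >= launchTimeMillis;
  }

  public static void main(String[] args) {
    // IP left over from the pre-upgrade container: observed before relaunch.
    System.out.println(isReady("10.0.0.5", ContainerState.RUNNING, 200L, 100L)); // false
    // Fresh IP observed after relaunch with a live container.
    System.out.println(isReady("10.0.0.5", ContainerState.RUNNING, 200L, 300L)); // true
  }
}
{code}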



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9084:

Attachment: (was: YARN-9084.001.patch)

> Service Upgrade: With default readiness check, the status of upgrade is 
> reported to be successful prematurely
> -
>
> Key: YARN-9084
> URL: https://issues.apache.org/jira/browse/YARN-9084
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
> the YARN registry before upgrade. However, it is observed that after the 
> container is launched again as part of reinit, the ContainerStatus received 
> from the NM has an IP and host even though the container fails as soon as it 
> is launched.
> On the YARN Service side, this results in the component instance transitioning 
> to the READY state because the readiness check looks only for the presence of 
> an IP address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9084:

Attachment: YARN-9084.001.patch

> Service Upgrade: With default readiness check, the status of upgrade is 
> reported to be successful prematurely
> -
>
> Key: YARN-9084
> URL: https://issues.apache.org/jira/browse/YARN-9084
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
> the YARN registry before upgrade. However, it is observed that after the 
> container is launched again as part of reinit, the ContainerStatus received 
> from the NM has an IP and host even though the container fails as soon as it 
> is launched.
> On the YARN Service side, this results in the component instance transitioning 
> to the READY state because the readiness check looks only for the presence of 
> an IP address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9084:

Attachment: (was: YARN-9084.001.patch)

> Service Upgrade: With default readiness check, the status of upgrade is 
> reported to be successful prematurely
> -
>
> Key: YARN-9084
> URL: https://issues.apache.org/jira/browse/YARN-9084
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
> the YARN registry before upgrade. However, it is observed that after the 
> container is launched again as part of reinit, the ContainerStatus received 
> from the NM has an IP and host even though the container fails as soon as it 
> is launched.
> On the YARN Service side, this results in the component instance transitioning 
> to the READY state because the readiness check looks only for the presence of 
> an IP address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9084:

Attachment: YARN-9084.001.patch

> Service Upgrade: With default readiness check, the status of upgrade is 
> reported to be successful prematurely
> -
>
> Key: YARN-9084
> URL: https://issues.apache.org/jira/browse/YARN-9084
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-9084.001.patch
>
>
> With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
> the YARN registry before upgrade. However, it is observed that after the 
> container is launched again as part of reinit, the ContainerStatus received 
> from the NM has an IP and host even though the container fails as soon as it 
> is launched.
> On the YARN Service side, this results in the component instance transitioning 
> to the READY state because the readiness check looks only for the presence of 
> an IP address.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715563#comment-16715563
 ] 

Hadoop QA commented on YARN-6523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 37s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 8 new + 393 unchanged - 16 fixed = 401 total (was 409) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
44s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-6523 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951228/YARN-6523.011.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Commented] (YARN-9060) [YARN-8851] Phase 1 - Support device isolation in native container-executor

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715476#comment-16715476
 ] 

Hadoop QA commented on YARN-9060:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
46m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}132m 33s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
21s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}230m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951216/YARN-9060-trunk.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b4a7671e2282 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17a8708 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22824/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22824/testReport/ |
| Max. process+thread count | 957 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22824/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-8851] Phase 1 - Support device isolation in native container-executor
> ---
>
> Key: YARN-9060
> URL: https://issues.apache.org/jira/browse/YARN-9060

[jira] [Commented] (YARN-9087) Improve logging for initialization of Resource plugins

2018-12-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715463#comment-16715463
 ] 

Hudson commented on YARN-9087:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15582 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15582/])
YARN-9087. Improve logging for initialization of Resource plugins. (haibochen: 
rev ac578c0e82a5ba24bf59e9e58f91a3eb65c60cfe)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerChain.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePluginManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/ResourceHandlerModule.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/GpuResourcePlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsCpuResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsMemoryResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TrafficControlBandwidthHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DevicePluginAdapter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/DeviceResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/NetworkPacketTaggingHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsBlkioResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaResourcePlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/numa/NumaResourceHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/GpuResourceHandlerImpl.java


> Improve logging for initialization of Resource plugins
> --
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9087.001.patch, YARN-9087.002.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}

[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-10 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715459#comment-16715459
 ] 

Haibo Chen commented on YARN-9008:
--

{quote}I took that from {{Client.java}}, where it's also called {{appname}}. 
Shall I rename it anyway?
{quote}
Let's keep it then. I was not aware of that.
{quote}I think "--lib" would imply that we deal with jar files. Since it's a 
somewhat generic YARN application
{quote}
Fair enough.
{quote}Unfortunately that piece of code is located in a {{forEach}} lambda and 
a {{run()}} method which cannot be declared to throw {{IOException}}.
{quote}
I see, that makes sense. But we could do better with {{UncheckedIOException}} 
to be more specific. I think we can also get rid of the RuntimeException 
outside of the lambda in ApplicationMaster.java:
{code:java}
FileSystem fs;
try {
  fs = FileSystem.get(conf);
} catch (IOException e) {
  throw new RuntimeException("Cannot get FileSystem", e);
}
{code}
IllegalArgumentException does not sound like a good fit in Client.java when 
files are not readable or do not exist. UncheckedIOException can help there 
too.
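A minimal sketch of the {{UncheckedIOException}} variant (an illustrative 
helper, not the actual patch):
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public final class FileSystemAccess {
  private FileSystemAccess() {}

  // Wrapping the checked IOException keeps the call usable inside a
  // forEach lambda while still signalling that the failure was I/O.
  public static FileSystem getFileSystem(Configuration conf) {
    try {
      return FileSystem.get(conf);
    } catch (IOException e) {
      throw new UncheckedIOException("Cannot get FileSystem", e);
    }
  }
}
{code}
Callers that care can still catch {{UncheckedIOException}} and unwrap 
{{getCause()}} to recover the original {{IOException}}.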

 

Please also address the license issue with the two newly added text files for 
tests.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch, YARN-9008-004.patch, YARN-9008-005.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you 
> define files on the command line that you wish to be localized remotely. 
> This can be extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9075) Dynamically add or remove auxiliary services

2018-12-10 Thread Billie Rinaldi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-9075:
-
Attachment: YARN-9075.003.patch

> Dynamically add or remove auxiliary services
> 
>
> Key: YARN-9075
> URL: https://issues.apache.org/jira/browse/YARN-9075
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Major
> Attachments: YARN-9075.001.patch, YARN-9075.002.patch, 
> YARN-9075.003.patch, YARN-9075_Dynamic_Aux_Services_V1.pdf
>
>
> It would be useful to support adding, removing, or updating auxiliary 
> services without requiring a restart of NMs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8738) FairScheduler should not parse negative maxResources or minResources values as positive

2018-12-10 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715424#comment-16715424
 ] 

Hudson commented on YARN-8738:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15581 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15581/])
YARN-8738. FairScheduler should not parse negative maxResources or (haibochen: 
rev 64411a6ff74ef87756aae12224ff9c25f7e2a143)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerConfiguration.java


> FairScheduler should not parse negative maxResources or minResources values 
> as positive
> ---
>
> Key: YARN-8738
> URL: https://issues.apache.org/jira/browse/YARN-8738
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Sen Zhao
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8738.001.patch, YARN-8738.002.patch, 
> YARN-8738.003.patch
>
>
> If maxResources or minResources is configured as a negative number, the value 
> will be positive after parsing.
> If this is a problem, I will fix it. If not, the way 
> FairSchedulerConfiguration#parseNewStyleResource parses negative numbers 
> should be made consistent with parseOldStyleResource.
> cc:[~templedf], [~leftnoteasy]
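For illustration only (a hypothetical parser, not the actual 
FairSchedulerConfiguration code), this is how a digits-only pattern silently 
drops the sign of a configured value:
{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SignDropDemo {
  public static void main(String[] args) {
    // The capture group matches only digits, so it never sees the minus.
    Pattern memory = Pattern.compile("(\\d+)\\s*mb");
    Matcher m = memory.matcher("-1024 mb");
    if (m.find()) {
      // Prints 1024: the configured -1024 mb is parsed as a positive value.
      System.out.println(Integer.parseInt(m.group(1)));
    }
  }
}
{code}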



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9087) Improve logging for initialization of Resource plugins

2018-12-10 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-9087:
-
Fix Version/s: 3.3.0

> Improve logging for initialization of Resource plugins
> --
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9087.001.patch, YARN-9087.002.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all implementations of {{ResourcePlugin}}, 
> as they are printed in {{ResourcePluginManager#initialize}}
> - Added toString to all implementations of {{ResourceHandler}}, 
> as they are printed as fields in {{LinuxContainerExecutor#init}}
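A minimal sketch of the kind of toString these bullets refer to (class and 
field names are illustrative, not the committed code):
{code:java}
// Illustrative only: give a handler a readable identity so that logging
// the chain in LinuxContainerExecutor#init prints useful state instead
// of the default Class@hashcode form.
public class ExampleResourceHandler {
  private final String cgroupsMountPath; // hypothetical field

  public ExampleResourceHandler(String cgroupsMountPath) {
    this.cgroupsMountPath = cgroupsMountPath;
  }

  @Override
  public String toString() {
    return getClass().getSimpleName()
        + "{cgroupsMountPath=" + cgroupsMountPath + "}";
  }
}
{code}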



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9087) Better logging for initialization of Resource plugins

2018-12-10 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715401#comment-16715401
 ] 

Haibo Chen commented on YARN-9087:
--

+1 on the latest patch.

> Better logging for initialization of Resource plugins
> -
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch, YARN-9087.002.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all implementations of {{ResourcePlugin}}, 
> as they are printed in {{ResourcePluginManager#initialize}}
> - Added toString to all implementations of {{ResourceHandler}}, 
> as they are printed as fields in {{LinuxContainerExecutor#init}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9087) Improve logging for initialization of Resource plugins

2018-12-10 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-9087:
-
Summary: Improve logging for initialization of Resource plugins  (was: 
Better logging for initialization of Resource plugins)

> Improve logging for initialization of Resource plugins
> --
>
> Key: YARN-9087
> URL: https://issues.apache.org/jira/browse/YARN-9087
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9087.001.patch, YARN-9087.002.patch
>
>
> The patch includes the following enhancements for logging: 
> - Logging initializer code of resource handlers in 
> {{LinuxContainerExecutor#init}}
> - Logging initializer code of resource plugins in 
> {{ResourcePluginManager#initialize}}
> - Added toString to {{ResourceHandlerChain}}
> - Added toString to all implementations of {{ResourcePlugin}}, 
> as they are printed in {{ResourcePluginManager#initialize}}
> - Added toString to all implementations of {{ResourceHandler}}, 
> as they are printed as fields in {{LinuxContainerExecutor#init}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8738) FairScheduler configures maxResources or minResources as negative, the value parse to a positive number.

2018-12-10 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715375#comment-16715375
 ] 

Haibo Chen commented on YARN-8738:
--

+1 on the latest patch. Checking it in shortly.

> FairScheduler configures maxResources or minResources as negative, the value 
> parse to a positive number.
> 
>
> Key: YARN-8738
> URL: https://issues.apache.org/jira/browse/YARN-8738
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Sen Zhao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8738.001.patch, YARN-8738.002.patch, 
> YARN-8738.003.patch
>
>
> If maxResources or minResources is configured as a negative number, the value 
> will be positive after parsing.
> If this is a problem, I will fix it. If not, the way 
> FairSchedulerConfiguration#parseNewStyleResource parses negative numbers 
> should be made consistent with parseOldStyleResource.
> cc:[~templedf], [~leftnoteasy]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8738) FairScheduler should not parse negative maxResources or minResources values as positive

2018-12-10 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8738:
-
Summary: FairScheduler should not parse negative maxResources or 
minResources values as positive  (was: FairScheduler configures maxResources or 
minResources as negative, the value parse to a positive number.)

> FairScheduler should not parse negative maxResources or minResources values 
> as positive
> ---
>
> Key: YARN-8738
> URL: https://issues.apache.org/jira/browse/YARN-8738
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.2.0
>Reporter: Sen Zhao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8738.001.patch, YARN-8738.002.patch, 
> YARN-8738.003.patch
>
>
> If maxResources or minResources is configured as a negative number, the value 
> will be positive after parsing.
> If this is a problem, I will fix it. If not, the way 
> FairSchedulerConfiguration#parseNewStyleResource parses negative numbers 
> should be made consistent with parseOldStyleResource.
> cc:[~templedf], [~leftnoteasy]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715350#comment-16715350
 ] 

Hadoop QA commented on YARN-9033:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
50s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951229/YARN-9033-trunk.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0ffc8913cfb4 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17a8708 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22826/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22826/testReport/ |
| Max. process+thread count | 308 

[jira] [Updated] (YARN-9084) Service Upgrade: With default readiness check, the status of upgrade is reported to be successful prematurely

2018-12-10 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-9084:

Description: 
With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
the YARN registry before an upgrade. However, it is observed that after the 
container is launched again as part of reinit, the ContainerStatus received 
from the NM has an IP and host even though the container fails as soon as it 
is launched.

On the YARN Service side this results in the component instance transitioning 
to the READY state, because the default readiness check looks only at the 
presence of an IP address.



> Service Upgrade: With default readiness check, the status of upgrade is 
> reported to be successful prematurely
> -
>
> Key: YARN-9084
> URL: https://issues.apache.org/jira/browse/YARN-9084
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> With YARN-9071 we clear the IP address and hostname from the AM, the NM, and 
> the YARN registry before an upgrade. However, it is observed that after the 
> container is launched again as part of reinit, the ContainerStatus received 
> from the NM has an IP and host even though the container fails as soon as it 
> is launched.
> On the YARN Service side this results in the component instance transitioning 
> to the READY state, because the default readiness check looks only at the 
> presence of an IP address.
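A condensed sketch of the default readiness check being described (simplified; 
it assumes the {{ContainerStatus#getIPs}} accessor and is not the actual YARN 
Services code):
{code:java}
import java.util.List;

import org.apache.hadoop.yarn.api.records.ContainerStatus;

public final class DefaultReadinessSketch {
  private DefaultReadinessSketch() {}

  // Simplified: READY as soon as an IP is reported. After a reinit this
  // can fire before the relaunched process has proven healthy, which is
  // the premature-success problem described above.
  public static boolean isReady(ContainerStatus status) {
    List<String> ips = status.getIPs();
    return ips != null && !ips.isEmpty();
  }
}
{code}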



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9033:
---
Attachment: YARN-9033-trunk.001.patch

> ResourceHandlerChain#bootstrap is invoked twice during NM start if 
> LinuxContainerExecutor enabled
> -
>
> Key: YARN-9033
> URL: https://issues.apache.org/jira/browse/YARN-9033
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-9033-trunk.001.patch
>
>
> The ResourceHandlerChain#bootstrap will always be invoked in the NM's 
> ContainerScheduler#serviceInit (introduced by YARN-7715).
> So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked 
> first and then invoked again in ContainerScheduler#serviceInit.
> But actually, the "updateContainer" invocation in YARN-7715 depends on the 
> containerId's cgroups path creation in the "preStart" method, which only 
> happens when we use "LinuxContainerExecutor". So the bootstrap of 
> ResourceHandlerChain shouldn't happen in ContainerScheduler#serviceInit.
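A hypothetical sketch of the guard this implies (names simplified, not the 
actual patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public final class BootstrapGuardSketch {
  private BootstrapGuardSketch() {}

  // Hypothetical: whether ContainerScheduler should bootstrap the
  // ResourceHandlerChain itself. When LCE is the configured executor,
  // LCE's own init already bootstraps the chain, so doing it again in
  // ContainerScheduler#serviceInit is the double invocation described.
  public static boolean shouldBootstrapFromScheduler(Configuration conf) {
    String executor = conf.get(YarnConfiguration.NM_CONTAINER_EXECUTOR);
    return executor == null || !executor.endsWith("LinuxContainerExecutor");
  }
}
{code}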



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9033:
---
Description: 
The ResourceHandlerChain#bootstrap will always be invoked in the NM's 
ContainerScheduler#serviceInit (introduced by YARN-7715).

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But actually, the "updateContainer" invocation in YARN-7715 depends on the 
containerId's cgroups path creation in the "preStart" method, which only 
happens when we use "LinuxContainerExecutor". So the bootstrap of 
ResourceHandlerChain shouldn't happen in ContainerScheduler#serviceInit.

  was:
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But in the other hand, the "updateContainer" invocation in YARN-7715 depend on 
containerId's cgroups path creation in "preStart" method which only happens 
when we use "LinuxContainerExecutor". So the bootstrap of ResourceHandlerChain 
shouldn't happen in ContainerScheduler#serviceInit.


> ResourceHandlerChain#bootstrap is invoked twice during NM start if 
> LinuxContainerExecutor enabled
> -
>
> Key: YARN-9033
> URL: https://issues.apache.org/jira/browse/YARN-9033
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-9033-trunk.001.patch
>
>
> The ResourceHandlerChain#bootstrap will always be invoked in the NM's 
> ContainerScheduler#serviceInit (introduced by YARN-7715).
> So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked 
> first and then invoked again in ContainerScheduler#serviceInit.
> But actually, the "updateContainer" invocation in YARN-7715 depends on the 
> containerId's cgroups path creation in the "preStart" method, which only 
> happens when we use "LinuxContainerExecutor". So the bootstrap of 
> ResourceHandlerChain shouldn't happen in ContainerScheduler#serviceInit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-12-10 Thread Manikandan R (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-6523:
---
Attachment: YARN-6523.011.patch

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8GB RAM configured for the RM.
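A hypothetical sketch of the direction this suggests (illustrative names, not 
the actual patch): only ship credentials for the apps active on the reporting 
node instead of every application in the cluster.
{code:java}
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.hadoop.yarn.api.records.ApplicationId;

public final class CredentialsFilterSketch {
  private CredentialsFilterSketch() {}

  // Illustrative only: filter the RM-wide credentials map down to the
  // apps running on one node before building the heartbeat response.
  public static Map<ApplicationId, ByteBuffer> forNode(
      Map<ApplicationId, ByteBuffer> systemCredentials,
      Set<ApplicationId> appsActiveOnNode) {
    return systemCredentials.entrySet().stream()
        .filter(e -> appsActiveOnNode.contains(e.getKey()))
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
  }
}
{code}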



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2018-12-10 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715170#comment-16715170
 ] 

Manikandan R commented on YARN-6523:


Addressed the checkstyle and javadoc issues except one: I couldn't find a way 
to fix the "Line is longer than 80 characters" warning for field declarations. 
For example, {{private final ConcurrentMap<ApplicationId, ByteBuffer> 
systemCredentials}} exceeds the max length. I am using the Eclipse Neon.3 
release (4.6.3). 

> Newly retrieved security Tokens are sent as part of each heartbeat to each 
> node from RM which is not desirable in large cluster
> ---
>
> Key: YARN-6523
> URL: https://issues.apache.org/jira/browse/YARN-6523
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: RM
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Naganarasimha G R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-6523.001.patch, YARN-6523.002.patch, 
> YARN-6523.003.patch, YARN-6523.004.patch, YARN-6523.005.patch, 
> YARN-6523.006.patch, YARN-6523.007.patch, YARN-6523.008.patch, 
> YARN-6523.009.patch, YARN-6523.010.patch, YARN-6523.011.patch
>
>
> Currently, as part of the heartbeat response, the RM sets all applications' 
> tokens even though not all applications might be active on the node. On top 
> of that, NodeHeartbeatResponsePBImpl converts the tokens for each app into a 
> SystemCredentialsForAppsProto. Hence, for each node and each heartbeat, too 
> many SystemCredentialsForAppsProto objects were getting created.
> We hit an OOM while testing with 2000 concurrent apps on a 500-node cluster 
> with 8GB RAM configured for the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7746) Fix PlacementProcessor to support app priority

2018-12-10 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16715147#comment-16715147
 ] 

Manikandan R commented on YARN-7746:


Ok [~cheersyang], thanks. Can we define a new property and use it rather than 
reusing the "app priority" property? Would that help users differentiate the 
context? Please suggest.

> Fix PlacementProcessor to support app priority
> --
>
> Key: YARN-7746
> URL: https://issues.apache.org/jira/browse/YARN-7746
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-7746.001.patch, YARN-7746.002.patch
>
>
> The Threadpools used in the Processor should be modified to take a priority 
> blocking queue that respects application priority.
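A generic sketch of that change (plain JDK, not the PlacementProcessor code 
itself): a thread pool draining a {{PriorityBlockingQueue}} ordered by 
application priority. Tasks must go in via {{execute()}}, since {{submit()}} 
wraps them in non-comparable {{FutureTask}}s.
{code:java}
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolSketch {

  // A runnable that carries the application priority it was created for.
  static class PrioritizedTask
      implements Runnable, Comparable<PrioritizedTask> {
    final int appPriority;
    final Runnable work;

    PrioritizedTask(int appPriority, Runnable work) {
      this.appPriority = appPriority;
      this.work = work;
    }

    @Override
    public void run() {
      work.run();
    }

    // Higher application priority is dequeued first.
    @Override
    public int compareTo(PrioritizedTask other) {
      return Integer.compare(other.appPriority, this.appPriority);
    }
  }

  public static void main(String[] args) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());
    pool.execute(new PrioritizedTask(1, () -> System.out.println("low")));
    pool.execute(new PrioritizedTask(10, () -> System.out.println("high")));
    pool.shutdown();
  }
}
{code}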



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9033:
---
Description: 
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But in the other hand, the "updateContainer" invocation in YARN-7715 depend on 
containerId's cgroups path creation in "preStart" method which only happens 
when we use "LinuxContainerExecutor". So the bootstrap of ResourceHandlerChain 
shouldn't happen in ContainerScheduler#serviceInit.

  was:
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But in the other hand, the "updateContainer" invocation in YARN-7715 depend on 
containerId's cgroups path creation in "preStart" method which only happens 
when we use "LinuxContainerExecutor". So the bootstrap of ResourceHandlerChain 
shoudn't happen in ContainerScheduler#serviceInit.


> ResourceHandlerChain#bootstrap is invoked twice during NM start if 
> LinuxContainerExecutor enabled
> -
>
> Key: YARN-9033
> URL: https://issues.apache.org/jira/browse/YARN-9033
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> The ResourceHandlerChain#bootstrap will always be invoked in NM's 
> ContainerScheduler#serviceInit (Involved by YARN-7715)
> So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked 
> first and then invoked again in ContainerScheduler#serviceInit.
> But in the other hand, the "updateContainer" invocation in YARN-7715 depend 
> on containerId's cgroups path creation in "preStart" method which only 
> happens when we use "LinuxContainerExecutor". So the bootstrap of 
> ResourceHandlerChain shouldn't happen in ContainerScheduler#serviceInit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9033:
---
Description: 
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But in the other hand, the "updateContainer" invocation in YARN-7715 depend on 
containerId's cgroups path creation in "preStart" method which only happens 
when we use "LinuxContainerExecutor". So the bootstrap of ResourceHandlerChain 
shoudn't happen in ContainerScheduler#serviceInit.

  was:
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But in the other hand, the "updateContainer" invocation in YARN-7715 depend on 
containerId's cgroups path creation in "preStart" method which only happens 
when we use "LinuxContainerExecutor". So the bootstrap of ResourceHandlerChain 
shoudn't happen in 


> ResourceHandlerChain#bootstrap is invoked twice during NM start if 
> LinuxContainerExecutor enabled
> -
>
> Key: YARN-9033
> URL: https://issues.apache.org/jira/browse/YARN-9033
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> The ResourceHandlerChain#bootstrap will always be invoked in NM's 
> ContainerScheduler#serviceInit (Involved by YARN-7715)
> So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked 
> first and then invoked again in ContainerScheduler#serviceInit.
> But in the other hand, the "updateContainer" invocation in YARN-7715 depend 
> on containerId's cgroups path creation in "preStart" method which only 
> happens when we use "LinuxContainerExecutor". So the bootstrap of 
> ResourceHandlerChain shoudn't happen in ContainerScheduler#serviceInit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9033) ResourceHandlerChain#bootstrap is invoked twice during NM start if LinuxContainerExecutor enabled

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9033:
---
Description: 
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.

But in the other hand, the "updateContainer" invocation in YARN-7715 depend on 
containerId's cgroups path creation in "preStart" method which only happens 
when we use "LinuxContainerExecutor". So the bootstrap of ResourceHandlerChain 
shoudn't happen in 

  was:
The ResourceHandlerChain#bootstrap will always be invoked in NM's 
ContainerScheduler#serviceInit (Involved by YARN-7715)

So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked first 
and then invoked again in ContainerScheduler#serviceInit.


> ResourceHandlerChain#bootstrap is invoked twice during NM start if 
> LinuxContainerExecutor enabled
> -
>
> Key: YARN-9033
> URL: https://issues.apache.org/jira/browse/YARN-9033
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> The ResourceHandlerChain#bootstrap will always be invoked in NM's 
> ContainerScheduler#serviceInit (Involved by YARN-7715)
> So if LCE is enabled, the ResourceHandlerChain#bootstrap will be invoked 
> first and then invoked again in ContainerScheduler#serviceInit.
> But in the other hand, the "updateContainer" invocation in YARN-7715 depend 
> on containerId's cgroups path creation in "preStart" method which only 
> happens when we use "LinuxContainerExecutor". So the bootstrap of 
> ResourceHandlerChain shoudn't happen in 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9060) [YARN-8851] Phase 1 - Support device isolation in native container-executor

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9060:
---
Attachment: YARN-9060-trunk.005.patch

> [YARN-8851] Phase 1 - Support device isolation in native container-executor
> ---
>
> Key: YARN-9060
> URL: https://issues.apache.org/jira/browse/YARN-9060
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-9060-trunk.001.patch, YARN-9060-trunk.002.patch, 
> YARN-9060-trunk.003.patch, YARN-9060-trunk.004.patch, 
> YARN-9060-trunk.005.patch
>
>
> Due to the cgroups v1 implementation policy in the Linux kernel, we cannot 
> update the values of the devices cgroup controller unless we have root 
> permission 
> ([here|https://github.com/torvalds/linux/blob/6f0d349d922ba44e4348a17a78ea51b7135965b1/security/device_cgroup.c#L604]).
>  So we need to support this in container-executor for the Java layer to 
> invoke.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714921#comment-16714921
 ] 

Hadoop QA commented on YARN-9008:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  4s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9008 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951209/YARN-9008-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 225b07fc8912 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17a8708 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22823/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22823/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/22823/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 651 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 

[jira] [Commented] (YARN-9037) [CSI] Ignore volume resource in resource calculators based on tags

2018-12-10 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714914#comment-16714914
 ] 

Weiwei Yang commented on YARN-9037:
---

Hi [~sunilg]

The latest patch looks good to me; I'll do some testing on a cluster to 
verify. Thanks a lot.

> [CSI] Ignore volume resource in resource calculators based on tags
> --
>
> Key: YARN-9037
> URL: https://issues.apache.org/jira/browse/YARN-9037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Sunil Govindan
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9037-002.patch, YARN-9037.001.patch, 
> YARN-9037.003.patch
>
>
> The pre-provisioned volume is specified as a resource, but such a resource 
> is different from what is managed now in YARN, e.g. memory and vcores. These 
> volumes are constrained by 3rd-party storage systems, so they look more like 
> unmanaged resources. In such cases, we need to ignore them in the resource 
> calculators. This can be done based on the resource tags.
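A hypothetical sketch of the idea (it assumes a tag accessor like 
{{ResourceInformation#getTags}} and an agreed tag name; not the actual 
calculator change):
{code:java}
import java.util.Set;

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

public final class TagFilterSketch {
  private TagFilterSketch() {}

  // Hypothetical tag marking a 3rd-party-managed resource such as a
  // pre-provisioned CSI volume.
  static final String UNMANAGED_TAG = "system:csi-volume";

  // Illustrative only: a calculator iterating the resource vector would
  // skip types carrying the unmanaged tag.
  public static boolean countsTowardsAllocation(ResourceInformation info) {
    Set<String> tags = info.getTags(); // assumed accessor
    return tags == null || !tags.contains(UNMANAGED_TAG);
  }

  public static long managedTypeCount(Resource resource) {
    long counted = 0;
    for (ResourceInformation info : resource.getResources()) {
      if (countsTowardsAllocation(info)) {
        counted++;
      }
    }
    return counted;
  }
}
{code}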



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-10 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714809#comment-16714809
 ] 

Peter Bacsko commented on YARN-9008:


I uploaded patch v5 in the meantime.

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch, YARN-9008-004.patch, YARN-9008-005.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you 
> define files on the command line that you wish to be localized remotely. 
> This can be extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-10 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9008:
---
Attachment: YARN-9008-005.patch

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch, YARN-9008-004.patch, YARN-9008-005.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you 
> define files on the command line that you wish to be localized remotely. 
> This can be extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-10 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714776#comment-16714776
 ] 

Peter Bacsko commented on YARN-9008:


{quote}We are missing one unit test for uploading a non-existent file and one 
for a directory{quote}
Added the tests.

{quote}The new commandline option 'appname' should probably be renamed to 
'app_name' for the sake of consistency with other options{quote}
I took that from {{Client.java}}, where it's also called {{appname}}. Shall I 
rename it anyway?

{quote}All IOExceptions are wrapped in a RuntimeException. But I am not sure 
what benefits it provides over just directly throwing IOException{quote}

Unfortunately that piece of code is located in a {{forEach}} lambda and a 
{{run()}} method, which cannot be declared to throw {{IOException}}. That's 
why I had to wrap it.

{quote}Can we centralize them in one place?{quote}
I added a public static method to {{ApplicationMaster}}.

{quote}What do you think of renaming it to 'lib'?{quote}
I think "--lib" would imply that we deal with jar files. Since it's a somewhat 
generic YARN application, I think it's OK to keep this switch.
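For context, a condensed sketch of why the wrap is needed (illustrative, not 
the patch itself): {{java.util.function.Consumer#accept}} cannot throw checked 
exceptions, so the {{IOException}} has to be wrapped inside the lambda.
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.List;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class LocalizeSketch {
  private LocalizeSketch() {}

  public static void upload(FileSystem fs, List<String> files, Path dst) {
    files.forEach(file -> {
      try {
        fs.copyFromLocalFile(new Path(file), dst);
      } catch (IOException e) {
        // Consumer#accept cannot declare IOException, hence the wrap;
        // UncheckedIOException keeps the I/O nature of the failure visible.
        throw new UncheckedIOException("Cannot upload " + file, e);
      }
    });
  }
}
{code}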

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch, YARN-9008-004.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you 
> define files on the command line that you wish to be localized remotely. 
> This can be extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9008) Extend YARN distributed shell with file localization feature

2018-12-10 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9008:
---
Affects Version/s: (was: 2.9.1)

> Extend YARN distributed shell with file localization feature
> 
>
> Key: YARN-9008
> URL: https://issues.apache.org/jira/browse/YARN-9008
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9008-001.patch, YARN-9008-002.patch, 
> YARN-9008-003.patch, YARN-9008-004.patch
>
>
> YARN distributed shell is a very handy tool to test various features of YARN.
> However, it lacks support for file localization - that is, letting you 
> define files on the command line that you wish to be localized remotely. 
> This can be extremely useful in certain scenarios.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9060) [YARN-8851] Phase 1 - Support device isolation in native container-executor

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714729#comment-16714729
 ] 

Hadoop QA commented on YARN-9060:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
50m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  8m 35s{color} | 
{color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}142m  8s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
29s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951169/YARN-9060-trunk.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b2f92d2e4b57 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17a8708 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/22821/artifact/out/diff-compile-cc-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/22821/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22821/testReport/ |
| Max. process+thread count | 899 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22821/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-8851] Phase 1 - Support device isolation in 

[jira] [Commented] (YARN-8617) Aggregated Application Logs accumulates for long running jobs

2018-12-10 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714696#comment-16714696
 ] 

Bibin A Chundatt commented on YARN-8617:


[~Prabhu Joseph]

Looked into the issue again. YARN-2583 contains two parts:

# Limit the number of files per node:
{code}
public static final String NM_LOG_AGGREGATION_NUM_LOG_FILES_SIZE_PER_APP
    = NM_PREFIX + "log-aggregation.num-log-files-per-app";
{code}
# Delete files older than the expiry time:
{code}
if (appDir.isDirectory() &&
    appDir.getModificationTime() < cutoffMillis) {
{code}

{quote}
The AggregatedLogDeletionService does deletion for a running job based upon the 
file modification time, which will always be the latest, as the rolled logs are 
getting updated into the node1 file regularly
{quote}

For a long-running service, the *application folder* (e.g. 
user/logs/application_1234) modification time gets updated on every upload 
cycle.
This could cause a node file to remain in HDFS if no new containers are 
allocated to the same node.
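
To make the per-node check concrete, here is a minimal sketch, assuming the 
per-node log files are plain files directly under the application folder; it 
inspects each node file's own modification time instead of only the directory's. 
This is illustrative only, not the actual AggregatedLogDeletionService code.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerNodeFileExpirySketch {
  // Ages out individual node log files whose own modification time passed
  // the cutoff, so files for nodes that no longer receive containers can be
  // deleted even while the application folder keeps being touched.
  static void deleteExpiredNodeFiles(FileSystem fs, Path appDir,
      long cutoffMillis) throws IOException {
    for (FileStatus nodeFile : fs.listStatus(appDir)) {
      if (!nodeFile.isDirectory()
          && nodeFile.getModificationTime() < cutoffMillis) {
        fs.delete(nodeFile.getPath(), false);
      }
    }
  }
}
{code}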



> Aggregated Application Logs accumulates for long running jobs
> -
>
> Key: YARN-8617
> URL: https://issues.apache.org/jira/browse/YARN-8617
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: log-aggregation
>Affects Versions: 2.7.4
>Reporter: Prabhu Joseph
>Priority: Major
>
> Currently the AggregatedLogDeletionService deletes older aggregated log files 
> only once they are complete. This causes logs to accumulate for long-running 
> jobs like LLAP or Spark Streaming.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9037) [CSI] Ignore volume resource in resource calculators based on tags

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714652#comment-16714652
 ] 

Hadoop QA commented on YARN-9037:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 98 unchanged - 0 fixed = 101 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
48s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m  1s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951166/YARN-9037.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 0c0eae9c8869 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 

[jira] [Updated] (YARN-9105) [UI2] Application master container resource information should be displayed in UI2

2018-12-10 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9105:
---
Component/s: yarn-ui-v2

> [UI2] Application master container resource information should be displayed 
> in UI2
> --
>
> Key: YARN-9105
> URL: https://issues.apache.org/jira/browse/YARN-9105
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9105) [UI2] Application master container resource information should be displayed in UI2

2018-12-10 Thread Akhil PB (JIRA)
Akhil PB created YARN-9105:
--

 Summary: [UI2] Application master container resource information 
should be displayed in UI2
 Key: YARN-9105
 URL: https://issues.apache.org/jira/browse/YARN-9105
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Akhil PB
Assignee: Akhil PB






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9099) GpuResourceAllocator.getReleasingGpus calculates number of GPUs in a wrong way

2018-12-10 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714573#comment-16714573
 ] 

Hadoop QA commented on YARN-9099:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 43s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | YARN-9099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951132/YARN-9099.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9410d06631e4 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 17a8708 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22822/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 

[jira] [Resolved] (YARN-9104) Fix the bug in DeviceMappingManager#getReleasingDevices

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang resolved YARN-9104.

Resolution: Duplicate

Resolving this as a duplicate; JIRA duplicated the issue creation.

> Fix the bug in DeviceMappingManager#getReleasingDevices
> ---
>
> Key: YARN-9104
> URL: https://issues.apache.org/jira/browse/YARN-9104
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
>
> When one container is assigned multiple devices and is in the releasing 
> state, looping over the same containerId adds the container's device count 
> to the releasing sum multiple times. This is the same bug as the one 
> described in YARN-9099.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9103) Fix the bug in DeviceMappingManager#getReleasingDevices

2018-12-10 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-9103:
--

 Summary: Fix the bug in DeviceMappingManager#getReleasingDevices
 Key: YARN-9103
 URL: https://issues.apache.org/jira/browse/YARN-9103
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhankun Tang
Assignee: Zhankun Tang


When one container is assigned multiple devices and is in the releasing state, 
looping over the same containerId adds the container's device count to the 
releasing sum multiple times. This is the same bug as the one described in 
YARN-9099.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9104) Fix the bug in DeviceMappingManager#getReleasingDevices

2018-12-10 Thread Zhankun Tang (JIRA)
Zhankun Tang created YARN-9104:
--

 Summary: Fix the bug in DeviceMappingManager#getReleasingDevices
 Key: YARN-9104
 URL: https://issues.apache.org/jira/browse/YARN-9104
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhankun Tang
Assignee: Zhankun Tang


When one container is assigned multiple devices and is in the releasing state, 
looping over the same containerId adds the container's device count to the 
releasing sum multiple times. This is the same bug as the one described in 
YARN-9099.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9102) Log Aggregation is failing with S3A FileSystem for IFile Format

2018-12-10 Thread VAMSHI KRISHNA (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

VAMSHI KRISHNA updated YARN-9102:
-
Summary: Log Aggregation is failing with S3A FileSystem for IFile Format  
(was: Log Aggregation is failing with S3A/OBS FileSystem for IFile Format)

> Log Aggregation is failing with S3A FileSystem for IFile Format
> ---
>
> Key: YARN-9102
> URL: https://issues.apache.org/jira/browse/YARN-9102
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation, nodemanager, resourcemanager, yarn
>Affects Versions: 3.1.1
>Reporter: VAMSHI KRISHNA
>Priority: Major
>
> Log aggregation for applications fails in Hadoop when the indexed (IFile) 
> log format is configured with S3A as the filesystem. The NodeManager logs 
> show a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9102) Log Aggregation is failing with S3A/OBS FileSystem for IFile Format

2018-12-10 Thread VAMSHI KRISHNA (JIRA)
VAMSHI KRISHNA created YARN-9102:


 Summary: Log Aggregation is failing with S3A/OBS FileSystem for 
IFile Format
 Key: YARN-9102
 URL: https://issues.apache.org/jira/browse/YARN-9102
 Project: Hadoop YARN
  Issue Type: Bug
  Components: log-aggregation, nodemanager, resourcemanager, yarn
Affects Versions: 3.1.1
Reporter: VAMSHI KRISHNA


Log aggregation for applications fails in Hadoop when the indexed (IFile) log 
format is configured with S3A as the filesystem. The NodeManager logs show a 
FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9099) GpuResourceAllocator.getReleasingGpus calculates number of GPUs in a wrong way

2018-12-10 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714513#comment-16714513
 ] 

Zhankun Tang commented on YARN-9099:


[~snemeth], thanks for catching this! The patch looks good to me, though a 
test case would make it even better.

> GpuResourceAllocator.getReleasingGpus calculates number of GPUs in a wrong way
> --
>
> Key: YARN-9099
> URL: https://issues.apache.org/jira/browse/YARN-9099
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-9099.001.patch
>
>
> getReleasingGpus plays an important role in the calculation that happens 
> when GpuAllocator assigns GPUs to a container, see: 
> GpuResourceAllocator#internalAssignGpus.
> If multiple GPUs are assigned to the same container, getReleasingGpus will 
> return an invalid number.
> The iterator goes over the mappings of (GPU device, container ID) and 
> retrieves the container by its ID once for every device that the container 
> ID is mapped to.
> Then, for every retrieved container, the resource value of the GPU resource 
> is added to a running sum.
> Consequently, if a container is mapped to 2 or more devices, the container's 
> GPU resource counter is added to the running sum as many times as the number 
> of GPU devices the container has.
> Example: 
> Let's suppose {{usedDevices}} contains these mappings: 
> - (GPU1, container1)
> - (GPU2, container1)
> - (GPU3, container2)
> GPU resource value is 2 for container1 and 
> GPU resource value is 1 for container2.
> Then, if container1 is in a running state, getReleasingGpus will return 4 
> instead of 2.
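
A minimal sketch of the deduplication fix the description implies, using plain 
strings and hypothetical lookup maps rather than the actual GpuResourceAllocator 
fields: collecting container IDs into a set first ensures each releasing 
container's GPU value is summed exactly once (2 instead of 4 in the example 
above).

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ReleasingGpusSketch {
  // usedDevices holds one entry per (GPU device, container ID) assignment;
  // gpuValuePerContainer holds each container's GPU resource value. Both
  // maps are hypothetical stand-ins for the allocator's internal state.
  static long getReleasingGpus(Map<String, String> usedDevices,
      Map<String, Long> gpuValuePerContainer,
      Set<String> releasingContainers) {
    long releasing = 0;
    Set<String> seen = new HashSet<>();
    for (String containerId : usedDevices.values()) {
      // seen.add returns false for containers already counted, so a
      // container mapped to several devices contributes only once.
      if (seen.add(containerId)
          && releasingContainers.contains(containerId)) {
        releasing += gpuValuePerContainer.get(containerId);
      }
    }
    return releasing;
  }
}
{code}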



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9060) [YARN-8851] Phase 1 - Support device isolation in native container-executor

2018-12-10 Thread Zhankun Tang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated YARN-9060:
---
Attachment: YARN-9060-trunk.004.patch

> [YARN-8851] Phase 1 - Support device isolation in native container-executor
> ---
>
> Key: YARN-9060
> URL: https://issues.apache.org/jira/browse/YARN-9060
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>Priority: Major
> Attachments: YARN-9060-trunk.001.patch, YARN-9060-trunk.002.patch, 
> YARN-9060-trunk.003.patch, YARN-9060-trunk.004.patch
>
>
> Due to the cgroups v1 implementation policy in the Linux kernel, we cannot 
> update the values of the devices cgroup controller unless we have root 
> permission 
> ([here|https://github.com/torvalds/linux/blob/6f0d349d922ba44e4348a17a78ea51b7135965b1/security/device_cgroup.c#L604]).
>  So we need to support this in container-executor for the Java layer to 
> invoke.
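
For illustration only, the privileged operation the description refers to 
amounts to writing device access rules into the container's devices cgroup. A 
minimal Java rendering follows; the real implementation lives in the native, 
root-owned container-executor written in C, and the cgroup mount point and 
NVIDIA major number 195 are assumptions here.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class DeviceCgroupSketch {
  // Denies a container access to a character device by writing an entry
  // like "c 195:0 rwm" (type major:minor access-flags) into the cgroup's
  // devices.deny file. The kernel rejects this write for non-root callers,
  // which is why container-executor has to perform it.
  static void denyDevice(String containerCgroup, int major, int minor)
      throws IOException {
    String entry = String.format("c %d:%d rwm", major, minor);
    Files.write(
        Paths.get("/sys/fs/cgroup/devices", containerCgroup, "devices.deny"),
        entry.getBytes(StandardCharsets.UTF_8));
  }
}
{code}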



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9037) [CSI] Ignore volume resource in resource calculators based on tags

2018-12-10 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714449#comment-16714449
 ] 

Sunil Govindan commented on YARN-9037:
--

Ideally we let resource creation include any resource types added in the 
cluster, hence the getResourceTypesArray usage is fine.

We only adjust the ops methods, where we ensure that resource types tagged 
"system:csi-volume" are not treated as countable resources.

Added one test case to verify the change at a basic level.
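
A minimal sketch of the tag check described above, under the assumption that a 
resource type's tags are exposed via ResourceInformation#getTags; exactly where 
the calculators apply the filter is defined by the patch, not here.

{code}
import java.util.Set;

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;

public class CountableResourceSketch {
  private static final String CSI_VOLUME_TAG = "system:csi-volume";

  // A resource type tagged as a CSI volume is tracked by an external
  // storage system, so the calculators should skip it instead of folding
  // it into cluster capacity math.
  static boolean isCountable(ResourceInformation info) {
    Set<String> tags = info.getTags();
    return tags == null || !tags.contains(CSI_VOLUME_TAG);
  }

  // Example: total only the countable resource values of a Resource.
  static long countableTotal(Resource res) {
    long total = 0;
    for (ResourceInformation info : res.getResources()) {
      if (isCountable(info)) {
        total += info.getValue();
      }
    }
    return total;
  }
}
{code}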

> [CSI] Ignore volume resource in resource calculators based on tags
> --
>
> Key: YARN-9037
> URL: https://issues.apache.org/jira/browse/YARN-9037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Sunil Govindan
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9037-002.patch, YARN-9037.001.patch, 
> YARN-9037.003.patch
>
>
> The pre-provisioned volume is specified as a resource, but such a resource 
> differs from what YARN manages today, e.g. memory and vcores. Volumes are 
> constrained by 3rd-party storage systems, so they look more like an 
> unmanaged resource. In that case, we need to ignore them in the resource 
> calculators' calculations. This can be done based on the resource tags.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9037) [CSI] Ignore volume resource in resource calculators based on tags

2018-12-10 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714450#comment-16714450
 ] 

Sunil Govindan commented on YARN-9037:
--

[~cheersyang] could you please check?

> [CSI] Ignore volume resource in resource calculators based on tags
> --
>
> Key: YARN-9037
> URL: https://issues.apache.org/jira/browse/YARN-9037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Sunil Govindan
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9037-002.patch, YARN-9037.001.patch, 
> YARN-9037.003.patch
>
>
> The pre-provisioned volume is specified as a resource, but such a resource 
> differs from what YARN manages today, e.g. memory and vcores. Volumes are 
> constrained by 3rd-party storage systems, so they look more like an 
> unmanaged resource. In that case, we need to ignore them in the resource 
> calculators' calculations. This can be done based on the resource tags.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9037) [CSI] Ignore volume resource in resource calculators based on tags

2018-12-10 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-9037:
-
Attachment: YARN-9037.003.patch

> [CSI] Ignore volume resource in resource calculators based on tags
> --
>
> Key: YARN-9037
> URL: https://issues.apache.org/jira/browse/YARN-9037
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Sunil Govindan
>Priority: Major
>  Labels: CSI
> Attachments: YARN-9037-002.patch, YARN-9037.001.patch, 
> YARN-9037.003.patch
>
>
> The pre-provisioned volume is specified as a resource, but such a resource 
> differs from what YARN manages today, e.g. memory and vcores. Volumes are 
> constrained by 3rd-party storage systems, so they look more like an 
> unmanaged resource. In that case, we need to ignore them in the resource 
> calculators' calculations. This can be done based on the resource tags.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7715) Support NM promotion/demotion of running containers.

2018-12-10 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16714443#comment-16714443
 ] 

Zhankun Tang commented on YARN-7715:


[~miklos.szeg...@cloudera.com], [~asuresh],
Does this JIRA depend on YARN-5085? Why was YARN-5085 merged into branches 
2.9.0 and 3.0.0, while this JIRA was only merged into branch 3.2.0?

> Support NM promotion/demotion of running containers.
> 
>
> Key: YARN-7715
> URL: https://issues.apache.org/jira/browse/YARN-7715
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7715.000.patch, YARN-7715.001.patch, 
> YARN-7715.002.patch, YARN-7715.003.patch, YARN-7715.004.patch
>
>
> In YARN-6673 and YARN-6674, the cgroups resource handlers update the cgroups 
> params for the containers, based on whether they are opportunistic or 
> guaranteed, in the *preStart* method.
> Now that YARN-5085 is in, the container executionType (as well as the cpu, 
> memory and any other resources) can be updated after the container has 
> started. This means we need the ability to change cgroups params after 
> container start.
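
As an illustration of what changing cgroups params after start means at the 
filesystem level, here is a minimal sketch assuming a cgroups v1 hierarchy 
mounted at /sys/fs/cgroup; the actual change goes through the NM's resource 
handler interfaces rather than raw file writes.

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CgroupUpdateSketch {
  // Rewrites the CPU bandwidth cap of a running container's cgroup.
  // Promotion to GUARANTEED could write -1 (no quota); demotion back to
  // OPPORTUNISTIC would write a positive microsecond quota instead. The
  // running processes pick up the new limit without a restart.
  static void setCpuQuota(String containerCgroup, long quotaUs)
      throws IOException {
    Files.write(
        Paths.get("/sys/fs/cgroup/cpu", containerCgroup, "cpu.cfs_quota_us"),
        Long.toString(quotaUs).getBytes(StandardCharsets.UTF_8));
  }
}
{code}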



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org