[jira] [Updated] (YARN-7750) Render time in the user's timezone

2018-01-16 Thread Vasudevan Skm (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasudevan Skm updated YARN-7750:

Attachment: YARN-7750.001.patch

> Render time in the user's timezone
> -
>
> Key: YARN-7750
> URL: https://issues.apache.org/jira/browse/YARN-7750
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
> Environment: Render time in the user's timezone / a predefined TZ
>  
>Reporter: Vasudevan Skm
>Assignee: Vasudevan Skm
>Priority: Major
> Attachments: YARN-7750.001.patch
>
>
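The fix itself lands in the Ember-based yarn-ui-v2 code, but the behavior requested — resolve the user's timezone, fall back to a predefined TZ, then format — can be sketched with java.time. Everything below is illustrative, not code from the patch:

{code:java}
import java.time.DateTimeException;
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

class TimezoneRenderSketch {
  /**
   * Renders an epoch-millis timestamp in the user's zone, falling back to a
   * predefined zone when the user's zone is missing or invalid.
   */
  static String render(long epochMillis, String userZone, String defaultZone) {
    ZoneId zone;
    try {
      zone = ZoneId.of(userZone != null ? userZone : defaultZone);
    } catch (DateTimeException e) {   // unknown or malformed zone id
      zone = ZoneId.of(defaultZone);
    }
    return Instant.ofEpochMilli(epochMillis)
        .atZone(zone)
        .format(DateTimeFormatter.ofPattern("EEE dd MMM yyyy HH:mm:ss zzz"));
  }

  public static void main(String[] args) {
    // The same instant rendered in Asia/Kolkata and in the UTC fallback.
    System.out.println(render(1516168080000L, "Asia/Kolkata", "UTC"));
    System.out.println(render(1516168080000L, null, "UTC"));
  }
}
{code}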




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328410#comment-16328410
 ] 

genericqa commented on YARN-6856:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}120m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
58s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
50s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
58s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 24 new + 26 unchanged 
- 0 fixed = 50 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 46s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
16s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
43s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}354m 14s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328401#comment-16328401
 ] 

genericqa commented on YARN-7757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3409 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 304 unchanged - 16 fixed = 304 total (was 320) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
41s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
37s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906344/YARN-7757-YARN-3409.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 

[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328403#comment-16328403
 ] 

genericqa commented on YARN-3841:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  2m 
57s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3841 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906356/YARN-3841-YARN-7055.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19279/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, 
> YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable.
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the
> writes temporarily (e.g. HDFS), and flush them once the storage comes back
> online. Reading and writing to HDFS as the backup storage could potentially
> use the HDFS writer plugin unless the complexity of generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}
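For context on what retry semantics means for a backing storage writer, here is a minimal sketch of a retrying wrapper with exponential backoff; the interface and class names are hypothetical stand-ins, not the classes in the attached patches:

{code:java}
import java.io.IOException;

/** Hypothetical writer interface; the real plugin types live in the patch. */
interface TimelineStorageWriter {
  void write(byte[] entity) throws IOException;
}

/** Retries transient write failures with exponential backoff. */
class RetryingWriter implements TimelineStorageWriter {
  private final TimelineStorageWriter delegate;  // e.g. an HDFS-backed writer
  private final int maxRetries;
  private final long baseBackoffMs;

  RetryingWriter(TimelineStorageWriter delegate, int maxRetries, long baseBackoffMs) {
    this.delegate = delegate;
    this.maxRetries = maxRetries;
    this.baseBackoffMs = baseBackoffMs;
  }

  @Override
  public void write(byte[] entity) throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        delegate.write(entity);
        return;                                    // success
      } catch (IOException e) {
        last = e;                                  // assume transient; back off
        try {
          Thread.sleep(baseBackoffMs << attempt);  // 1x, 2x, 4x, ...
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("interrupted while retrying", ie);
        }
      }
    }
    throw last;                                    // retries exhausted
  }
}
{code}

Exponential backoff is just one reasonable choice here; the buffering-and-flush behavior the design doc describes would sit on top of a wrapper like this.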






[jira] [Updated] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-16 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3841:

Attachment: YARN-3841-YARN-7055.002.patch

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, 
> YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable.
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the
> writes temporarily (e.g. HDFS), and flush them once the storage comes back
> online. Reading and writing to HDFS as the backup storage could potentially
> use the HDFS writer plugin unless the complexity of generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}






[jira] [Reopened] (YARN-7761) [UI2]Clicking 'master container log' or 'Link' next to 'log' under application's appAttempt goes to Old UI's Log link

2018-01-16 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reopened YARN-7761:
---

> [UI2]Clicking 'master container log' or 'Link' next to 'log' under 
> application's appAttempt goes to Old UI's Log link
> -
>
> Key: YARN-7761
> URL: https://issues.apache.org/jira/browse/YARN-7761
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Vasudevan Skm
>Priority: Major
>
> Clicking 'master container log' or 'Link' next to 'Log' under application's 
> appAttempt goes to Old UI's Log link






[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-16 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328379#comment-16328379
 ] 

Abhishek Modi commented on YARN-3841:
-

Build failed because Docker failed to build the image.

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable.
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the
> writes temporarily (e.g. HDFS), and flush them once the storage comes back
> online. Reading and writing to HDFS as the backup storage could potentially
> use the HDFS writer plugin unless the complexity of generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}






[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328373#comment-16328373
 ] 

genericqa commented on YARN-6619:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 68m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
35s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  5s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 21 new + 202 unchanged - 11 fixed = 223 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m  
6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 47s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}246m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906320/YARN-6619-YARN-6592.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 933eb615219c 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / f476b83 |
| maven | 

[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328370#comment-16328370
 ] 

genericqa commented on YARN-3841:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  6m 
34s{color} | {color:red} Docker failed to build yetus/hadoop:0de40f0. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-3841 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906351/YARN-3841-YARN-7055.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19278/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable.
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the
> writes temporarily (e.g. HDFS), and flush them once the storage comes back
> online. Reading and writing to HDFS as the backup storage could potentially
> use the HDFS writer plugin unless the complexity of generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}






[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-16 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328362#comment-16328362
 ] 

Abhishek Modi commented on YARN-3841:
-

Attached a new patch on top of YARN-7055 for this.

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable.
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the
> writes temporarily (e.g. HDFS), and flush them once the storage comes back
> online. Reading and writing to HDFS as the backup storage could potentially
> use the HDFS writer plugin unless the complexity of generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}






[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328359#comment-16328359
 ] 

Arun Suresh commented on YARN-6619:
---

As I mentioned earlier, the test case failure is unrelated.

> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track the changes needed in the AMRMClient to incorporate
> the PlacementConstraint and SchedulingRequest objects.
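For a shape-level idea of the client-side change — buffer SchedulingRequest objects and ship them with the next allocate heartbeat — here is a sketch in which every type and method name is a hypothetical stand-in, not the actual Hadoop API:

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

/** Stand-in for the real SchedulingRequest record in hadoop-yarn-api. */
final class SchedulingRequestStub {
  final int numAllocations;
  final String placementConstraint;   // e.g. an anti-affinity expression

  SchedulingRequestStub(int numAllocations, String placementConstraint) {
    this.numAllocations = numAllocations;
    this.placementConstraint = placementConstraint;
  }
}

/** The client buffers requests and drains them into the next heartbeat. */
class AMRMClientStub {
  private final List<SchedulingRequestStub> pending = new ArrayList<>();

  synchronized void addSchedulingRequests(Collection<SchedulingRequestStub> reqs) {
    pending.addAll(reqs);
  }

  synchronized List<SchedulingRequestStub> drainForHeartbeat() {
    List<SchedulingRequestStub> out = new ArrayList<>(pending);
    pending.clear();                  // sent once, then cleared
    return out;
  }
}
{code}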






[jira] [Updated] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-01-16 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3841:

Attachment: YARN-3841-YARN-7055.002.patch

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable.
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the
> writes temporarily (e.g. HDFS), and flush them once the storage comes back
> online. Reading and writing to HDFS as the backup storage could potentially
> use the HDFS writer plugin unless the complexity of generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}






[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328351#comment-16328351
 ] 

genericqa commented on YARN-6619:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 75m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
45s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 21 new + 202 unchanged - 11 fixed = 223 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
53s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 30s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}239m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906318/YARN-6619-YARN-6592.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2f7ce7cdee35 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / f476b83 |
| maven | version: 

[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328331#comment-16328331
 ] 

genericqa commented on YARN-6619:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 71m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 52m 
34s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
24s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 18 new + 202 unchanged - 11 fixed = 220 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 14s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}270m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
|   | hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906145/YARN-6619-YARN-6592.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4f32d6b7971f 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328322#comment-16328322
 ] 

Abhishek Modi commented on YARN-3879:
-

Thanks [~varun_saxena] [~vrushalic].

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841






[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328321#comment-16328321
 ] 

Varun Saxena commented on YARN-3879:


cc [~ozawa], hope you are fine with Abhishek picking this up.

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841






[jira] [Resolved] (YARN-7761) [UI2]Clicking 'master container log' or 'Link' next to 'log' under application's appAttempt goes to Old UI's Log link

2018-01-16 Thread Vasudevan Skm (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasudevan Skm resolved YARN-7761.
-
Resolution: Duplicate

Duplicate of YARN-7760.

> [UI2]Clicking 'master container log' or 'Link' next to 'log' under 
> application's appAttempt goes to Old UI's Log link
> -
>
> Key: YARN-7761
> URL: https://issues.apache.org/jira/browse/YARN-7761
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Vasudevan Skm
>Priority: Major
>
> Clicking 'master container log' or 'Link' next to 'Log' under application's 
> appAttempt goes to Old UI's Log link






[jira] [Created] (YARN-7764) Findbugs warning: Resource#getResources may expose internal representation

2018-01-16 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7764:
-

 Summary: Findbugs warning: Resource#getResources may expose 
internal representation
 Key: YARN-7764
 URL: https://issues.apache.org/jira/browse/YARN-7764
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api
Affects Versions: 3.0.0, 3.1.0
Reporter: Weiwei Yang


Hadoop qbt report:

{noformat}
FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234]
{noformat}

Introduced by YARN-7136.
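The warning is the standard Findbugs EI_EXPOSE_REP pattern: a getter returning a mutable internal field. A generic illustration of the pattern and the usual defensive-copy remedy (not the actual Resource.java code):

{code:java}
import java.util.Arrays;

class ExposeRepExample {
  private long[] resources = { 1024L, 4L };   // stand-in for internal state

  // What Findbugs flags: callers can mutate our internal array.
  long[] getResourcesExposed() {
    return resources;
  }

  // Usual remedy: return a defensive copy (or an unmodifiable view, for
  // collection-typed fields).
  long[] getResourcesCopy() {
    return Arrays.copyOf(resources, resources.length);
  }
}
{code}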






[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328313#comment-16328313
 ] 

Vrushali C commented on YARN-3879:
--

Thanks, [~varun_saxena]!

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841






[jira] [Commented] (YARN-6736) Consider writing to both ats v1 & v2 from RM for smoother upgrades

2018-01-16 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328312#comment-16328312
 ] 

Vrushali C commented on YARN-6736:
--

Thanks [~agresch] for the patch and [~rohithsharma] for the reviews and commit! 

I had a few basic questions:

- In ApplicationMaster#init, the patch removes the setting of timelineServiceV1Enabled at line 638 but uses it at line 709. Since it is never set, this will not invoke publishApplicationAttemptEvent, I think? Similarly for timelineServiceV2Enabled at line 701.

- The Float f = Float.parseFloat(s); call is likely to throw a NumberFormatException if one of the config values is misconfigured. It is perhaps a good idea to catch this, ignore the value, and proceed (see the sketch after this list).

- Not very important, but I didn't get why the conf settings for the timeline server address and port had to be moved out of the synchronized block in MiniYARNCluster in this patch?
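A minimal sketch of the defensive parsing suggested in the second point, with illustrative names:

{code:java}
class ConfParseSketch {
  // Returns the parsed value, or the default when the config value is
  // missing or malformed, instead of letting NumberFormatException escape.
  static float parseFloatOrDefault(String s, float dflt) {
    if (s == null) {
      return dflt;
    }
    try {
      return Float.parseFloat(s.trim());
    } catch (NumberFormatException e) {
      return dflt;   // misconfiguration: ignore and proceed
    }
  }
}
{code}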

> Consider writing to both ats v1 & v2 from RM for smoother upgrades
> --
>
> Key: YARN-6736
> URL: https://issues.apache.org/jira/browse/YARN-6736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Aaron Gresch
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1, yarn-7055
>
> Attachments: YARN-6736-YARN-5355.001.patch, 
> YARN-6736-YARN-5355.002.patch, YARN-6736-YARN-5355.003.patch, 
> YARN-6736-YARN-5355.004.patch, YARN-6736-YARN-5355.005.patch, 
> YARN-6736.001.patch, YARN-6736.002.patch
>
>
> When the cluster is being upgraded from ATSv1 to v2, it may be good to have a
> brief time period during which the RM writes to both ATSv1 and v2. This will
> help frameworks like Tez migrate more smoothly.
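A rough sketch of the dual-write idea: fan each event out to both ATSv1 and v2, best-effort on each, for the duration of the migration window. The interface and class names are illustrative, not the RM's actual publisher classes:

{code:java}
/** Illustrative publisher interface. */
interface TimelinePublisher {
  void publish(Object event);
}

/** Fans each event out to v1 and v2; a failure in one must not block the other. */
class DualTimelinePublisher implements TimelinePublisher {
  private final TimelinePublisher v1;
  private final TimelinePublisher v2;

  DualTimelinePublisher(TimelinePublisher v1, TimelinePublisher v2) {
    this.v1 = v1;
    this.v2 = v2;
  }

  @Override
  public void publish(Object event) {
    try {
      v1.publish(event);
    } catch (RuntimeException e) {
      // log and continue: a v1 outage should not stop v2 writes
    }
    try {
      v2.publish(event);
    } catch (RuntimeException e) {
      // log and continue
    }
  }
}
{code}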






[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328311#comment-16328311
 ] 

Varun Saxena commented on YARN-3879:


[~abmodi], I added you to the list of contributors and assigned the JIRA to you.

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841






[jira] [Assigned] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-3879:
--

Assignee: Abhishek Modi

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841






[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328306#comment-16328306
 ] 

Weiwei Yang commented on YARN-7757:
---

Fixed the checkstyle warnings. [~Naganarasimha], [~sunilg], could you please help review the patch? Thanks.

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and {{AbstractNodeLabelsProvider}}
> to be more generic, so that node attribute providers can reuse these
> interfaces/abstract classes.
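A hedged sketch of the kind of generalization proposed: parameterize the provider hierarchy over the descriptor type so that labels and attributes share the refresh plumbing. All names here are illustrative; the real hierarchy is in the attached PDF and patches:

{code:java}
import java.util.Collections;
import java.util.Set;

/** Generic over the node descriptor type (labels, attributes, ...). */
interface NodeDescriptorsProvider<T> {
  Set<T> getDescriptors();
}

/** Shared state and refresh hook live once in an abstract base. */
abstract class AbstractNodeDescriptorsProvider<T>
    implements NodeDescriptorsProvider<T> {
  private volatile Set<T> descriptors = Collections.emptySet();

  /** Called by a periodic refresh task (script-based, config-based, ...). */
  protected void setDescriptors(Set<T> latest) {
    this.descriptors = latest;
  }

  @Override
  public Set<T> getDescriptors() {
    return descriptors;
  }
}

/** Labels become one thin concrete type; attribute providers another. */
class ConfigNodeLabelsProvider extends AbstractNodeDescriptorsProvider<String> { }
{code}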






[jira] [Updated] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7757:
--
Attachment: YARN-7757-YARN-3409.002.patch

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> YARN-7757-YARN-3409.002.patch, nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and {{AbstractNodeLabelsProvider}}
> to be more generic, so that node attribute providers can reuse these
> interfaces/abstract classes.






[jira] [Comment Edited] (YARN-6856) Support CLI for Node Attributes Mapping

2018-01-16 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328296#comment-16328296
 ] 

Sunil G edited comment on YARN-6856 at 1/17/18 5:58 AM:


Seems it worked. [~Naganarasimha], could you please fix the other Jenkins issues?

Also, [~cheersyang] has comments.


was (Author: sunilg):
Seems it worked. [~Naganarasimha], could you please fix the other Jenkins issues?

> Support CLI for Node Attributes Mapping
> ---
>
> Key: YARN-6856
> URL: https://issues.apache.org/jira/browse/YARN-6856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6856-YARN-3409.001.patch, 
> YARN-6856-YARN-3409.002.patch, YARN-6856-YARN-3409.004.patch, 
> YARN-6856-yarn-3409.003.patch, YARN-6856-yarn-3409.004.patch
>
>
> This focuses on the new CLI for the mapping of Node Attributes
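The actual flag names and grammar are defined by the patch; purely as an illustration, a parser for one plausible mapping shape, host:attr=value[,attr=value...]:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

class NodeAttributesCliSketch {
  /** Parses "host:attr1=v1,attr2=v2" into host -> (attr -> value). */
  static Map<String, Map<String, String>> parseMapping(String arg) {
    int colon = arg.indexOf(':');
    if (colon <= 0) {
      throw new IllegalArgumentException("expected host:attr=value[,...]: " + arg);
    }
    Map<String, String> attrs = new LinkedHashMap<>();
    for (String pair : arg.substring(colon + 1).split(",")) {
      String[] kv = pair.split("=", 2);
      if (kv.length != 2 || kv[0].isEmpty()) {
        throw new IllegalArgumentException("bad attribute pair: " + pair);
      }
      attrs.put(kv[0], kv[1]);
    }
    Map<String, Map<String, String>> mapping = new LinkedHashMap<>();
    mapping.put(arg.substring(0, colon), attrs);
    return mapping;
  }
}
{code}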






[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2018-01-16 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328296#comment-16328296
 ] 

Sunil G commented on YARN-6856:
---

Seems it worked. [~Naganarasimha], could you please fix the other Jenkins issues?

> Support CLI for Node Attributes Mapping
> ---
>
> Key: YARN-6856
> URL: https://issues.apache.org/jira/browse/YARN-6856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6856-YARN-3409.001.patch, 
> YARN-6856-YARN-3409.002.patch, YARN-6856-YARN-3409.004.patch, 
> YARN-6856-yarn-3409.003.patch, YARN-6856-yarn-3409.004.patch
>
>
> This focuses on the new CLI for the mapping of Node Attributes






[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328292#comment-16328292
 ] 

genericqa commented on YARN-6856:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 48m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
40s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
7s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 24 new + 26 unchanged 
- 0 fixed = 50 total (was 26) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
22s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 25s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
48s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}279m 33s{color} | 
{color:black} {color} |
\\
\\

[jira] [Assigned] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-3879:


Assignee: (was: Tsuyoshi Ozawa)

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328275#comment-16328275
 ] 

genericqa commented on YARN-7757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 71m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 51m 
31s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
52s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3409 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 17 new + 306 unchanged - 14 fixed = 323 total (was 320) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 34s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
44s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Should 
org.apache.hadoop.yarn.server.nodemanager.nodelabels.ScriptBasedNodeAttributesProvider$NodeAttributeScriptRunner
 be a _static_ inner class?  At ScriptBasedNodeAttributesProvider.java:inner 
class?  At ScriptBasedNodeAttributesProvider.java:[lines 72-103] |
|  |  Should 
org.apache.hadoop.yarn.server.nodemanager.nodelabels.ScriptBasedNodeLabelsProvider$NodeLabelScriptRunner
 be a _static_ inner class?  At ScriptBasedNodeLabelsProvider.java:inner 

[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328255#comment-16328255
 ] 

genericqa commented on YARN-7740:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 21m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 182 unchanged - 2 fixed = 183 total (was 184) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
11s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7740 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906317/YARN-7740.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 15dc71f9c55d 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 41049ba |
| maven | version: Apache 

[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328238#comment-16328238
 ] 

Abhishek Modi commented on YARN-3879:
-

Thanks, [~vrushalic]. I don't have permission to assign it to myself or attach 
a patch. Could you please assign it to me? I have a patch ready and can attach 
it once it's assigned to me.

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328226#comment-16328226
 ] 

Konstantinos Karanasos commented on YARN-6619:
--

Agreed with deciding it later.

I think we agreed in our last meeting that PlacementConstraints that are 
defined in the SchedulingRequest will have as source tags the tags of that 
SchedulingRequest. So there should be no cases of PlacementConstraints 
associated with empty source tags.
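
For illustration, a minimal sketch of a SchedulingRequest whose allocation tags 
double as the source tags of its constraint (the builder and factory names 
follow the YARN-6592 API under review, so treat them as assumptions):
{code}
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

// The "hbase-rs" allocation tags below are both the tags of the request
// and, per the agreement above, the source tags of its constraint.
SchedulingRequest req = SchedulingRequest.newBuilder()
    .allocationRequestId(1L)
    .priority(Priority.newInstance(0))
    .allocationTags(Collections.singleton("hbase-rs"))
    .placementConstraintExpression(
        PlacementConstraints.targetNotIn(PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets.allocationTag("hbase-rs"))
            .build())
    .resourceSizing(
        ResourceSizing.newInstance(1, Resource.newInstance(1024, 1)))
    .build();
{code}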

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328212#comment-16328212
 ] 

genericqa commented on YARN-7758:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
31s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906290/YARN-7758.002.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 84085b2320c4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3bd9ea6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19267/testReport/ |
| Max. process+thread count | 456 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19267/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain characters a-z 0-9 and _- (underscore and 
> dash)

[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328208#comment-16328208
 ] 

Arun Suresh commented on YARN-6619:
---

bq. could you update the javadoc to mark this API as unstable?
Ah, updated patch (v004) with the unstable annotation.

bq. but in my mind, a SchedulingRequest without a PlacementConstraint should be 
treated the same as a ResourceRequest with resourceName = *.
You mean SchedulingRequests without PlacementConstraints OR tags should be 
treated the same as existing * ResourceRequests. But agreed, let's decide that 
later.
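
For reference, a generic illustration of the unstable annotation in Hadoop (the 
exact signature below is an assumption based on this thread, not the v004 hunk):
{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Marks the new register variant as an evolving API that may change.
@InterfaceAudience.Public
@InterfaceStability.Unstable
public abstract RegisterApplicationMasterResponse registerApplicationMaster(
    String appHostName, int appHostPort, String appTrackingUrl,
    Map<Set<String>, PlacementConstraint> placementConstraints)
    throws YarnException, IOException;
{code}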

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6619:
--
Attachment: YARN-6619-YARN-6592.004.patch

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch, 
> YARN-6619-YARN-6592.004.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328202#comment-16328202
 ] 

Wangda Tan commented on YARN-6619:
--

[~asuresh], could you update the javadoc to mark this API as unstable?
{quote}If there are no tags in the scheduling request, then the scheduler 
should expect a Placement Constraint, which applies to that specific scheduling 
request; that check, I guess, is done in YARN-6599.
{quote}
I'm not sure if I verified this, but in my mind, a SchedulingRequest without a 
PlacementConstraint should be treated the same as a ResourceRequest with 
resourceName = *. I'm not sure whether we should associate an empty or default 
tag with a container that has a null allocation tag; maybe we shouldn't. Since 
this isn't related to this JIRA, let's decide it later.

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328200#comment-16328200
 ] 

genericqa commented on YARN-7740:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 1 new + 40 unchanged - 2 fixed = 41 total (was 42) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 40s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7740 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906308/YARN-7740.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 525439495f66 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3bd9ea6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19266/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19266/testReport/ |
| Max. 

[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328197#comment-16328197
 ] 

Hudson commented on YARN-7758:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #13506 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13506/])
YARN-7758. Add an additional check to the validity of container and (szegedim: 
rev 41049ba5d129f0fd0953ed8fdeb12635f7546bb2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c


> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain characters a-z 0-9 and _- (underscore and 
> dash)
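
The committed check itself lives in the native container-executor.c; below is a 
rough Java rendering of the same whitelist, for illustration only (not the 
actual patch code):
{code}
// Accept only ids made of [a-z0-9_-], as YARN application and
// container ids are (e.g. container_1516072345123_0001_01_000001).
static boolean isValidYarnId(String id) {
  if (id == null || id.isEmpty()) {
    return false;
  }
  for (char c : id.toCharArray()) {
    boolean ok = (c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')
        || c == '_' || c == '-';
    if (!ok) {
      return false;
    }
  }
  return true;
}
{code}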



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7756) AMRMProxyService can't enable 'hadoop.security.authorization'

2018-01-16 Thread leiqiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leiqiang updated YARN-7756:
---
Attachment: YARN-7756.v0.patch

> AMRMProxyService can't enable 'hadoop.security.authorization'
> --
>
> Key: YARN-7756
> URL: https://issues.apache.org/jira/browse/YARN-7756
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0
>Reporter: leiqiang
>Priority: Major
> Attachments: YARN-7756.v0.patch
>
>
> after setting hadoop.security.authorization=true, starting the 
> AMRMProxyService fails with the following error:
> {quote}org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
>  failed in state STARTED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> org.apache.hadoop.security.authorize.AuthorizationException: Protocol 
> interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB is not known.
>  org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> org.apache.hadoop.security.authorize.AuthorizationException: Protocol 
> interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB is not known.
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:177)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:121)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:250)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:844)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>  at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1114)
>  at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1529)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1803)
>  at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1525)
>  at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1458)
>  Caused by: org.apache.hadoop.security.authorize.AuthorizationException: 
> Protocol interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB is 
> not known.
>  at sun.reflect.GeneratedConstructorAccessor14.newInstance(Unknown Source)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109)
>  at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy36.registerApplicationMaster(Unknown Source)
>  at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:161)
>  ... 14 more
>  Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Protocol interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB is 
> not known.
>  at org.apache.hadoop.ipc.Client.call(Client.java:1476)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1407)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>  at com.sun.proxy.$Proxy35.registerApplicationMaster(Unknown Source)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:107)
>  ... 21 more
> {quote}
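
The usual YARN-side fix pattern for this class of error is to register a 
PolicyProvider that knows ApplicationMasterProtocolPB when service-level 
authorization is on. Whether the attached v0 patch does exactly this for the 
AMRMProxyService is not confirmed here, so treat this sketch as an assumption:
{code}
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
import org.apache.hadoop.yarn.server.resourcemanager.security.authorize.RMPolicyProvider;

// server is the AMRMProxy's org.apache.hadoop.ipc.Server handle.
if (conf.getBoolean(
    CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, false)) {
  server.refreshServiceAcl(conf, RMPolicyProvider.getInstance());
}
{code}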



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328195#comment-16328195
 ] 

Arun Suresh commented on YARN-6619:
---

Updated the patch (v003) based on [~kkaranasos]'s and [~leftnoteasy]'s 
suggestions:
* Removed the {{addPlacementConstraint}} method and included it as an optional 
parameter in the register call.
* As per wangda's suggestion, I am now just updating the numAllocations instead 
of adding.
* Moved the scheduler matching check to a separate function, and removed the 
resource sizing equality check.

bq. If there are no tags, do we still associate a constraint with an empty tag?
If there are no tags in the scheduling request, then the scheduler should 
expect a Placement Constraint, which applies to that specific scheduling 
request; that check, I guess, is done in YARN-6599.
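
A hedged usage sketch of registering constraints at AM registration, per the 
v003 change (the exact method shape is an assumption drawn from this thread):
{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

// Map from source-tag sets to constraints, handed over at register time.
Map<Set<String>, PlacementConstraint> constraints = new HashMap<>();
constraints.put(Collections.singleton("hbase-rs"),
    PlacementConstraints.targetNotIn(PlacementConstraints.NODE,
        PlacementConstraints.PlacementTargets.allocationTag("hbase-rs"))
        .build());
amrmClient.registerApplicationMaster("am-host", 8030, "", constraints);
{code}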

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6619:
--
Attachment: YARN-6619-YARN-6592.003.patch

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch, YARN-6619-YARN-6592.003.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328172#comment-16328172
 ] 

Jian He commented on YARN-7740:
---

Added one more fix: if the retrieved ips string is empty, it should not be set 
into the ips list.
{code}
-  status.setIPs(ips == null ? null : Arrays.asList(ips.split(",")));
+  status.setIPs(StringUtils.isEmpty(ips) ? null :
+  Arrays.asList(ips.split(",")));
{code}
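
A minimal sketch of why the guard matters (a standalone illustration, assuming 
commons-lang3's StringUtils as used elsewhere in Hadoop):
{code}
import java.util.Arrays;
import java.util.List;

import org.apache.commons.lang3.StringUtils;

String ips = "";
// Without the guard, "".split(",") yields a one-element array [""],
// so the status would report a single bogus empty IP.
List<String> unguarded = Arrays.asList(ips.split(","));
System.out.println(unguarded.size());   // 1

// With the guard, an empty ips string maps to null: no IPs reported.
List<String> guarded =
    StringUtils.isEmpty(ips) ? null : Arrays.asList(ips.split(","));
System.out.println(guarded);            // null
{code}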

> Fix logging for destroy yarn service cli when app does not exist and some 
> minor bugs
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run "yarn app -destroy" cli with a application name which does not exist.
> Here, The cli should return a message " Application does not exists" instead 
> it is returning a message "Destroyed cluster httpd-xxx"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328173#comment-16328173
 ] 

Miklos Szegedi commented on YARN-7758:
--

Committed to trunk. Thank you for the contribution [~yufeigu]!

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain characters a-z 0-9 and _- (underscore and 
> dash)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist and some minor bugs

2018-01-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Summary: Fix logging for destroy yarn service cli when app does not exist 
and some minor bugs  (was: Fix logging for destroy yarn service cli when app 
does not exist)

> Fix logging for destroy yarn service cli when app does not exist and some 
> minor bugs
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run "yarn app -destroy" cli with a application name which does not exist.
> Here, The cli should return a message " Application does not exists" instead 
> it is returning a message "Destroyed cluster httpd-xxx"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.05.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.05.patch, 
> YARN-7740.1.patch, YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run "yarn app -destroy" cli with a application name which does not exist.
> Here, The cli should return a message " Application does not exists" instead 
> it is returning a message "Destroyed cluster httpd-xxx"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7655) avoid AM preemption caused by RRs for specific nodes or racks

2018-01-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328166#comment-16328166
 ] 

Yufei Gu commented on YARN-7655:


Do several AMs end up running on a limited number of NMs? If the answer is yes, 
my guess is that Spark's dynamic allocation and the {{assignmultiple}} setting 
of FS work together to cause that. With Spark's dynamic allocation, an app may 
only have one AM container for a while before it requests task containers 
(that's just my guess; I'm not familiar with Spark). Several AMs may then be 
allocated onto one or very few NMs while assignmultiple is on. This also aligns 
with the -1 maxAMShare in your settings; see the sketch below.

If so, YARN-1042 (anti-affinity) would help, but it isn't done yet. 
MAPREDUCE-6871 provides a hacky way to do this as well: you can specify 
nodes/racks for AMs, so you can spread out AMs. This feature is not only for 
MR; the Spark AM has a similar implementation. 
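
For concreteness, the two knobs discussed above as a hedged Java sketch (values 
are illustrative, not recommendations):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

Configuration conf = new YarnConfiguration();
// With assignmultiple off, FS hands out at most one container per node
// per heartbeat, which tends to spread AM containers across NMs.
conf.setBoolean("yarn.scheduler.fair.assignmultiple", false);
// maxAMShare is a per-queue element in fair-scheduler.xml; -1.0 (as in
// the reporter's setup) disables the cap on AM resource share, e.g.
//   <queue name="default"><maxAMShare>-1.0</maxAMShare></queue>
{code}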

> avoid AM preemption caused by RRs for specific nodes or racks
> -
>
> Key: YARN-7655
> URL: https://issues.apache.org/jira/browse/YARN-7655
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Major
> Attachments: YARN-7655-001.patch
>
>
> We frequently see AM preemptions when 
> {{starvedApp.getStarvedResourceRequests()}} in 
> {{FSPreemptionThread#identifyContainersToPreempt}} includes one or more RRs 
> that request containers on a specific node. Since this causes us to only 
> consider one node to preempt containers on, the really good work that was 
> done in YARN-5830 doesn't save us from AM preemption. Even though there might 
> be multiple nodes on which we could preempt enough non-AM containers to 
> satisfy the app's starvation, we often wind up preempting one or more AM 
> containers on the single node that we're considering.
> A proposed solution is that if we're going to preempt one or more AM 
> containers for an RR that specifies a node or rack, then we should instead 
> expand the search space to consider all nodes. That way we take advantage of 
> YARN-5830, and only preempt AMs if there's no alternative. I've attached a 
> patch with an initial implementation of this. We've been running it on a few 
> clusters, and have seen AM preemptions drop from double-digit occurrences on 
> many days to zero.
> Of course, the tradeoff is some loss of locality, since the starved app is 
> less likely to be allocated resources at the most specific locality level 
> that it asked for. My opinion is that this tradeoff is worth it, but 
> interested to hear what others think as well.
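
A hedged pseudocode rendering of the proposed behavior (the helper names here 
are hypothetical, not the actual FSPreemptionThread code):
{code}
// If preempting for a node-/rack-specific RR would kill an AM, widen
// the search to all nodes before accepting any AM preemption.
List<RMContainer> plan = selectContainers(rr, nodesMatching(rr));
if (containsAmContainer(plan)) {
  List<RMContainer> widened = selectContainers(rr, allNodes());
  if (!containsAmContainer(widened)) {
    plan = widened;  // trade locality for keeping AMs alive
  }
}
{code}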



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328131#comment-16328131
 ] 

genericqa commented on YARN-7758:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
32s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906290/YARN-7758.002.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 79de3e0b76fe 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3bd9ea6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/19262/testReport/ |
| Max. process+thread count | 437 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/19262/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain characters a-z 0-9 and _- (underscore and 
> dash)

[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328112#comment-16328112
 ] 

Konstantinos Karanasos commented on YARN-6619:
--

If YARN-6599 does not need it (which, thinking more about it, it should not 
need it), then I would prefer to remove it as well. 

Given that we want constraints to be added only via {{registerAM}}, removing it 
will avoid misuse (such as calling it after {{registerAM}}). We can add it back 
when needed.

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328105#comment-16328105
 ] 

Wangda Tan commented on YARN-6619:
--

[~kkaranasos], is this supported?
{quote}This way we can reserve the {{addPlacementConstraint}} API for cases 
where a client wants to update the constraints.
{quote}
IIUC, the only way to update a PlacementConstraint, as well as the pending 
ResourceSizing, is to send a new SchedulingRequest, correct? If there's no 
separate updatePlacementConstraint API, I suggest removing 
addPlacementConstraint from AMRMClient for now and adding it back when needed. 

> AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6856) Support CLI for Node Attributes Mapping

2018-01-16 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328087#comment-16328087
 ] 

Naganarasimha G R commented on YARN-6856:
-

Triggered manual build 19263.

> Support CLI for Node Attributes Mapping
> ---
>
> Key: YARN-6856
> URL: https://issues.apache.org/jira/browse/YARN-6856
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6856-YARN-3409.001.patch, 
> YARN-6856-YARN-3409.002.patch, YARN-6856-YARN-3409.004.patch, 
> YARN-6856-yarn-3409.003.patch, YARN-6856-yarn-3409.004.patch
>
>
> This focuses on the new CLI for the mapping of Node Attributes



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchcedulingRequest objects

2018-01-16 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328069#comment-16328069
 ] 

Konstantinos Karanasos commented on YARN-6619:
--

Thanks for the patch, [~asuresh]. A couple of comments:
 * Let's add a new {{registerAM}} API to the {{AMRMClient}} that takes a map 
from tag sets to constraints. This way we can reserve the 
{{addPlacementConstraint}} API for cases where a client wants to update the 
constraints. By extending the {{registerAM}} method, we don't need to worry 
about the client calling {{addPlacementConstraint}} before calling 
{{registerAM}}.
 * I think it's more precise to keep the {{addConstraint}} method at its 
current name (no "default"). If a scheduling request wants to override those 
constraints, that's fine, but I wouldn't call them default.
 * If there are no tags, do we still associate a constraint with an empty tag?
 * {{addToOutstandingSchedulingRequests()}}: let's put the long if statement in 
a new method.

Nit: in AMRMClientAsync, "called when the RM has rejected a Scheduling 
Requests" (let's remove either the "a" or the plural).

> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328065#comment-16328065
 ] 

Miklos Szegedi commented on YARN-7758:
--

+1 pending Jenkins. Thank you for the contribution, [~yufeigu].

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-16 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328064#comment-16328064
 ] 

Jian He commented on YARN-7740:
---

I made one more change to return the status in JSON format rather than the 
object.toString() format:

{code}

--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -931,7 +931,7 @@ public String getStatusString(String appIdOrName)
 } catch (IllegalArgumentException e) {
 // not appId format, it could be appName.
 Service status = getStatus(appIdOrName);
- return status.toString();
+ return ServiceApiUtil.jsonSerDeser.toJson(status);
 }

{code}
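(For context, {{ServiceApiUtil.jsonSerDeser}} is a thin Jackson wrapper, so the 
effect of the change is roughly the following stand-alone sketch, assuming 
{{status}} is the Jackson-serializable {{Service}} bean from the diff above:)

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;

// Structured, pretty-printed JSON instead of Service.toString().
ObjectMapper mapper = new ObjectMapper();
String json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(status);
System.out.println(json);
{code}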

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.1.patch, 
> YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run "yarn app -destroy" cli with a application name which does not exist.
> Here, The cli should return a message " Application does not exists" instead 
> it is returning a message "Destroyed cluster httpd-xxx"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7740) Fix logging for destroy yarn service cli when app does not exist

2018-01-16 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7740:
--
Attachment: YARN-7740.04.patch

> Fix logging for destroy yarn service cli when app does not exist
> 
>
> Key: YARN-7740
> URL: https://issues.apache.org/jira/browse/YARN-7740
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Yesha Vora
>Assignee: Jian He
>Priority: Major
> Attachments: YARN-7740.04.patch, YARN-7740.1.patch, 
> YARN-7740.2.patch, YARN-7740.3.patch
>
>
> Scenario:
> Run "yarn app -destroy" cli with a application name which does not exist.
> Here, The cli should return a message " Application does not exists" instead 
> it is returning a message "Destroyed cluster httpd-xxx"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5473) Expose per-application over-allocation info in the Resource Manager

2018-01-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328063#comment-16328063
 ] 

Haibo Chen commented on YARN-5473:
--

The unit test failure is MAPREDUCE-7020. Findbug is unrelated.

> Expose per-application over-allocation info in the Resource Manager
> ---
>
> Key: YARN-5473
> URL: https://issues.apache.org/jira/browse/YARN-5473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-5473-YARN-1011.00.patch, 
> YARN-5473-YARN-1011.01.patch, YARN-5473-YARN-1011.prelim.patch
>
>
> When enabling over-allocation of nodes, the resources in the cluster change. 
> We need to surface this information for users to understand these changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328058#comment-16328058
 ] 

Wangda Tan commented on YARN-6619:
--

After a quick discussion with Arun, we will keep the main logic of the 
AMRMClientImpl changes, to make sure AMRMClient can resend estimated pending 
requests to the RM on RM failover.

The following changes are needed:

- We don't need to increase #pending_num_allocation for a request with the same 
schedulerRequestKey. Instead, we should replace #pending-ask with the new 
number, and there is no need to check resourceSizing. A rough sketch of the 
intended bookkeeping follows below.

> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328037#comment-16328037
 ] 

Weiwei Yang commented on YARN-7763:
---

Thanks [~leftnoteasy], [~asuresh], happy to help. I will look into this later 
today.

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6619) AMRMClient Changes to use the PlacementConstraint and SchedulingRequest objects

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328034#comment-16328034
 ] 

Wangda Tan commented on YARN-6619:
--

1) AMRMClient.java

1.1
addPlacementConstraint: maybe choose a better name, such as 
addDefaultPlacementConstraintMapping(...). The existing name is easily confused 
with addSchedulingRequests, since PlacementConstraint is a part of 
SchedulingRequest.

1.2
Mark both methods as unstable.

1.3
The same applies to the AMRMClientAsync.java changes.

2) AMRMClientImpl.java

I found the implementation uses a similar approach to ResourceRequest: it keeps 
a locally cached pending request and deducts from it on container allocation, 
etc. We know the downside of this approach is that it keeps local state, and it 
could sometimes receive extra containers from the RM.

I propose simplifying this approach (see the sketch below):
- Don't cache local requests; all requests are cleared after being sent to the RM.
- Because of the above, nothing needs to be done after a container is received.
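(Something like the following; the names are hypothetical and this is not part 
of the patch:)

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.yarn.api.records.SchedulingRequest;

// Queue requests, flush them on the next allocate() heartbeat, and keep no
// local pending state afterwards.
private final List<SchedulingRequest> batch = new ArrayList<>();

synchronized void addSchedulingRequests(Collection<SchedulingRequest> reqs) {
  batch.addAll(reqs);
}

synchronized List<SchedulingRequest> drainForAllocate() {
  List<SchedulingRequest> toSend = new ArrayList<>(batch);
  batch.clear();  // nothing cached, so nothing to reconcile on container receipt
  return toSend;
}
{code}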

3) Are the test case failures related?

> AMRMClient Changes to use the PlacementConstraint and SchedulingRequest 
> objects
> 
>
> Key: YARN-6619
> URL: https://issues.apache.org/jira/browse/YARN-6619
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Major
> Attachments: YARN-6619-YARN-6592.001.patch, 
> YARN-6619-YARN-6592.002.patch
>
>
> Opening this JIRA to track changes needed in the AMRMClient to incorporate 
> the PlacementConstraint and SchedulingRequest objects



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-7763:
-

Assignee: Weiwei Yang  (was: Arun Suresh)

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Weiwei Yang
>Priority: Blocker
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328033#comment-16328033
 ] 

Arun Suresh commented on YARN-7763:
---

[~cheersyang] has expressed interest, so I am assigning it to him to spread the 
load.

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Arun Suresh
>Priority: Blocker
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328024#comment-16328024
 ] 

Wangda Tan commented on YARN-7763:
--

[~asuresh] volunteered to take this Jira :)

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Blocker
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reassigned YARN-7763:


Assignee: Arun Suresh

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Arun Suresh
>Priority: Blocker
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5473) Expose per-application over-allocation info in the Resource Manager

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328005#comment-16328005
 ] 

genericqa commented on YARN-5473:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
19s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-1011 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
45s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 8 new + 1770 unchanged 
- 20 fixed = 1778 total (was 1790) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 33s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
19s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
49s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m 
25s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m 
21s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Updated] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7757:
--
Attachment: YARN-7757-YARN-3409.001.patch

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and {{AbstractNodeLabelsProvider}} 
> to be more generic, so node attribute providers can reuse these 
> interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7757) Refactor NodeLabelsProvider to be more generic and reusable for node attributes providers

2018-01-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-7757:
--
Attachment: (was: YARN-7757-YARN-3409.001.patch)

> Refactor NodeLabelsProvider to be more generic and reusable for node 
> attributes providers
> -
>
> Key: YARN-7757
> URL: https://issues.apache.org/jira/browse/YARN-7757
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-7757-YARN-3409.001.patch, 
> nodeLabelsProvider_refactor_class_hierarchy.pdf
>
>
> Propose to refactor {{NodeLabelsProvider}} and {{AbstractNodeLabelsProvider}} 
> to be more generic, so node attribute providers can reuse these 
> interfaces/abstract classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7763:
--
Description: 
As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
PlacementConstraintUtil method, and both the processor and scheduler 
implementations will use the same logic. The logic looks like:
{code:java}
PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
if (pc == null) {
  pc = PlacementConstraintMgr.getPlacementConstraint(
      schedulingRequest.getAllocationTags());
}

// Do placement constraint match ...{code}

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Blocker
>
> As I mentioned on YARN-6599, we will add SchedulingRequest as part of the 
> PlacementConstraintUtil method, and both the processor and scheduler 
> implementations will use the same logic. The logic looks like:
> {code:java}
> PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
> if (pc == null) {
>   pc = PlacementConstraintMgr.getPlacementConstraint(
>       schedulingRequest.getAllocationTags());
> }
> // Do placement constraint match ...{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-7763:
--
Environment: (was: As I mentioned on YARN-6599, we will add 
SchedulingRequest as part of the PlacementConstraintUtil method and both of 
processor/scheduler implementation will use the same logic. The logic looks 
like:
{code:java}
PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
If (pc == null) {
  pc = 
PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
}

// Do placement constraint match ...{code})

> Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can 
> use the same API and implementation
> -
>
> Key: YARN-7763
> URL: https://issues.apache.org/jira/browse/YARN-7763
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7763) Refactoring PlacementConstraintUtils APIs so PlacementProcessor/Scheduler can use the same API and implementation

2018-01-16 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7763:


 Summary: Refactoring PlacementConstraintUtils APIs so 
PlacementProcessor/Scheduler can use the same API and implementation
 Key: YARN-7763
 URL: https://issues.apache.org/jira/browse/YARN-7763
 Project: Hadoop YARN
  Issue Type: Sub-task
 Environment: As I mentioned on YARN-6599, we will add 
SchedulingRequest as part of the PlacementConstraintUtil method and both of 
processor/scheduler implementation will use the same logic. The logic looks 
like:
{code:java}
PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
If (pc == null) {
  pc = 
PlacementConstraintMgr.getPlacementConstraint(schedulingRequest.getAllocationTags());
}

// Do placement constraint match ...{code}
Reporter: Wangda Tan






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max_allocation calculation

2018-01-16 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327989#comment-16327989
 ] 

Grant Sohn commented on YARN-7528:
--

Yes.

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max_allocation calculation
> 
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327987#comment-16327987
 ] 

Wangda Tan commented on YARN-6599:
--

The ver.11 patch has some issues; attached the ver.12 patch.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.012.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6599) Support rich placement constraints in scheduler

2018-01-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6599:
-
Attachment: YARN-6599-YARN-6592.012.patch

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.012.patch, YARN-6599-YARN-6592.wip.002.patch, 
> YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327977#comment-16327977
 ] 

genericqa commented on YARN-6599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 23 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  4m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
43s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
29s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
59s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
17s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
55s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 55s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 131 new + 1620 
unchanged - 19 fixed = 1751 total (was 1639) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
39s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} 

[jira] [Commented] (YARN-6486) FairScheduler: Deprecate continuous scheduling

2018-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327978#comment-16327978
 ] 

Hudson commented on YARN-6486:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #13504 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13504/])
YARN-6486. FairScheduler: Deprecate continuous scheduling. (Contributed (yufei: 
rev 370f1c6283813dc1c7d001f44930e3c79c140c54)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSOpDurations.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java


> FairScheduler: Deprecate continuous scheduling
> --
>
> Key: YARN-6486
> URL: https://issues.apache.org/jira/browse/YARN-6486
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6486.001.patch, YARN-6486.002.patch, 
> YARN-6486.003.patch, YARN-6486.004.patch, YARN-6486.005.patch, 
> YARN-6486.006.patch
>
>
> Mark continuous scheduling as deprecated in 2.9 and remove the code in 3.0. 
> Removing continuous scheduling from the code will be tracked as a separate JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max_allocation calculation

2018-01-16 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327964#comment-16327964
 ] 

Szilard Nemeth commented on YARN-7528:
--

[~grant.sohn] Do you mean defining a resource with the unit "m" in 
resource-types.xml and then requesting that resource from a job without units? 
Thanks!
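(To make the overflow concrete, a plain-Java illustration of the conversion 
step; this is not the actual YARN code path:)

{code:java}
// With no unit-aware cap configured, the max-allocation default is
// Long.MAX_VALUE in the base unit; scaling it down to milli-units wraps.
long maxAllocation = Long.MAX_VALUE;
long inMilliUnits = maxAllocation * 1000L;  // silent overflow
System.out.println(inMilliUnits);           // prints -1000
{code}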

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max_allocation calculation
> 
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6486) FairScheduler: Deprecate continuous scheduling

2018-01-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327945#comment-16327945
 ] 

Yufei Gu commented on YARN-6486:


Committed to trunk. Thanks [~wilfreds] for the patch.

> FairScheduler: Deprecate continuous scheduling
> --
>
> Key: YARN-6486
> URL: https://issues.apache.org/jira/browse/YARN-6486
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-6486.001.patch, YARN-6486.002.patch, 
> YARN-6486.003.patch, YARN-6486.004.patch, YARN-6486.005.patch, 
> YARN-6486.006.patch
>
>
> Mark continuous scheduling as deprecated in 2.9 and remove the code in 3.0. 
> Removing continuous scheduling from the code will be tracked as a separate JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327936#comment-16327936
 ] 

Yufei Gu commented on YARN-7758:


Uploaded patch v2. Fixed a potential memory leak in {{create_log_dir()}}.

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Attachment: YARN-7758.002.patch

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch, YARN-7758.002.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3660) [GPG] Federation Global Policy Generator (service hook only)

2018-01-16 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-3660:
---
Attachment: YARN-3660-YARN-7402.v3.patch

> [GPG] Federation Global Policy Generator (service hook only)
> 
>
> Key: YARN-3660
> URL: https://issues.apache.org/jira/browse/YARN-3660
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Botong Huang
>Priority: Major
>  Labels: federation, gpg
> Attachments: YARN-3660-YARN-7402.v1.patch, 
> YARN-3660-YARN-7402.v2.patch, YARN-3660-YARN-7402.v3.patch
>
>
> In a federated environment, local impairments of one sub-cluster might 
> unfairly affect users/queues that are mapped to that sub-cluster. A 
> centralized component (GPG) runs out-of-band and edits the policies governing 
> how users/queues are allocated to sub-clusters. This allows us to enforce 
> global invariants (by dynamically updating locally-enforced invariants).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327909#comment-16327909
 ] 

Yufei Gu commented on YARN-7758:


Uploaded patch v1. It uses the existing container id validation function to 
check the container id; a Java rendering of the rule is sketched below.
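(For reference, assuming the whitelist from the issue description; the actual 
check lives in container-executor's C code:)

{code:java}
import java.util.regex.Pattern;

// Ids may contain only lowercase letters, digits, underscore and dash.
private static final Pattern VALID_ID = Pattern.compile("[a-z0-9_-]+");

static boolean isValidId(String id) {
  return id != null && VALID_ID.matcher(id).matches();
}
{code}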

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Affects Version/s: 2.10.0
   3.1.0

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Component/s: nodemanager

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7758:
---
Attachment: YARN-7758.001.patch

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
> Attachments: YARN-7758.001.patch
>
>
> I would make sure that they contain only the characters a-z, 0-9, and _- 
> (underscore and dash).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7762) ATS uses timeline service config to identify local hostname

2018-01-16 Thread NITHIN MAHESH (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NITHIN MAHESH updated YARN-7762:

Attachment: YARN-7762.patch

> ATS uses timeline service config to identify local hostname
> ---
>
> Key: YARN-7762
> URL: https://issues.apache.org/jira/browse/YARN-7762
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: NITHIN MAHESH
>Priority: Major
> Attachments: YARN-7762.patch
>
>
> In ApplicationHistoryServer.doSecureLogin(), the local hostname is obtained 
> by calling getBindAddress(), which returns the hostname defined by the config 
> yarn.timeline-service.address. This is a bug; doSecureLogin() should resolve 
> the local hostname directly instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7728) Expose and expand container preemptions in Capacity Scheduler queue metrics

2018-01-16 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327877#comment-16327877
 ] 

Eric Payne commented on YARN-7728:
--

Hi [~sunilg]. Did you have a chance to review my comments and the new patch? I 
would be interested in your feedback.

> Expose and expand container preemptions in Capacity Scheduler queue metrics
> ---
>
> Key: YARN-7728
> URL: https://issues.apache.org/jira/browse/YARN-7728
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 2.8.3, 3.0.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7728.001.patch, YARN-7728.002.patch
>
>
> YARN-1047 exposed queue metrics for the number of preempted containers to the 
> fair scheduler. I would like to also expose these to the capacity scheduler 
> and add metrics for the amount of lost memory seconds and vcore seconds.
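(For illustration, Hadoop queue metrics are declared via metrics2 annotations, 
so the additions could look roughly like this; the field names are 
hypothetical:)

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical QueueMetrics additions for preemption accounting.
@Metric("Aggregate memory-seconds preempted")
MutableCounterLong aggregateMemoryMBSecondsPreempted;

@Metric("Aggregate vcore-seconds preempted")
MutableCounterLong aggregateVcoreSecondsPreempted;
{code}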



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7516) Security check for untrusted docker image

2018-01-16 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327865#comment-16327865
 ] 

Eric Yang commented on YARN-7516:
-

The failed unit tests are not related to this Jira.

> Security check for untrusted docker image
> -
>
> Key: YARN-7516
> URL: https://issues.apache.org/jira/browse/YARN-7516
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7516.001.patch, YARN-7516.002.patch, 
> YARN-7516.003.patch, YARN-7516.004.patch, YARN-7516.005.patch, 
> YARN-7516.006.patch, YARN-7516.007.patch, YARN-7516.008.patch, 
> YARN-7516.009.patch
>
>
> Hadoop YARN Services can support using a private docker registry image or a 
> docker image from Docker Hub.  In the current implementation, Hadoop security 
> is enforced through username and group membership, and enforces uid:gid 
> consistency between the docker container and the distributed file system.  
> There is a cloud use case for the ability to run untrusted docker images on 
> the same cluster for testing.
> The basic requirement for an untrusted container is to ensure all kernel and 
> root privileges are dropped and that there is no interaction with the 
> distributed file system, to avoid contamination.  We can probably enforce 
> detection of untrusted docker images by checking the following:
> # If the docker image is from a public Docker Hub repository, the container 
> is automatically flagged as insecure, disk volume mounts are disabled 
> automatically, and all kernel capabilities are dropped.
> # If the docker image is from a private repository in Docker Hub, and a 
> white list allows that private repository, disk volume mounts are allowed 
> and kernel capabilities follow the allowed list.
> # If the docker image is from a private trusted registry with an image name 
> like "private.registry.local:5000/centos", and the white list allows this 
> private trusted repository, disk volume mounts are allowed and kernel 
> capabilities follow the allowed list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7762) ATS uses timeline service config to identify local hostname

2018-01-16 Thread NITHIN MAHESH (JIRA)
NITHIN MAHESH created YARN-7762:
---

 Summary: ATS uses timeline service config to identify local 
hostname
 Key: YARN-7762
 URL: https://issues.apache.org/jira/browse/YARN-7762
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: NITHIN MAHESH


In ApplicationHistoryServer.doSecureLogin(), the local hostname is obtained by 
calling getBindAddress(), which returns the hostname defined by the config 
yarn.timeline-service.address. This is a bug; doSecureLogin() should resolve 
the local hostname directly instead.
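(A minimal sketch of the suggested direction, assuming the standard 
{{SecurityUtil.login}} call that doSecureLogin already uses:)

{code:java}
import java.io.IOException;
import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Log in with the locally resolved hostname instead of the one parsed from
// the yarn.timeline-service.address bind config.
protected void doSecureLogin(Configuration conf) throws IOException {
  String hostname = InetAddress.getLocalHost().getCanonicalHostName();
  SecurityUtil.login(conf, YarnConfiguration.TIMELINE_SERVICE_KEYTAB,
      YarnConfiguration.TIMELINE_SERVICE_PRINCIPAL, hostname);
}
{code}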



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7590) Improve container-executor validation check

2018-01-16 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.branch-2.6.000.patch

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch, YARN-7590.007.patch, YARN-7590.008.patch, 
> YARN-7590.009.patch, YARN-7590.010.patch, YARN-7590.branch-2.000.patch, 
> YARN-7590.branch-2.6.000.patch, YARN-7590.branch-2.7.000.patch, 
> YARN-7590.branch-2.8.000.patch, YARN-7590.branch-2.9.000.patch
>
>
> There is minimal checking of the prefix path in container-executor.  If YARN 
> is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This will change /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller of 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.
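(To illustrate the kind of ownership check being proposed, a Java sketch of the 
logic only; the real fix belongs in container-executor's C code:)

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Refuse to act on a prefix path unless it is owned by the calling user.
static boolean ownedByCaller(String prefixPath, String callerUser)
    throws IOException {
  String owner = Files.getOwner(Paths.get(prefixPath)).getName();
  return owner.equals(callerUser);
}
{code}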



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7590) Improve container-executor validation check

2018-01-16 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7590:

Attachment: YARN-7590.branch-2.7.000.patch

> Improve container-executor validation check
> ---
>
> Key: YARN-7590
> URL: https://issues.apache.org/jira/browse/YARN-7590
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, yarn
>Affects Versions: 2.0.1-alpha, 2.2.0, 2.3.0, 2.4.0, 2.5.0, 2.6.0, 2.7.0, 
> 2.8.0, 2.8.1, 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.1
>
> Attachments: YARN-7590.001.patch, YARN-7590.002.patch, 
> YARN-7590.003.patch, YARN-7590.004.patch, YARN-7590.005.patch, 
> YARN-7590.006.patch, YARN-7590.007.patch, YARN-7590.008.patch, 
> YARN-7590.009.patch, YARN-7590.010.patch, YARN-7590.branch-2.000.patch, 
> YARN-7590.branch-2.7.000.patch, YARN-7590.branch-2.8.000.patch, 
> YARN-7590.branch-2.9.000.patch
>
>
> There is only minimal validation of the prefix path in container-executor.  If 
> YARN is compromised, an attacker can use container-executor to change the 
> ownership of system files:
> {code}
> /usr/local/hadoop/bin/container-executor spark yarn 0 etc /home/yarn/tokens 
> /home/spark / ls
> {code}
> This changes /etc to be owned by the spark user:
> {code}
> # ls -ld /etc
> drwxr-s---. 110 spark hadoop 8192 Nov 21 20:00 /etc
> {code}
> The spark user can then rewrite files under /etc to gain more access.  We can 
> improve this with additional checks in container-executor:
> # Make sure the prefix path is owned by the same user as the caller to 
> container-executor.
> # Make sure the log directory prefix is owned by the same user as the caller.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7738) CapacityScheduler: Support refresh maximum allocation for multiple resource types

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327806#comment-16327806
 ] 

Wangda Tan commented on YARN-7738:
--

[~sunilg] could you help check the latest patch? The test failure should not be 
related.

> CapacityScheduler: Support refresh maximum allocation for multiple resource 
> types
> -
>
> Key: YARN-7738
> URL: https://issues.apache.org/jira/browse/YARN-7738
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Tan, Wangda
>Priority: Blocker
> Fix For: 3.1.0, 3.0.1
>
> Attachments: YARN-7738.001.patch, YARN-7738.002.patch, 
> YARN-7738.003.patch, YARN-7738.004.patch
>
>
> Currently CapacityScheduler fails to refresh maximum allocation for multiple 
> resource types.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max_allocation calculation

2018-01-16 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327796#comment-16327796
 ] 

Grant Sohn commented on YARN-7528:
--

I believe that when the resource type unit is `m`, supplying a value without 
units triggers the overflow.

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max_allocation calculation
> 
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.
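>
> A self-contained illustration of that overflow (not the actual conversion 
> code): converting the LONG_MAX default into milli-units multiplies it by 
> 1000, which cannot fit in a long:
> {code:java}
> // The RM side has no unit configured, so max-allocation defaults to
> // Long.MAX_VALUE, while the NM registers the resource with unit "m" (milli).
> long rmMaxAllocation = Long.MAX_VALUE;
> long milliFactor = 1000L; // "" -> "m" scales the value by 1000
> // multiplyExact makes the overflow explicit; an unchecked multiplication
> // would silently wrap into a nonsense maximum allocation.
> long converted = Math.multiplyExact(rmMaxAllocation, milliFactor); // throws
> {code}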



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6486) FairScheduler: Deprecate continuous scheduling

2018-01-16 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327729#comment-16327729
 ] 

genericqa commented on YARN-6486:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 20 unchanged - 1 fixed = 20 total (was 21) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 253 unchanged - 8 fixed = 254 total (was 261) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 55s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  4s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-6486 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12906161/YARN-6486.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux df6ebcc4d76c 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5ac1099 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/19260/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit 

[jira] [Created] (YARN-7761) [UI2]Clicking 'master container log' or 'Link' next to 'log' under application's appAttempt goes to Old UI's Log link

2018-01-16 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-7761:


 Summary: [UI2]Clicking 'master container log' or 'Link' next to 
'log' under application's appAttempt goes to Old UI's Log link
 Key: YARN-7761
 URL: https://issues.apache.org/jira/browse/YARN-7761
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Sumana Sathish
Assignee: Vasudevan Skm


Clicking 'master container log' or 'Link' next to 'Log' under application's 
appAttempt goes to Old UI's Log link



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6599) Support rich placement constraints in scheduler

2018-01-16 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327670#comment-16327670
 ] 

Wangda Tan commented on YARN-6599:
--

Thanks for the comments from [~sunilg] / [~asuresh].

For the comments from [~sunilg]:
 # Removed all APP_ID_TAG_PREFIX; it is already replaced by 
allocationTagIntraApp.
 # The reason is that we want to make sure: a. since this is new code, we will 
not accept SchedulingRequest here, as it may cause damage to the RM; b. 
PlacementProcessor does the same thing with a different approach, so the user 
needs to enable one of the two for it to work.
 # Currently, intra-app anti-affinity is not mutual: if SchedulingRequest_A 
(with allocation tag x) declares anti-affinity to allocation tag y, then 
SchedulingRequest_B (with allocation tag y) must explicitly declare 
anti-affinity to allocation tag x (see the sketch after this list).
 # Since this involves PlacementProcessor, let's move the change to a different 
Jira.
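
A sketch of what point 3 means in practice, assuming the PlacementConstraints 
builder API from the YARN-6592 branch (tag names are illustrative):
{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.*;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

// SchedulingRequest_A (tag "x") avoids nodes holding containers tagged "y" ...
PlacementConstraint aAvoidsY = build(targetNotIn(NODE, allocationTag("y")));
// ... but anti-affinity is not mutual, so SchedulingRequest_B (tag "y") must
// declare the reverse constraint explicitly to stay away from "x".
PlacementConstraint bAvoidsX = build(targetNotIn(NODE, allocationTag("x")));
{code}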

For the comments from [~asuresh]:

As we discussed offline, we will add SchedulingRequest handling to the 
PlacementConstraintUtil method, and both the processor and scheduler 
implementations will use the same logic. The logic looks like:
{code:java}
PlacementConstraint pc = schedulingRequest.getPlacementConstraint();
if (pc == null) {
  pc = PlacementConstraintMgr.getPlacementConstraint(
      schedulingRequest.getAllocationTags());
}

// Do placement constraint match ...{code}
Let's move it to a separate Jira.

 

Attached ver.11 patch with the changes mentioned.

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.wip.002.patch, YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7760) [UI2]Clicking 'Master Node' or link next to 'AM Node Web UI' under application's appAttempt page goes to OLD RM UI

2018-01-16 Thread Sumana Sathish (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sumana Sathish updated YARN-7760:
-
Summary: [UI2]Clicking 'Master Node' or link next to 'AM Node Web UI' under 
application's appAttempt page goes to OLD RM UI  (was: [UI2}Clicking 'Master 
Node' or link next to 'AM Node Web UI' under application's appAttempt page goes 
to OLD RM UI)

> [UI2]Clicking 'Master Node' or link next to 'AM Node Web UI' under 
> application's appAttempt page goes to OLD RM UI
> --
>
> Key: YARN-7760
> URL: https://issues.apache.org/jira/browse/YARN-7760
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Vasudevan Skm
>Priority: Major
>
> Clicking 'Master Node' or link next to 'AM Node Web UI' under application's 
> appAttempt page goes to OLD RM UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7760) [UI2}Clicking 'Master Node' or link next to 'AM Node Web UI' under application's appAttempt page goes to OLD RM UI

2018-01-16 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-7760:


 Summary: [UI2}Clicking 'Master Node' or link next to 'AM Node Web 
UI' under application's appAttempt page goes to OLD RM UI
 Key: YARN-7760
 URL: https://issues.apache.org/jira/browse/YARN-7760
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Vasudevan Skm


Clicking 'Master Node' or link next to 'AM Node Web UI' under application's 
appAttempt page goes to OLD RM UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7759) [UI2]GPU chart shows as "Available: 0" even though GPU is available

2018-01-16 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-7759:


 Summary: [UI2]GPU chart shows as "Available: 0" even though GPU is 
available
 Key: YARN-7759
 URL: https://issues.apache.org/jira/browse/YARN-7759
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Vasudevan Skm


The GPU chart on the Node Manager page shows zero GPUs available even though 
GPUs are present. Only when we click the 'GPU Information' chart does it show 
the correct GPU information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6599) Support rich placement constraints in scheduler

2018-01-16 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6599:
-
Attachment: YARN-6599-YARN-6592.011.patch

> Support rich placement constraints in scheduler
> ---
>
> Key: YARN-6599
> URL: https://issues.apache.org/jira/browse/YARN-6599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Attachments: YARN-6599-YARN-6592.003.patch, 
> YARN-6599-YARN-6592.004.patch, YARN-6599-YARN-6592.005.patch, 
> YARN-6599-YARN-6592.006.patch, YARN-6599-YARN-6592.007.patch, 
> YARN-6599-YARN-6592.008.patch, YARN-6599-YARN-6592.009.patch, 
> YARN-6599-YARN-6592.010.patch, YARN-6599-YARN-6592.011.patch, 
> YARN-6599-YARN-6592.wip.002.patch, YARN-6599.poc.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-01-16 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327628#comment-16327628
 ] 

Vrushali C commented on YARN-3879:
--

Hi [~abmodi] 

I think you can go ahead and take up this jira. Happy to answer if you need any 
clarifications.

thanks

Vrushali 

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Major
>  Labels: YARN-5355
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7758:
-
Description: I would make sure that they contain characters a-z 0-9 and _- 
(underscore and dash)  (was: I would make sure that they contain characters a-z 
0-9 and _/ (underscore and dash))

> Add an additional check to the validity of container and application ids 
> passed to container-executor
> -
>
> Key: YARN-7758
> URL: https://issues.apache.org/jira/browse/YARN-7758
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Yufei Gu
>Priority: Major
>
> I would make sure that they contain only the characters a-z, 0-9, _ and - 
> (underscore and dash)
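>
> A minimal sketch of such a check (illustrative; the real validation would sit 
> in container-executor's argument parsing):
> {code:java}
> import java.util.regex.Pattern;
>
> // Container and application ids may contain only a-z, 0-9, underscore and dash.
> private static final Pattern VALID_ID = Pattern.compile("[a-z0-9_-]+");
>
> static boolean isValidId(String id) {
>   return id != null && VALID_ID.matcher(id).matches();
> }
> {code}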



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7758) Add an additional check to the validity of container and application ids passed to container-executor

2018-01-16 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-7758:


 Summary: Add an additional check to the validity of container and 
application ids passed to container-executor
 Key: YARN-7758
 URL: https://issues.apache.org/jira/browse/YARN-7758
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Yufei Gu


I would make sure that they contain characters a-z 0-9 and _/ (underscore and 
dash)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


