[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426494#comment-16426494
 ] 

Bibin A Chundatt commented on YARN-8117:


Apologies for not mentioning the +1 part.

[~rohithsharma] I will definitely wait to get a +1. I was expecting the same to 
happen by EOD.
Sorry for the confusion.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426495#comment-16426495
 ] 

genericqa commented on YARN-8073:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  4m  
4s{color} | {color:red} Docker failed to build yetus/hadoop:dbd69cb. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917638/YARN-8073-branch-2.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20234/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]
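
For illustration, a minimal sketch of the kind of version check the description 
asks for, assuming a hypothetical helper class: it only shows the idea of 
honoring the plural _yarn.timeline-service.versions_ key with a fallback to the 
legacy key, and is not the code from the attached patches.

{code:java}
// Illustrative sketch only (not from the attached patches): read the plural
// "versions" key and fall back to the legacy single-version key, so that a
// configured "1.5,2.0" still enables the v1.5 write path.
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

public class TimelineVersionCheck {
  static final String VERSIONS_KEY = "yarn.timeline-service.versions";
  static final String VERSION_KEY = "yarn.timeline-service.version";

  static List<String> configuredVersions(Configuration conf) {
    String versions = conf.get(VERSIONS_KEY);
    if (versions == null || versions.trim().isEmpty()) {
      versions = conf.get(VERSION_KEY, "1.0"); // legacy fallback
    }
    return Arrays.asList(versions.trim().split("\\s*,\\s*"));
  }

  static boolean v15Enabled(Configuration conf) {
    return configuredVersions(conf).contains("1.5");
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(VERSIONS_KEY, "1.5,2.0");
    System.out.println(v15Enabled(conf)); // prints true
  }
}
{code}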






[jira] [Commented] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426491#comment-16426491
 ] 

Rohith Sharma K S commented on YARN-7931:
-

nit: I see that the DomainTable class has our names. Let's make them generic, 
like user1, user2, group1, group2, instead of our names.

> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 






[jira] [Comment Edited] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426483#comment-16426483
 ] 

Rohith Sharma K S edited comment on YARN-7931 at 4/5/18 4:55 AM:
-

Overall, the patch looks good to me. 

One doubt about the domain table row key: it has only the domainId. The 
clusterId should be appended to it, right? Otherwise it won't be possible to 
track which cluster a domainId belongs to, right?



was (Author: rohithsharma):
Overall, the patch looks good to me. 

One doubt about the domain table row key: it has only the domainId. The 
clusterId should be appended to it, right? Otherwise it won't be possible to 
track which cluster a domainId belongs to, right?
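
For illustration, a minimal sketch of the row-key layout suggested above, with 
a hypothetical class name; the actual schema is whatever the attached patches 
define.

{code:java}
// Illustrative sketch only: carry the clusterId in the domain row key so the
// same domainId can be resolved per cluster, as suggested in the comment.
// A real implementation would use the timelineservice key separators/encoders
// rather than plain string concatenation.
import java.nio.charset.StandardCharsets;

public class DomainRowKeySketch {
  private final String clusterId;
  private final String domainId;

  public DomainRowKeySketch(String clusterId, String domainId) {
    this.clusterId = clusterId;
    this.domainId = domainId;
  }

  public byte[] getRowKey() {
    return (clusterId + "!" + domainId).getBytes(StandardCharsets.UTF_8);
  }
}
{code}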


> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 






[jira] [Commented] (YARN-7931) [atsv2 read acls] Include domain table creation as part of schema creator

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426483#comment-16426483
 ] 

Rohith Sharma K S commented on YARN-7931:
-

Overall, the patch looks good to me. 

One doubt about the domain table row key: it has only the domainId. The 
clusterId should be appended to it, right? Otherwise it won't be possible to 
track which cluster a domainId belongs to, right?


> [atsv2 read acls] Include domain table creation as part of schema creator
> -
>
> Key: YARN-7931
> URL: https://issues.apache.org/jira/browse/YARN-7931
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Major
> Attachments: YARN-7391.0001.patch, YARN-7391.0002.patch
>
>
>  
> Update the schema creator to create a domain table to store timeline entity 
> domain info. 






[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426482#comment-16426482
 ] 

Rohith Sharma K S commented on YARN-6936:
-

Updated the patch fixing the review comments.

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently, YARN-6734 stores entities into the sub-application table only if the 
> doAs user and the submitting user are different. This holds good for Tez-like 
> use cases. But applications whose AM runs as the submitting user, such as MR, 
> also need to store entities in the sub-application table so that they can read 
> entities without an application id. 
> This would be a point of concern at later stages when ATSv2 is deployed into 
> production. This JIRA is to revisit the decision of storing entities into the 
> sub-application table, making it driven by client-side configuration rather 
> than by user. 
>  
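
For illustration, a minimal sketch of what a client-side switch could look 
like; the property name below is hypothetical, and the real knob (if any) is 
whatever the attached patches introduce.

{code:java}
// Illustrative sketch only: gate sub-application-table writes on a client-side
// configuration flag instead of the doAs-user != submitter check. The property
// name below is hypothetical.
import org.apache.hadoop.conf.Configuration;

public class SubApplicationPublishCheck {
  static final String PUBLISH_SUB_APP_ENTITIES =
      "yarn.timeline-service.publish-sub-application-entities";

  static boolean shouldWriteToSubApplicationTable(Configuration conf) {
    return conf.getBoolean(PUBLISH_SUB_APP_ENTITIES, false);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean(PUBLISH_SUB_APP_ENTITIES, true); // e.g. set by an MR client
    System.out.println(shouldWriteToSubApplicationTable(conf)); // prints true
  }
}
{code}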






[jira] [Updated] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-04 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6936:

Attachment: YARN-6936.003.patch

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch, YARN-6936.003.patch
>
>
> Currently, YARN-6734 stores entities into the sub-application table only if the 
> doAs user and the submitting user are different. This holds good for Tez-like 
> use cases. But applications whose AM runs as the submitting user, such as MR, 
> also need to store entities in the sub-application table so that they can read 
> entities without an application id. 
> This would be a point of concern at later stages when ATSv2 is deployed into 
> production. This JIRA is to revisit the decision of storing entities into the 
> sub-application table, making it driven by client-side configuration rather 
> than by user. 
>  






[jira] [Comment Edited] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426474#comment-16426474
 ] 

Rohith Sharma K S edited comment on YARN-8117 at 4/5/18 4:38 AM:
-

[~bibinchundatt] As per the Apache bylaws, you need to wait for another committer 
to give a +1. The patch provider can't commit the patch without review by peer 
members. Please wait for peer members to review it. 


was (Author: rohithsharma):
[~bibinchundatt] As per the Apache bylaws, you need to wait for another committer 
to give a +1. The reporter can't commit the patch without review by peer 
members. Please wait for peer members to review it. 

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.






[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-04-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426475#comment-16426475
 ] 

Bibin A Chundatt commented on YARN-7905:


[~jlowe]/[~miklos.szeg...@cloudera.com]/[~rohithsharma]
Any comments from your side?

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905-001.patch, YARN-7905-002.patch, 
> YARN-7905-003.patch, YARN-7905-004.patch, YARN-7905-005.patch, 
> YARN-7905-006.patch, YARN-7905-007.patch, YARN-7905-008.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory permissions if the umask is 027 during NodeManager startup.
> For /filecache/0/200, the directory permission of /filecache/0 is 750, which 
> causes application failure.
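
For illustration, a minimal sketch of the kind of fix being discussed, using 
the standard Hadoop FileSystem API: create the public-cache parent directory 
and then set its permission explicitly, since mkdirs() alone is subject to the 
process umask (027 turns a requested 0755 into 0750). The local path is made up.

{code:java}
// Illustrative sketch only: force the expected permission on the public cache
// parent directory instead of relying on mkdirs(), which applies the umask.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PublicCacheDirSetup {
  public static void createPublicDir(FileSystem localFs, Path dir)
      throws Exception {
    FsPermission publicPerm = new FsPermission((short) 0755);
    localFs.mkdirs(dir, publicPerm);        // with umask 027 this can end up 0750
    localFs.setPermission(dir, publicPerm); // so set 0755 explicitly afterwards
  }

  public static void main(String[] args) throws Exception {
    FileSystem localFs = FileSystem.getLocal(new Configuration());
    createPublicDir(localFs, new Path("/tmp/nm-local-dir/filecache/0"));
  }
}
{code}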






[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426474#comment-16426474
 ] 

Rohith Sharma K S commented on YARN-8117:
-

[~bibinchundatt] As per the Apache bylaws, you need to wait for another committer 
to give a +1. The reporter can't commit the patch without review by peer 
members. Please wait for peer members to review it. 

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.






[jira] [Comment Edited] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426467#comment-16426467
 ] 

Bibin A Chundatt edited comment on YARN-8117 at 4/5/18 4:25 AM:


Will commit by EOD if there are no objections.


was (Author: bibinchundatt):
Committing shortly if there are no objections.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426470#comment-16426470
 ] 

genericqa commented on YARN-8073:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m 
41s{color} | {color:red} Docker failed to build yetus/hadoop:dbd69cb. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917638/YARN-8073-branch-2.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20232/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]






[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426467#comment-16426467
 ] 

Bibin A Chundatt commented on YARN-8117:


Committing shortly if there are no objections.

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426468#comment-16426468
 ] 

genericqa commented on YARN-8073:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  3m  
5s{color} | {color:red} Docker failed to build yetus/hadoop:dbd69cb. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8073 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917638/YARN-8073-branch-2.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20231/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]






[jira] [Commented] (YARN-8048) Support auto-spawning of admin configured services during bootstrap of rm/apiserver

2018-04-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426462#comment-16426462
 ] 

Wangda Tan commented on YARN-8048:
--

Thanks [~rohithsharma] for updating the patch.
+1 to the latest patch. I hope to get another set of eyes to take a look. 
[~gsaha], could you help to review the patch?

> Support auto-spawning of admin configured services during bootstrap of 
> rm/apiserver
> ---
>
> Key: YARN-8048
> URL: https://issues.apache.org/jira/browse/YARN-8048
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8048.001.patch, YARN-8048.002.patch, 
> YARN-8048.003.patch, YARN-8048.004.patch, YARN-8048.005.patch, 
> YARN-8048.006.patch
>
>
> The goal is to support auto-spawning of admin-configured services during 
> bootstrap of the ResourceManager/apiserver. 
> *Requirement:* Some services might be required by YARN itself, e.g. HBase for 
> ATSv2. Instead of depending on a user-installed HBase, or when the user does 
> not want to install HBase at all, running an HBase app on YARN helps ATSv2 in 
> such conditions.
> Before the YARN cluster is started, the admin configures these service specs 
> and places them in a common location in HDFS. At the time of RM/apiserver 
> bootstrap, these services will be submitted.
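
For illustration, a minimal sketch of the bootstrap flow described above; the 
spec directory path and the submitService() hook are hypothetical, and the real 
wiring is whatever the attached patches implement.

{code:java}
// Illustrative sketch only: at RM/apiserver bootstrap, scan a well-known HDFS
// directory for admin-provided service specs and submit each one. The
// directory path and submitService() hook below are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BootstrapServiceLauncher {
  public void launchAdminConfiguredServices(Configuration conf) throws Exception {
    Path specDir = new Path("/yarn/system-services"); // hypothetical location
    FileSystem fs = FileSystem.get(conf);
    if (!fs.exists(specDir)) {
      return; // nothing to bootstrap
    }
    for (FileStatus spec : fs.listStatus(specDir)) {
      if (spec.isFile()) {
        // Hand the spec to the YARN service framework (placeholder below).
        submitService(spec.getPath());
      }
    }
  }

  private void submitService(Path serviceSpec) {
    // Placeholder: a real implementation would parse the spec and submit it
    // through the YARN native-service client.
    System.out.println("Submitting service spec " + serviceSpec);
  }
}
{code}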






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426457#comment-16426457
 ] 

Rohith Sharma K S commented on YARN-8073:
-

Btw, this patch can be cherry-picked to branch-3.1 as well. 

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426454#comment-16426454
 ] 

Rohith Sharma K S commented on YARN-8073:
-

Anyway, I attached a branch-2 patch to trigger Jenkins. 

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]






[jira] [Updated] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8073:

Attachment: YARN-8073-branch-2.03.patch

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073-branch-2.03.patch, YARN-8073.01.patch, 
> YARN-8073.02.patch, YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]






[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426453#comment-16426453
 ] 

Rohith Sharma K S commented on YARN-8073:
-

bq. the patch does not apply as is to branch-2
I just tried to cherry-pick this commit and it succeeded. Could you try it 
again?

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073.01.patch, YARN-8073.02.patch, 
> YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; rather, it 
> still gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written via the v1.5 APIs even though 
> _yarn.timeline-service.versions_ has the value 1.5. 
>  # Along the same lines as the first point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled.
> cc: [~agresch]






[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-04 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426398#comment-16426398
 ] 

Eric Yang commented on YARN-7221:
-

[~jlowe] {quote}
I don't think WNOHANG is appropriate here. It will cause the parent process to 
spin continuously as long as the child is running. If we want to keep waiting 
for the child even when EINTR interrupts the wait then I think the loop should 
check for waitpid() == 0 && errno == EINTR.{quote}

The WNOHANG flag was used to track all child processes in case an interrupted 
child process has not been reported by the while loop.  If exec failed, with 
WUNTRACED the unreported child process termination may get caught and reported.  
The usage of -1 was making the assumption that there is only one child process 
for the sudo call.  I can see that this assumption could easily be flawed when 
more fork/exec calls get introduced into container-executor.  

waitpid doesn't seem to set errno to EINTR when SIGINT is sent to the child 
process.  Based on testing, wait and waitpid both produced the same result for 
EINTR.  I don't think we are any more accurate in handling the abnormal-exit 
check for the sudo command with waitpid.  SIGINT to the sudo check can only be 
issued by the root user to interrupt the check; hence, the chance of someone 
trying to bypass the sudo check using a signal doesn't exist.  Sorry, I don't 
know how to make this better.  If you have a code example of how to make this 
better, I am happy to integrate it into the patch.  At this time, patch 16 is 
more correct than patch 17.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch
>
>
> When a Docker container is running with privileges, the majority of the use 
> case is to have some program start as root and then drop privileges to another 
> user, e.g. httpd starts privileged to bind to port 80, then drops privileges 
> to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  With 
> this parameter combination, the user will not have access to become the root 
> user.  All docker exec commands will drop to the uid:gid user instead of 
> granting privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images.  As a result, we should check for sudo access 
> and then decide whether to parameterize --privileged=true OR --user=uid:gid.  
> This will avoid leading developers down the wrong path.






[jira] [Commented] (YARN-7142) Support placement policy in yarn native services

2018-04-04 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426386#comment-16426386
 ] 

Gour Saha commented on YARN-7142:
-

Upgrade is in the works and not complete yet. It has been broken into multiple 
tasks to keep the patches from becoming too big. So far only the first patch 
got committed and it might not be a good idea to merge an incomplete feature 
into branch-3.1.

Having said that, the placement policy patch totally makes sense. I am 
attaching a branch-3.1 patch (YARN-7142-branch-3.1.004.patch) for this jira 
with conflicts resolved. You can use this to commit to branch-3.1.

> Support placement policy in yarn native services
> 
>
> Key: YARN-7142
> URL: https://issues.apache.org/jira/browse/YARN-7142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7142-branch-3.1.004.patch, YARN-7142.001.patch, 
> YARN-7142.002.patch, YARN-7142.003.patch, YARN-7142.004.patch
>
>
> Placement policy exists in the API but is not implemented yet.
> I have filed YARN-8074 to move the composite constraints implementation out 
> of this phase-1 implementation of placement policy.






[jira] [Updated] (YARN-7142) Support placement policy in yarn native services

2018-04-04 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-7142:

Attachment: YARN-7142-branch-3.1.004.patch

> Support placement policy in yarn native services
> 
>
> Key: YARN-7142
> URL: https://issues.apache.org/jira/browse/YARN-7142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7142-branch-3.1.004.patch, YARN-7142.001.patch, 
> YARN-7142.002.patch, YARN-7142.003.patch, YARN-7142.004.patch
>
>
> Placement policy exists in the API but is not implemented yet.
> I have filed YARN-8074 to move the composite constraints implementation out 
> of this phase-1 implementation of placement policy.






[jira] [Commented] (YARN-8118) Better utilize gracefully decommissioning node managers

2018-04-04 Thread Karthik Palaniappan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426372#comment-16426372
 ] 

Karthik Palaniappan commented on YARN-8118:
---

CC [~djp], [~danzhi], [~rkanter], [~sunilg], who I think were the main authors 
of graceful decommissioning in YARN. (Please add anybody I missed)

> Better utilize gracefully decommissioning node managers
> ---
>
> Key: YARN-8118
> URL: https://issues.apache.org/jira/browse/YARN-8118
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.2
> Environment: * Google Compute Engine (Dataproc)
>  * Java 8
>  * Hadoop 2.8.2 using client-mode graceful decommissioning
>Reporter: Karthik Palaniappan
>Priority: Major
> Attachments: YARN-8118-branch-2.001.patch
>
>
> Proposal design doc with background + details (please comment directly on 
> doc): 
> [https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]
> tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
> to complete before shutting down, but they cannot run new containers from 
> those in-progress applications. This is wasteful, particularly in 
> environments where you are billed by resource usage (e.g. EC2).
> Proposal: YARN should schedule containers from in-progress applications on 
> DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
> applications. That will make in-progress applications complete faster and let 
> nodes decommission faster. Overall, this should be cheaper.
> I have a working patch without unit tests that's surprisingly just a few real 
> lines of code (patch 001). If folks are happy with the proposal, I'll write 
> unit tests and also write a patch targeted at trunk.
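
For illustration, a minimal sketch of the scheduling rule being proposed, with 
a hypothetical helper; the actual patch implements the check inside the 
scheduler itself.

{code:java}
// Illustrative sketch only: let a DECOMMISSIONING node keep accepting
// containers, but only for applications that are already in progress, never
// for new applications. The boolean argument stands in for state the
// scheduler already tracks.
import org.apache.hadoop.yarn.api.records.NodeState;

public class DecommissioningAllocationRule {
  public static boolean mayAllocate(NodeState nodeState, boolean appAlreadyRunning) {
    if (nodeState == NodeState.RUNNING) {
      return true;                // normal case, no restriction
    }
    if (nodeState == NodeState.DECOMMISSIONING) {
      return appAlreadyRunning;   // keep helping in-progress apps only
    }
    return false;                 // decommissioned / lost / unhealthy, etc.
  }
}
{code}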






[jira] [Updated] (YARN-8118) Better utilize gracefully decommissioning node managers

2018-04-04 Thread Karthik Palaniappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palaniappan updated YARN-8118:
--
Attachment: YARN-8118-branch-2.001.patch

> Better utilize gracefully decommissioning node managers
> ---
>
> Key: YARN-8118
> URL: https://issues.apache.org/jira/browse/YARN-8118
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.2
> Environment: * Google Compute Engine (Dataproc)
>  * Java 8
>  * Hadoop 2.8.2 using client-mode graceful decommissioning
>Reporter: Karthik Palaniappan
>Priority: Major
> Attachments: YARN-8118-branch-2.001.patch
>
>
> Proposal design doc with background + details (please comment directly on 
> doc): 
> [https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]
> tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
> to complete before shutting down, but they cannot run new containers from 
> those in-progress applications. This is wasteful, particularly in 
> environments where you are billed by resource usage (e.g. EC2).
> Proposal: YARN should schedule containers from in-progress applications on 
> DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
> applications. That will make in-progress applications complete faster and let 
> nodes decommission faster. Overall, this should be cheaper.
> I have a working patch without unit tests (will attach) that's surprisingly 
> just a few real lines of code. If folks are happy with the proposal, I'll 
> write unit tests and also write a patch targeted at trunk.






[jira] [Updated] (YARN-8118) Better utilize gracefully decommissioning node managers

2018-04-04 Thread Karthik Palaniappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palaniappan updated YARN-8118:
--
Description: 
Proposal design doc with background + details (please comment directly on doc): 
[https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]

tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
to complete before shutting down, but they cannot run new containers from those 
in-progress applications. This is wasteful, particularly in environments where 
you are billed by resource usage (e.g. EC2).

Proposal: YARN should schedule containers from in-progress applications on 
DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
applications. That will make in-progress applications complete faster and let 
nodes decommission faster. Overall, this should be cheaper.

I have a working patch without unit tests that's surprisingly just a few real 
lines of code (patch 001). If folks are happy with the proposal, I'll write 
unit tests and also write a patch targeted at trunk.

  was:
Proposal design doc with background + details (please comment directly on doc): 
[https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]

tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
to complete before shutting down, but they cannot run new containers from those 
in-progress applications. This is wasteful, particularly in environments where 
you are billed by resource usage (e.g. EC2).

Proposal: YARN should schedule containers from in-progress applications on 
DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
applications. That will make in-progress applications complete faster and let 
nodes decommission faster. Overall, this should be cheaper.

I have a working patch without unit tests (will attach) that's surprisingly 
just a few real lines of code. If folks are happy with the proposal, I'll write 
unit tests and also write a patch targeted at trunk.


> Better utilize gracefully decommissioning node managers
> ---
>
> Key: YARN-8118
> URL: https://issues.apache.org/jira/browse/YARN-8118
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.2
> Environment: * Google Compute Engine (Dataproc)
>  * Java 8
>  * Hadoop 2.8.2 using client-mode graceful decommissioning
>Reporter: Karthik Palaniappan
>Priority: Major
> Attachments: YARN-8118-branch-2.001.patch
>
>
> Proposal design doc with background + details (please comment directly on 
> doc): 
> [https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]
> tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
> to complete before shutting down, but they cannot run new containers from 
> those in-progress applications. This is wasteful, particularly in 
> environments where you are billed by resource usage (e.g. EC2).
> Proposal: YARN should schedule containers from in-progress applications on 
> DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
> applications. That will make in-progress applications complete faster and let 
> nodes decommission faster. Overall, this should be cheaper.
> I have a working patch without unit tests that's surprisingly just a few real 
> lines of code (patch 001). If folks are happy with the proposal, I'll write 
> unit tests and also write a patch targeted at trunk.






[jira] [Updated] (YARN-8118) Better utilize gracefully decommissioning node managers

2018-04-04 Thread Karthik Palaniappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palaniappan updated YARN-8118:
--
Description: 
Proposal design doc with background + details (please comment directly on doc): 
[https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]

tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
to complete before shutting down, but they cannot run new containers from those 
in-progress applications. This is wasteful, particularly in environments where 
you are billed by resource usage (e.g. EC2).

Proposal: YARN should schedule containers from in-progress applications on 
DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
applications. That will make in-progress applications complete faster and let 
nodes decommission faster. Overall, this should be cheaper.

I have a working patch without unit tests (will attach) that's surprisingly 
just a few real lines of code. If folks are happy with the proposal, I'll write 
unit tests and also write a patch targeted at trunk.

  was:
Proposal design doc with background + details: (will link – please comment on 
that doc)

tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
to complete before shutting down, but they cannot run new containers from those 
in-progress applications. This is wasteful, particularly in environments where 
you are billed by resource usage (e.g. EC2).

Proposal: YARN should schedule containers from in-progress applications on 
DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
applications. That will make in-progress applications complete faster and let 
nodes decommission faster. Overall, this should be cheaper.

I have a working patch without unit tests (will attach) that's surprisingly 
just a few real lines of code. If folks are happy with the proposal, I'll write 
unit tests and also write a patch targeted at trunk.


> Better utilize gracefully decommissioning node managers
> ---
>
> Key: YARN-8118
> URL: https://issues.apache.org/jira/browse/YARN-8118
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.2
> Environment: * Google Compute Engine (Dataproc)
>  * Java 8
>  * Hadoop 2.8.2 using client-mode graceful decommissioning
>Reporter: Karthik Palaniappan
>Priority: Major
>
> Proposal design doc with background + details (please comment directly on 
> doc): 
> [https://docs.google.com/document/d/1hF2Bod_m7rPgSXlunbWGn1cYi3-L61KvQhPlY9Jk9Hk/edit#heading=h.ab4ufqsj47b7]
> tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
> to complete before shutting down, but they cannot run new containers from 
> those in-progress applications. This is wasteful, particularly in 
> environments where you are billed by resource usage (e.g. EC2).
> Proposal: YARN should schedule containers from in-progress applications on 
> DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
> applications. That will make in-progress applications complete faster and let 
> nodes decommission faster. Overall, this should be cheaper.
> I have a working patch without unit tests (will attach) that's surprisingly 
> just a few real lines of code. If folks are happy with the proposal, I'll 
> write unit tests and also write a patch targeted at trunk.






[jira] [Commented] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-04-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426337#comment-16426337
 ] 

Wangda Tan commented on YARN-1151:
--

Thanks [~xgong], 

+1 to the latest patch. [~rkanter] / [~jlowe] / [~vinodkv], wanna take another 
look at the patch? I plan to commit this patch by tomorrow if no objections.

> Ability to configure auxiliary services from HDFS-based JAR files
> -
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Attachments: YARN-1151.1.patch, YARN-1151.2.patch, YARN-1151.3.patch, 
> YARN-1151.4.patch, YARN-1151.5.patch, YARN-1151.6.patch, 
> YARN-1151.branch-2.poc.2.patch, YARN-1151.branch-2.poc.3.patch, 
> YARN-1151.branch-2.poc.patch, [YARN-1151] [Design] Configure auxiliary 
> services from HDFS-based JAR files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.
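
For illustration, a minimal sketch of what such a configuration could look 
like; the service name, the class, and especially the remote-classpath key are 
assumptions for illustration, not the exact keys introduced by the patch.

{code:java}
// Illustrative sketch only: point an auxiliary service at a JAR kept in HDFS
// instead of installing it on every NodeManager. The "remote-classpath" key
// name, the service name, and the class below are assumptions for illustration.
import org.apache.hadoop.conf.Configuration;

public class AuxServiceFromHdfsConfig {
  public static Configuration build() {
    Configuration conf = new Configuration();
    conf.set("yarn.nodemanager.aux-services", "my_shuffle");
    conf.set("yarn.nodemanager.aux-services.my_shuffle.class",
        "com.example.MyShuffleService");             // hypothetical service class
    conf.set("yarn.nodemanager.aux-services.my_shuffle.remote-classpath",
        "hdfs:///apps/aux/my-shuffle.jar");          // assumed key name
    return conf;
  }
}
{code}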






[jira] [Updated] (YARN-8090) Race conditions in FadvisedChunkedFile

2018-04-04 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-8090:
-
Attachment: YARN-8090.000.patch

> Race conditions in FadvisedChunkedFile
> --
>
> Key: YARN-8090
> URL: https://issues.apache.org/jira/browse/YARN-8090
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Attachments: YARN-8090.000.patch
>
>
> {code:java}
> 11:04:33.605 AM   WARNFadvisedChunkedFile 
> Failed to manage OS cache for 
> /var/run/100/yarn/nm/usercache/systest/appcache/application_1521665017379_0062/output/attempt_1521665017379_0062_m_012797_0/file.out
> EBADF: Bad file descriptor
>   at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native 
> Method)
>   at 
> org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
>   at 
> org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
>   at 
> org.apache.hadoop.mapred.FadvisedChunkedFile.close(FadvisedChunkedFile.java:76)
>   at 
> org.jboss.netty.handler.stream.ChunkedWriteHandler.closeInput(ChunkedWriteHandler.java:303)
>   at 
> org.jboss.netty.handler.stream.ChunkedWriteHandler.discard(ChunkedWriteHandler.java:163)
>   at 
> org.jboss.netty.handler.stream.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:192)
>   at 
> org.jboss.netty.handler.stream.ChunkedWriteHandler.handleUpstream(ChunkedWriteHandler.java:137)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.channelClosed(SimpleChannelUpstreamHandler.java:225)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:88)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.handler.codec.replay.ReplayingDecoder.cleanup(ReplayingDecoder.java:570)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.channelClosed(FrameDecoder.java:371)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:88)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.cleanup(FrameDecoder.java:493)
>   at 
> org.jboss.netty.handler.codec.frame.FrameDecoder.channelClosed(FrameDecoder.java:371)
>   at 
> org.jboss.netty.handler.ssl.SslHandler.channelClosed(SslHandler.java:1667)
>   at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:88)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>   at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
>   at org.jboss.netty.channel.Channels.fireChannelClosed(Channels.java:468)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:375)
>   at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:93)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
>   at 
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
>   at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
>   at 
> org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>   at 
> org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){code}
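
For illustration, a minimal sketch of the kind of guard the stack trace points 
at, with a hypothetical fadvise() stand-in for NativeIO's 
posixFadviseIfPossible(); the actual FadvisedChunkedFile fix may synchronize 
differently.

{code:java}
// Illustrative sketch only: close() can race with the transfer path, so the
// fadvise call must not run against a file descriptor that is already closed.
// fadvise() below is a stand-in for the NativeIO posix_fadvise handling.
import java.io.FileDescriptor;
import java.io.RandomAccessFile;

public class FadvisedFileSketch {
  private final RandomAccessFile file;
  private boolean closed; // guarded by "this"

  public FadvisedFileSketch(RandomAccessFile file) {
    this.file = file;
  }

  public synchronized void close() throws Exception {
    if (closed) {
      return;                 // a second close() becomes a no-op
    }
    closed = true;
    FileDescriptor fd = file.getFD();
    if (fd.valid()) {
      fadvise(fd);            // only while the descriptor is still valid
    }
    file.close();
  }

  private void fadvise(FileDescriptor fd) {
    // Placeholder for posix_fadvise(DONTNEED) on the given descriptor.
  }
}
{code}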




[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426335#comment-16426335
 ] 

genericqa commented on YARN-7574:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 63 new + 566 unchanged - 23 fixed = 629 total (was 589) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7574 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917616/YARN-7574.7.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c3c15c6abc6c 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3087e89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20230/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Created] (YARN-8118) Better utilize gracefully decommissioning node managers

2018-04-04 Thread Karthik Palaniappan (JIRA)
Karthik Palaniappan created YARN-8118:
-

 Summary: Better utilize gracefully decommissioning node managers
 Key: YARN-8118
 URL: https://issues.apache.org/jira/browse/YARN-8118
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Affects Versions: 2.8.2
 Environment: * Google Compute Engine (Dataproc)
 * Java 8
 * Hadoop 2.8.2 using client-mode graceful decommissioning
Reporter: Karthik Palaniappan


Proposal design doc with background + details: (will link – please comment on 
that doc)

tl;dr Right now, DECOMMISSIONING nodes must wait for in-progress applications 
to complete before shutting down, but they cannot run new containers from those 
in-progress applications. This is wasteful, particularly in environments where 
you are billed by resource usage (e.g. EC2).

Proposal: YARN should schedule containers from in-progress applications on 
DECOMMISSIONING nodes, but should still avoid scheduling containers from new 
applications. That will make in-progress applications complete faster and let 
nodes decommission faster. Overall, this should be cheaper.

I have a working patch without unit tests (will attach) that's surprisingly 
just a few real lines of code. If folks are happy with the proposal, I'll write 
unit tests and also write a patch targeted at trunk.
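For illustration only, here is a minimal, self-contained sketch of the proposed policy (toy types, not the attached patch or the real scheduler API): a DECOMMISSIONING node keeps accepting containers, but only from applications that are already in progress.

{code:java}
/**
 * Toy sketch of the proposed policy (hypothetical types, not the attached
 * patch): a DECOMMISSIONING node keeps accepting containers, but only for
 * applications that were already running when the node started draining.
 */
public class DecommissioningSchedulingSketch {
  enum NodeState { RUNNING, DECOMMISSIONING, DECOMMISSIONED }

  static boolean mayScheduleOn(NodeState nodeState, boolean appAlreadyRunning) {
    switch (nodeState) {
      case RUNNING:
        return true;                  // normal scheduling
      case DECOMMISSIONING:
        return appAlreadyRunning;     // only in-progress applications
      default:
        return false;                 // drained nodes accept nothing
    }
  }

  public static void main(String[] args) {
    System.out.println(mayScheduleOn(NodeState.DECOMMISSIONING, true));   // true
    System.out.println(mayScheduleOn(NodeState.DECOMMISSIONING, false));  // false
  }
}
{code}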



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426260#comment-16426260
 ] 

Hudson commented on YARN-8073:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13926 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13926/])
YARN-8073 TimelineClientImpl doesn't honor (vrushali: rev 
345e7624d58a058a1bad666bd1e5ce4b346a9056)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestCombinedSystemMetricsPublisher.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java


> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073.01.patch, YARN-8073.02.patch, 
> YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; it still 
> gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written through the v1.5 APIs even though 
> _yarn.timeline-service.versions_ contains 1.5. 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc :/ [~agresch]
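As a rough illustration of what honoring the new setting could look like, here is a hedged sketch that uses only the public Configuration API; the helper name and the fallback default are assumptions, not the committed code.

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Hypothetical helper, not the actual TimelineUtils/TimelineClientImpl code. */
final class TimelineVersionSketch {
  /** True if the wanted version (e.g. "1.5") is enabled by configuration. */
  static boolean versionEnabled(Configuration conf, String wanted) {
    // Prefer the multi-version setting introduced by YARN-6736, e.g. "1.5,2.0".
    String[] versions = conf.getTrimmedStrings("yarn.timeline-service.versions");
    if (versions.length == 0) {
      // Fall back to the old single-version key when the new one is unset.
      versions = new String[] {conf.get("yarn.timeline-service.version", "1.0")};
    }
    for (String v : versions) {
      if (v.equals(wanted)) {
        return true;
      }
    }
    return false;
  }
}
{code}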



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8068) Application Priority field causes NPE in app timeline publish when Hadoop 2.7 based clients to 2.8+

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-8068:

Fix Version/s: (was: 3.0.2)
   3.0.3

> Application Priority field causes NPE in app timeline publish when Hadoop 2.7 
> based clients to 2.8+
> ---
>
> Key: YARN-8068
> URL: https://issues.apache.org/jira/browse/YARN-8068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.8.3
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Fix For: 3.1.0, 3.0.3
>
> Attachments: YARN-8068.001.patch
>
>
> TimelineServiceV1Publisher.appCreated will cause an NPE because we use it like 
> below:
> {code:java}
> entityInfo.put(ApplicationMetricsConstants.APPLICATION_PRIORITY_INFO, 
> app.getApplicationPriority().getPriority());{code}
> We have to handle this case during recovery.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426247#comment-16426247
 ] 

Vrushali C commented on YARN-8073:
--

[~rohithsharma]  the patch does not apply as is to branch-2. Could you put up a 
patch for branch-2?

{code}
[channapattan hadoop (branch-2)]$ git apply  -v ~/Downloads/YARN-8073.03.patch
Checking patch 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java...
Checking patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java...
Hunk #1 succeeded at 3487 (offset -310 lines).
Checking patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java...
Checking patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/timeline/TimelineUtils.java...
Checking patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestCombinedSystemMetricsPublisher.java...
error: while searching for:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.*;

error: patch failed: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestCombinedSystemMetricsPublisher.java:31
error: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/TestCombinedSystemMetricsPublisher.java:
 patch does not apply
Checking patch 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java...
[channapattan hadoop (branch-2)]$
{code} 

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073.01.patch, YARN-8073.02.patch, 
> YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; it still 
> gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written through the v1.5 APIs even though 
> _yarn.timeline-service.versions_ contains 1.5. 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc :/ [~agresch]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426246#comment-16426246
 ] 

Vrushali C commented on YARN-8073:
--

Committed to trunk as part of 
https://github.com/apache/hadoop/commit/345e7624d58a058a1bad666bd1e5ce4b346a9056

Will add to branch-2 as well shortly 

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073.01.patch, YARN-8073.02.patch, 
> YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; it still 
> gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written through the v1.5 APIs even though 
> _yarn.timeline-service.versions_ contains 1.5. 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc :/ [~agresch]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7560) Resourcemanager hangs when resourceUsedWithWeightToResourceRatio return a overflow value

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426235#comment-16426235
 ] 

genericqa commented on YARN-7560:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7560 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899328/YARN-7560.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a78d19caef78 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3087e89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20228/testReport/ |
| Max. process+thread count | 814 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20228/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Commented] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-04 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426220#comment-16426220
 ] 

Suma Shivaprasad commented on YARN-7574:


Uploaded a rebased patch with the following changes:

Added fixes to set the default node label expression in RMAppAttemptImpl for 
dynamic queues that are created after an application is submitted, and changed 
the validation in the auto queue creation tests to submit to the node that has 
the same node label as the default node label expression.

Also added some additional DEBUG-level logs in different classes, which helped 
in debugging.

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-04 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.7.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch
>
>
> YARN-7473 adds support for auto created leaf queues to inherit node label 
> capacities from parent queues. However, there is no support in the leaf queue 
> template for configuring different capacities for different node labels. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8073) TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration

2018-04-04 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426211#comment-16426211
 ] 

Vrushali C commented on YARN-8073:
--

+1 LGTM

Will check this in now

> TimelineClientImpl doesn't honor yarn.timeline-service.versions configuration
> -
>
> Key: YARN-8073
> URL: https://issues.apache.org/jira/browse/YARN-8073
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8073.01.patch, YARN-8073.02.patch, 
> YARN-8073.03.patch
>
>
> Post YARN-6736, the RM supports writing into ATS v1 and v2 via the new 
> configuration setting _yarn.timeline-service.versions_. 
>  A couple of issues observed in deployment are
>  # TimelineClientImpl doesn't honor the newly added configuration; it still 
> gets the version number from _yarn.timeline-service.version_. This causes 
> entities not to be written through the v1.5 APIs even though 
> _yarn.timeline-service.versions_ contains 1.5. 
>  # Along the same lines as the 1st point, TimelineUtils#timelineServiceV1_5Enabled 
> doesn't honor timeline-service.versions.
>  # JobHistoryEventHandler#serviceInit(), line no 271, checks the version number 
> rather than calling YarnConfiguration#timelineServiceV2Enabled
> cc :/ [~agresch]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8111) Simplify PlacementConstraints API by removing allocationTagToIntraApp

2018-04-04 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426200#comment-16426200
 ] 

Konstantinos Karanasos commented on YARN-8111:
--

{quote}[~kkaranasos], the conflict should be caused by YARN-7142. Let me check 
if it is possible to bring that to branch-3.1
{quote}
Thanks [~leftnoteasy] – let me know how it goes.

> Simplify PlacementConstraints API by removing allocationTagToIntraApp
> -
>
> Key: YARN-8111
> URL: https://issues.apache.org/jira/browse/YARN-8111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: YARN-8111.001.patch, YARN-8111.002.patch
>
>
> # Per the discussion in YARN-8013, we agreed to disallow a null value for the 
> namespace and default it to SELF.
>  # Remove PlacementConstraints#allocationTagToIntraApp



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426176#comment-16426176
 ] 

genericqa commented on YARN-1151:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 345 unchanged - 0 fixed = 346 total (was 345) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
25s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-1151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917603/YARN-1151.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 07be1e6f138a 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3087e89 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426170#comment-16426170
 ] 

genericqa commented on YARN-7900:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 24s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 61 unchanged - 0 fixed = 63 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
12s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 27s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
43s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
21s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7900 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-7527) Over-allocate node resource in async-scheduling mode of CapacityScheduler

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426131#comment-16426131
 ] 

genericqa commented on YARN-7527:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 25m  
1s{color} | {color:red} Docker failed to build yetus/hadoop:dbd69cb. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7527 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898790/YARN-7527-branch-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20229/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Over-allocate node resource in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7527
> URL: https://issues.apache.org/jira/browse/YARN-7527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: YARN-7527-branch-2.001.patch, YARN-7527.001.patch
>
>
> Currently, in the async-scheduling mode of CapacityScheduler, node resources may 
> be over-allocated since the node resource check is ignored.
> {{FiCaSchedulerApp#commonCheckContainerAllocation}} checks whether the node has 
> enough available resources for the proposal and returns the check result 
> (true/false), but this result is ignored in {{CapacityScheduler#accept}} as 
> below.
> {noformat}
> commonCheckContainerAllocation(allocation, schedulerContainer);
> {noformat}
> If {{FiCaSchedulerApp#commonCheckContainerAllocation}} returns false, 
> {{CapacityScheduler#accept}} should also return false as below:
> {noformat}
> if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
>   return false;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6936) [Atsv2] Retrospect storing entities into sub application table from client perspective

2018-04-04 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426097#comment-16426097
 ] 

Haibo Chen commented on YARN-6936:
--

The latest patch looks good to me, with two minor comments. 

1) "with in" => "within"

2)
{code:java}
/**
 * 
 * Send the information of a number of conceptual entities outside the scope
 * of YARN application to the timeline service v.2 collector. It is a blocking
 * API. The method will not return until all the put entities have been
 * persisted.
 * 
 *
 * @param entities the collection of {@link TimelineEntity}
 * @throws IOException if there are I/O errors
 * @throws YarnException if entities are incomplete/invalid
 */
@Public
public abstract void putSubAppEntities(TimelineEntity... entities){code}
Let's be more specific, say 'within the scope of a sub-application' instead of 
'outside the scope of YARN application'. The latter could be User or Queue as 
well.

> [Atsv2] Retrospect storing entities into sub application table from client 
> perspective
> --
>
> Key: YARN-6936
> URL: https://issues.apache.org/jira/browse/YARN-6936
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-6936.000.patch, YARN-6936.001.patch, 
> YARN-6936.002.patch
>
>
> Currently YARN-6734 stores entities into the sub application table only if the 
> doAs user and the submitting user are different. This holds good for Tez-like 
> use cases. But frameworks whose AM runs as the submitting user, such as MR, 
> also need to store entities in the sub application table so that they can read 
> entities without an application id. 
> This would be a point of concern at later stages when ATSv2 is deployed into 
> production. This JIRA is to revisit the decision so that storing entities into 
> the sub application table is driven by client-side configuration rather than 
> by the user. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-3401) [Security] users should not be able to create a generic TimelineEntity and associate arbitrary type

2018-04-04 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-3401:


Assignee: Haibo Chen

> [Security] users should not be able to create a generic TimelineEntity and 
> associate arbitrary type
> ---
>
> Key: YARN-3401
> URL: https://issues.apache.org/jira/browse/YARN-3401
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Haibo Chen
>Priority: Major
>  Labels: YARN-5355
>
> IIUC it is possible for users to create a generic TimelineEntity and set an 
> arbitrary entity type. For example, for a YARN app, the right entity API is 
> ApplicationEntity. However, today nothing stops users from instantiating the 
> base TimelineEntity class and setting the application type on it. This presents 
> a problem in handling these YARN system entities in the storage layer, for 
> example.
> We need to ensure that the API allows only the right type of the class to be 
> created for a given entity type.
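To make the intent concrete, a guard along these lines could reject mismatched combinations. This is only a hedged sketch with a made-up helper class, and it assumes the ATSv2 record classes TimelineEntity, TimelineEntityType and ApplicationEntity from the timeline service records package.

{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;

/** Hypothetical validation sketch, not an existing API. */
final class EntityTypeGuard {
  static void checkApplicationEntity(TimelineEntity entity) {
    // A plain TimelineEntity must not claim to be a YARN application entity.
    if (TimelineEntityType.YARN_APPLICATION.name().equals(entity.getType())
        && !(entity instanceof ApplicationEntity)) {
      throw new IllegalArgumentException("Entities of type "
          + entity.getType() + " must be created via ApplicationEntity");
    }
  }
}
{code}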



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7560) Resourcemanager hangs when resourceUsedWithWeightToResourceRatio return a overflow value

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7560:

Fix Version/s: (was: 3.0.1)
   3.0.3

> Resourcemanager hangs when  resourceUsedWithWeightToResourceRatio return a 
> overflow value 
> --
>
> Key: YARN-7560
> URL: https://issues.apache.org/jira/browse/YARN-7560
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 3.0.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Fix For: 3.0.3
>
> Attachments: YARN-7560.000.patch, YARN-7560.001.patch
>
>
> In our cluster, after we changed the configuration and ran refreshQueues, the 
> ResourceManager hung and could not be restarted successfully. The jstack output 
> always looked like this:
> {code}
> "main" #1 prio=5 os_prio=0 tid=0x7f98e8017000 nid=0x2f5 runnable 
> [0x7f98eed9a000]
>java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.resourceUsedWithWeightToResourceRatio(ComputeFairShares.java:182)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSharesInternal(ComputeFairShares.java:140)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.ComputeFairShares.computeSteadyShares(ComputeFairShares.java:66)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy.computeSteadyShares(FairSharePolicy.java:148)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.recomputeSteadyShares(FSParentQueue.java:102)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.getQueue(QueueManager.java:148)
> - locked <0x7f8c4a8177a0> (a java.util.HashMap)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.getLeafQueue(QueueManager.java:101)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.updateAllocationConfiguration(QueueManager.java:387)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$AllocationReloadListener.onReload(FairScheduler.java:1728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadAllocations(AllocationFileLoaderService.java:422)
> - locked <0x7f8c4a7eb2e0> (a 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.initScheduler(FairScheduler.java:1597)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.serviceInit(FairScheduler.java:1621)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c4a76ac48> (a java.lang.Object)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:569)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c49254268> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:997)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:257)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> - locked <0x7f8c467495e0> (a java.lang.Object)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1220)
> {code}
> When we debugged the cluster, we found that resourceUsedWithWeightToResourceRatio 
> returned a negative value, so the loop could not exit. In our cluster the sum of 
> all minRes exceeds Integer.MAX_VALUE, which makes 
> resourceUsedWithWeightToResourceRatio overflow into a negative value.
> Below is the loop. Because totalResource is a long, it is always positive, but 
> resourceUsedWithWeightToResourceRatio returns an int. Our cluster is so big that 
> the returned value overflows into a negative number, so the loop condition stays 
> true and the loop never breaks.
> {code}
> while (resourceUsedWithWeightToResourceRatio(rMax, schedulables, type)
> < totalResource) {
>   rMax *= 2.0;
> }
> {code}
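The overflow described above is easy to reproduce in isolation. Below is a minimal, self-contained sketch with assumed numbers; it is not the actual ComputeFairShares code, it only shows why an int accumulator makes a loop guarded by such a comparison spin forever and why accumulating in a long avoids it.

{code:java}
/** Demonstrates why an int accumulator makes the doubling loop spin forever. */
public class OverflowSketch {
  // int accumulator: wraps to a negative value once the sum passes Integer.MAX_VALUE
  static int sumAsInt(int[] minShares) {
    int sum = 0;
    for (int s : minShares) {
      sum += s;               // silent overflow
    }
    return sum;
  }

  // long accumulator: the sum stays positive
  static long sumAsLong(int[] minShares) {
    long sum = 0;
    for (int s : minShares) {
      sum += s;
    }
    return sum;
  }

  public static void main(String[] args) {
    int[] minShares = {Integer.MAX_VALUE, 1000};   // true sum exceeds Integer.MAX_VALUE
    long totalResource = 5_000_000_000L;

    int wrapped = sumAsInt(minShares);             // wraps to -2147482649
    // Always true: a negative int is less than any positive totalResource,
    // so a loop guarded by "sum < totalResource" never breaks.
    System.out.println(wrapped + " < " + totalResource + " = " + (wrapped < totalResource));

    long correct = sumAsLong(minShares);           // 2147484647, stays positive
    // With a long the value keeps growing as the real code doubles rMax,
    // so the comparison can eventually turn false and the loop terminates.
    System.out.println(correct + " < " + totalResource + " = " + (correct < totalResource));
  }
}
{code}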



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (YARN-7527) Over-allocate node resource in async-scheduling mode of CapacityScheduler

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7527:

Fix Version/s: (was: 3.0.1)
   3.0.3

> Over-allocate node resource in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7527
> URL: https://issues.apache.org/jira/browse/YARN-7527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Fix For: 3.1.0, 3.0.3
>
> Attachments: YARN-7527-branch-2.001.patch, YARN-7527.001.patch
>
>
> Currently, in the async-scheduling mode of CapacityScheduler, node resources may 
> be over-allocated since the node resource check is ignored.
> {{FiCaSchedulerApp#commonCheckContainerAllocation}} checks whether the node has 
> enough available resources for the proposal and returns the check result 
> (true/false), but this result is ignored in {{CapacityScheduler#accept}} as 
> below.
> {noformat}
> commonCheckContainerAllocation(allocation, schedulerContainer);
> {noformat}
> If {{FiCaSchedulerApp#commonCheckContainerAllocation}} returns false, 
> {{CapacityScheduler#accept}} should also return false as below:
> {noformat}
> if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
>   return false;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7901) Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy

2018-04-04 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated YARN-7901:

Fix Version/s: (was: 3.0.1)
   3.0.3

> Adding profile capability in resourceReq in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7901
> URL: https://issues.apache.org/jira/browse/YARN-7901
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: lovekesh bansal
>Assignee: lovekesh bansal
>Priority: Minor
> Fix For: 3.0.3
>
> Attachments: YARN-7901_trunk.001.patch
>
>
> In the splitIndividualAny method, while creating the resourceRequest, we are 
> not setting the profile capability: 
> ResourceRequest.newInstance(originalResourceRequest.getPriority(),
>  originalResourceRequest.getResourceName(),
>  originalResourceRequest.getCapability(),
>  originalResourceRequest.getNumContainers(),
>  originalResourceRequest.getRelaxLocality(),
>  originalResourceRequest.getNodeLabelExpression(),
>  originalResourceRequest.getExecutionTypeRequest());



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-04-04 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426068#comment-16426068
 ] 

Xuan Gong commented on YARN-1151:
-

[~leftnoteasy]

bq. I'm not sure if I understand correct: assume we have multiple services, 
does this logic remove all other service localized jars?

Oh, my fault. Fixed the issue.

 

bq. And a minor comment is, it's better to have a separate to handle jar 
download / remove outdated local folder logic.

It might be easier if we handle them together. When we check whether we need to 
re-download the jar, we do the following (sketched below):
 * if the last modified time has not changed, we break out of the for loop
 * if it has changed, we first delete the previous jar file, then re-download 
the latest jar
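A minimal sketch of that check, using only the generic FileSystem API and assuming the jar lives on an HDFS path; the class name, field and method names here are made up for illustration and are not the attached patch.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical sketch of the refresh logic described above, not the patch. */
final class AuxServiceJarRefresher {
  private long lastSeenModTime = -1L;

  /** Re-downloads the jar only when the remote copy has changed. */
  Path refreshIfNeeded(Configuration conf, Path remoteJar, Path localJar)
      throws IOException {
    FileSystem remoteFs = remoteJar.getFileSystem(conf);
    FileStatus status = remoteFs.getFileStatus(remoteJar);
    long modTime = status.getModificationTime();

    if (modTime == lastSeenModTime) {
      return localJar;                       // unchanged: keep the previous copy
    }
    FileSystem localFs = FileSystem.getLocal(conf);
    localFs.delete(localJar, false);         // drop the outdated copy first
    remoteFs.copyToLocalFile(remoteJar, localJar);
    lastSeenModTime = modTime;
    return localJar;
  }
}
{code}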

 

> Ability to configure auxiliary services from HDFS-based JAR files
> -
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Attachments: YARN-1151.1.patch, YARN-1151.2.patch, YARN-1151.3.patch, 
> YARN-1151.4.patch, YARN-1151.5.patch, YARN-1151.6.patch, 
> YARN-1151.branch-2.poc.2.patch, YARN-1151.branch-2.poc.3.patch, 
> YARN-1151.branch-2.poc.patch, [YARN-1151] [Design] Configure auxiliary 
> services from HDFS-based JAR files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-04 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426061#comment-16426061
 ] 

Jason Lowe commented on YARN-7221:
--

Thanks for updating the patch!

I don't think WNOHANG is appropriate here.  It will cause the parent process to 
spin continuously as long as the child is running.  If we want to keep waiting 
for the child even when EINTR interrupts the wait then I think the loop should 
check for waitpid() == 0 && errno == EINTR.

I'm not sure why WUNTRACED would be specified since the parent should only care 
about being notified when the child exits.

Curious, why is the code calling waitpid with -1 instead of the child pid 
returned from the fork call?


> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch
>
>
> When a Docker container runs with privileges, the majority use case is to have 
> some program start as root and then drop privileges to another user, e.g. 
> httpd starts privileged to bind to port 80, then drops privileges to the 
> www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  With 
> this parameter combination, the user will not be able to become root.  
> All docker exec commands are dropped to the uid:gid user instead of being 
> granted privileges.  A user can gain root privileges if the container file system 
> contains files that give the user extra power, but this type of image is 
> considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control of 
> which images should be run with --privileged, and who has sudo rights to use 
> privileged container images.  As a result, we should check for sudo access and 
> then decide whether to parameterize --privileged=true OR --user=uid:gid.  This 
> will avoid leading developers down the wrong path.
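A hedged sketch of the decision being described, with made-up method and parameter names (the real checks live in the container runtime and container-executor, not in this form): if the submitting user is allowed to run privileged containers, pass --privileged; otherwise pin the container to the submitting user.

{code:java}
import java.util.List;

/** Hypothetical sketch of the launch-argument decision, not the actual patch. */
final class DockerRunArgsSketch {
  static void addPrivilegeArgs(List<String> args, boolean privilegedRequested,
      boolean userHasSudoAccess, String uid, String gid) {
    if (privilegedRequested) {
      if (!userHasSudoAccess) {
        throw new IllegalArgumentException(
            "User is not allowed to launch privileged containers");
      }
      // Trusted user: run privileged and do NOT also pass --user, otherwise
      // the container would be dropped back to uid:gid and lose the privileges.
      args.add("--privileged=true");
    } else {
      // Default path: pin the container to the submitting user.
      args.add("--user=" + uid + ":" + gid);
    }
  }
}
{code}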



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1151) Ability to configure auxiliary services from HDFS-based JAR files

2018-04-04 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-1151:

Attachment: YARN-1151.6.patch

> Ability to configure auxiliary services from HDFS-based JAR files
> -
>
> Key: YARN-1151
> URL: https://issues.apache.org/jira/browse/YARN-1151
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.1.0-beta, 2.9.0
>Reporter: john lilley
>Assignee: Xuan Gong
>Priority: Major
>  Labels: auxiliary-service, yarn
> Attachments: YARN-1151.1.patch, YARN-1151.2.patch, YARN-1151.3.patch, 
> YARN-1151.4.patch, YARN-1151.5.patch, YARN-1151.6.patch, 
> YARN-1151.branch-2.poc.2.patch, YARN-1151.branch-2.poc.3.patch, 
> YARN-1151.branch-2.poc.patch, [YARN-1151] [Design] Configure auxiliary 
> services from HDFS-based JAR files.pdf
>
>
> I would like to install an auxiliary service in Hadoop YARN without actually 
> installing files/services on every node in the system.  Discussions on the 
> user@ list indicate that this is not easily done.  The reason we want an 
> auxiliary service is that our application has some persistent-data components 
> that are not appropriate for HDFS.  In fact, they are somewhat analogous to 
> the mapper output of MapReduce's shuffle, which is what led me to 
> auxiliary-services in the first place.  It would be much easier if we could 
> just place our service's JARs in HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Refactor timelineservice-hbase module into submodules

2018-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426046#comment-16426046
 ] 

Hudson commented on YARN-7919:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13925 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13925/])
YARN-7946. Update TimelineServerV2 doc as per YARN-7919. (Haibo Chen) 
(haibochen: rev 3087e89135365cad7f28f1bf8c9a1c483e245988)
* (edit) BUILDING.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md


> Refactor timelineservice-hbase module into submodules
> -
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0
>
> Attachments: YARN-7919-branch-2.05.patch, YARN-7919.00.patch, 
> YARN-7919.01.patch, YARN-7919.02.patch, YARN-7919.03.patch, 
> YARN-7919.04.patch, YARN-7919.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426045#comment-16426045
 ] 

Hudson commented on YARN-7946:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13925 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13925/])
YARN-7946. Update TimelineServerV2 doc as per YARN-7919. (Haibo Chen) 
(haibochen: rev 3087e89135365cad7f28f1bf8c9a1c483e245988)
* (edit) BUILDING.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md


> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7946.00.patch, YARN-7946.01.patch, 
> YARN-7946.02.patch
>
>
> Post YARN-7919, the documentation needs to be updated for the coprocessor jar 
> name and other related details, if any.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426040#comment-16426040
 ] 

genericqa commented on YARN-7221:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 49s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7221 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917586/YARN-7221.017.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 2a8991ea52da 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7853ec8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20226/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20226/testReport/ |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426034#comment-16426034
 ] 

genericqa commented on YARN-7667:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7667 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917580/YARN-7667.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux a5a15ee3bcc9 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 

[jira] [Updated] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2018-04-04 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7213:
-
Release Note: Support HBase 2.0.0-beta1 as an alternative backend for YARN 
ATSv2 in addition to HBase 1.2.6. 

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations, and they are also 
> getting ready for the Hadoop-beta release so that HBase can release versions 
> compatible with Hadoop-beta. So, this JIRA is to keep track of HBase-2.0 
> integration issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2018-04-04 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-7213.
--
   Resolution: Done
Fix Version/s: 3.2.0

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7213.prelim.patch, YARN-7213.prelim.patch, 
> YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations, and they are also 
> getting ready for the Hadoop-beta release so that HBase can release versions 
> compatible with Hadoop-beta. So, this JIRA is to keep track of HBase-2.0 
> integration issues. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7946) Update TimelineServerV2 doc as per YARN-7919

2018-04-04 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426016#comment-16426016
 ] 

Haibo Chen commented on YARN-7946:
--

Thanks [~rohithsharma] for the review! I have committed the patch to trunk

> Update TimelineServerV2 doc as per YARN-7919
> 
>
> Key: YARN-7946
> URL: https://issues.apache.org/jira/browse/YARN-7946
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7946.00.patch, YARN-7946.01.patch, 
> YARN-7946.02.patch
>
>
> Post YARN-7919, the documentation needs to be updated for the coprocessor jar 
> name and other related details, if any.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-04-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426010#comment-16426010
 ] 

Weiwei Yang commented on YARN-7494:
---

Hi [~sunilg], have you got a chance to review the UT failures? I am hesitant 
to review the patch out of concern that there might be missing pieces, so I 
will hold off on reviewing it until I get some updates from you. Thanks

> Add muti node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.002.patch, 
> YARN-7494.003.patch, YARN-7494.004.patch, YARN-7494.005.patch, 
> YARN-7494.006.patch, YARN-7494.v0.patch, YARN-7494.v1.patch, 
> multi-node-designProposal.png
>
>
> Instead of a single node, for effectiveness we can consider a multi-node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8111) Simplify PlacementConstraints API by removing allocationTagToIntraApp

2018-04-04 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16426006#comment-16426006
 ] 

Weiwei Yang commented on YARN-8111:
---

Hi [~kkaranasos]/[~leftnoteasy]

Thanks, it makes sense to me to get YARN-7142 committed to branch-3.1 as well, 
so that we can plan to support the complete placement constraint feature in 
the 3.1.1 release. 

> Simplify PlacementConstraints API by removing allocationTagToIntraApp
> -
>
> Key: YARN-8111
> URL: https://issues.apache.org/jira/browse/YARN-8111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: YARN-8111.001.patch, YARN-8111.002.patch
>
>
> # Per discussion in YARN-8013, we agreed to disallow a null value for the 
> namespace and default it to SELF.
>  # Remove PlacementConstraints#allocationTagToIntraApp



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7142) Support placement policy in yarn native services

2018-04-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425995#comment-16425995
 ] 

Wangda Tan commented on YARN-7142:
--

[~gsaha], does it make sense to bring this fix to branch-3.1 so we can get it 
into 3.1.1? If you agree, could you help create a branch-3.1 patch? I saw many 
conflicts, maybe caused by the upgrade patch (I would be fine with bringing 
that to branch-3.1 as well if needed).

> Support placement policy in yarn native services
> 
>
> Key: YARN-7142
> URL: https://issues.apache.org/jira/browse/YARN-7142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7142.001.patch, YARN-7142.002.patch, 
> YARN-7142.003.patch, YARN-7142.004.patch
>
>
> Placement policy exists in the API but is not implemented yet.
> I have filed YARN-8074 to move the composite constraints implementation out 
> of this phase-1 implementation of placement policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8111) Simplify PlacementConstraints API by removing allocationTagToIntraApp

2018-04-04 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425993#comment-16425993
 ] 

Wangda Tan commented on YARN-8111:
--

[~kkaranasos], the conflict should be caused by YARN-7142. Let me check if it 
is possible to bring that to branch-3.1

> Simplify PlacementConstraints API by removing allocationTagToIntraApp
> -
>
> Key: YARN-8111
> URL: https://issues.apache.org/jira/browse/YARN-8111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: YARN-8111.001.patch, YARN-8111.002.patch
>
>
> # Per discussion in YARN-8013, we agreed to disallow a null value for the 
> namespace and default it to SELF.
>  # Remove PlacementConstraints#allocationTagToIntraApp



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8111) Simplify PlacementConstraints API by removing allocationTagToIntraApp

2018-04-04 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425960#comment-16425960
 ] 

Konstantinos Karanasos commented on YARN-8111:
--

[~cheersyang], this is also not applying to branch-3.1.

The {{Component}} class is complaining. I guess there is another patch that 
was committed only to trunk. Looping in [~leftnoteasy].

In general, we will keep committing all this to both trunk and branch-3.1, 
right?

> Simplify PlacementConstraints API by removing allocationTagToIntraApp
> -
>
> Key: YARN-8111
> URL: https://issues.apache.org/jira/browse/YARN-8111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: YARN-8111.001.patch, YARN-8111.002.patch
>
>
> # Per discussion in YARN-8013, we agreed to disallow a null value for the 
> namespace and default it to SELF.
>  # Remove PlacementConstraints#allocationTagToIntraApp



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8013) Support application tags when defining application namespaces for placement constraints

2018-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425955#comment-16425955
 ] 

Hudson commented on YARN-8013:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13924 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13924/])
YARN-8013. Support application tags when defining application namespaces 
(kkaranasos: rev 7853ec8d2fb8731b7f7c28fd87491a0a2d47967e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/MockRMApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestAllocationTagsNamespace.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TargetApplicationsNamespace.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagNamespace.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestPlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/PlacementConstraintsUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TargetApplications.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AllocationTagNamespaceType.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/TestAllocationTagsManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/AllocationTags.java


> Support application tags when defining application namespaces for placement 
> constraints
> ---
>
> Key: YARN-8013
> URL: https://issues.apache.org/jira/browse/YARN-8013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8013.001.patch, YARN-8013.002.patch, 
> YARN-8013.003.patch, YARN-8013.004.patch, YARN-8013.005.patch, 
> YARN-8013.006.patch, YARN-8013.007.patch
>
>
> YARN-1461 adds the concept of *Application Tags* to YARN applications. In 
> particular, a user is able to annotate an application with multiple tags to 
> classify apps. We can leverage this to define application namespaces based on 
> application tags for placement constraints.
> Below is a typical use case.
> There are a lot of TF jobs running on YARN, and some of them consume 
> resources heavily. So we want to limit the number of PS containers on each 
> node for such BIG players but ignore the SMALL ones. To achieve this, we can 
> do the following steps (a sketch follows after this description):
>  # Add the application tag "big-tf" to these big TF jobs
>  # For each PS request, add a "ps" source tag and map it to the constraint 
> "{color:#d04437}notin, node, tensorflow/ps{color}" or 
> "{color:#d04437}cardinality, node, tensorflow/ps{color}{color:#d04437}, 0, 
> 2{color}" for finer-grained control.
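
For illustration, a minimal sketch of how the two constraints above could be 
built with the org.apache.hadoop.yarn.api.resource.PlacementConstraints 
helpers, assuming the NODE scope constant, allocationTag targets, and the 
targetNotIn/cardinality factories; the application-tag namespace prefix for 
the target tag is deliberately left out, since its exact syntax is what this 
JIRA defines.

{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class PsPlacementSketch {
  public static void main(String[] args) {
    // "notin, node, tensorflow/ps": keep a new PS container off nodes that
    // already hold a container tagged "ps".
    PlacementConstraint antiAffinity =
        PlacementConstraints.targetNotIn(NODE, allocationTag("ps")).build();

    // "cardinality, node, tensorflow/ps, 0, 2": allow at most two "ps"
    // containers on any single node.
    PlacementConstraint maxTwoPerNode =
        PlacementConstraints.cardinality(NODE, 0, 2, "ps").build();

    System.out.println(antiAffinity);
    System.out.println(maxTwoPerNode);
  }
}
{code}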



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8111) Simplify PlacementConstraints API by removing allocationTagToIntraApp

2018-04-04 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-8111:
-
Summary: Simplify PlacementConstraints API by removing 
allocationTagToIntraApp  (was: Simplify PlacementConstraints class API)

> Simplify PlacementConstraints API by removing allocationTagToIntraApp
> -
>
> Key: YARN-8111
> URL: https://issues.apache.org/jira/browse/YARN-8111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: YARN-8111.001.patch, YARN-8111.002.patch
>
>
> # Per discussion in YARN-8013, we agreed to disallow a null value for the 
> namespace and default it to SELF.
>  # Remove PlacementConstraints#allocationTagToIntraApp



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-04 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425947#comment-16425947
 ] 

Eric Yang commented on YARN-7221:
-

[~jlowe] Good catch that wait doesn't catch signals.  Patch 17 contains the 
required changes based on your comments.

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch
>
>
> When a Docker container is running with privileges, the majority of use cases 
> is to have some program start as root and then drop privileges to another 
> user, e.g. httpd starting privileged to bind to port 80 and then dropping 
> privileges to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not have access to become the 
> root user.  All docker exec commands will drop to the uid:gid user instead of 
> granting privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images.  As a result, we should check for sudo 
> access and then decide whether to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.
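
A purely illustrative Java sketch of the decision described above: the helper 
names (hasSudoAccess, DockerRunArgs) are hypothetical stand-ins rather than the 
container-executor's actual code, and the sudo check is a placeholder for 
whatever ACL or sudoers lookup the patch enforces.

{code:java}
public class PrivilegedLaunchSketch {

  /** Tiny builder standing in for the real docker run argument assembly. */
  static final class DockerRunArgs {
    private final StringBuilder args = new StringBuilder("docker run");
    DockerRunArgs privileged() { args.append(" --privileged=true"); return this; }
    DockerRunArgs user(int uid, int gid) {
      args.append(" --user=").append(uid).append(':').append(gid); return this;
    }
    @Override public String toString() { return args.toString(); }
  }

  /** Placeholder for a real check, e.g. consulting sudoers or an admin ACL. */
  static boolean hasSudoAccess(String user) { return "admin".equals(user); }

  static DockerRunArgs buildArgs(String user, int uid, int gid,
      boolean wantsPrivileged) {
    DockerRunArgs cmd = new DockerRunArgs();
    if (wantsPrivileged) {
      if (!hasSudoAccess(user)) {
        throw new SecurityException(
            user + " is not allowed to run privileged containers");
      }
      // Privileged: do NOT force --user, so the image can start as root and
      // drop privileges itself (e.g. httpd binding to port 80).
      return cmd.privileged();
    }
    // Unprivileged: pin the container to the submitting user's uid:gid.
    return cmd.user(uid, gid);
  }

  public static void main(String[] args) {
    System.out.println(buildArgs("admin", 1000, 1000, true));
    System.out.println(buildArgs("alice", 1001, 1001, false));
  }
}
{code}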



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7221) Add security check for privileged docker container

2018-04-04 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7221:

Attachment: YARN-7221.017.patch

> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch, YARN-7221.017.patch
>
>
> When a Docker container is running with privileges, the majority of use cases 
> is to have some program start as root and then drop privileges to another 
> user, e.g. httpd starting privileged to bind to port 80 and then dropping 
> privileges to the www user.  
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.  
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags.  
> With this parameter combination, the user will not have access to become the 
> root user.  All docker exec commands will drop to the uid:gid user instead of 
> granting privileges.  A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image 
> is considered dangerous.  A non-privileged user can launch a container with 
> special bits to acquire the same level of root power.  Hence, we lose control 
> of which images should be run with --privileged, and who has sudo rights to 
> use privileged container images.  As a result, we should check for sudo 
> access and then decide whether to parameterize --privileged=true OR 
> --user=uid:gid.  This will avoid leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425928#comment-16425928
 ] 

genericqa commented on YARN-7973:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7973 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917568/YARN-7973.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux b8a7e05d5939 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b779f4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425914#comment-16425914
 ] 

genericqa commented on YARN-8117:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
44s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 66m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917561/YARN-8117-YARN-3409.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 97ee1404fae2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-3409 / c20126a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20222/testReport/ |
| Max. process+thread count | 909 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20222/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix 

[jira] [Assigned] (YARN-8112) Fix min cardinality check for same source and target tags in intra-app constraints

2018-04-04 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos reassigned YARN-8112:


Assignee: Konstantinos Karanasos

> Fix min cardinality check for same source and target tags in intra-app 
> constraints
> --
>
> Key: YARN-8112
> URL: https://issues.apache.org/jira/browse/YARN-8112
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Konstantinos Karanasos
>Priority: Major
>
> The min cardinality constraint (min cardinality = _k_) ensures that a 
> container is placed at a node that has already k occurrences of the target 
> tag. For example, a constraint _zk=3,CARDINALITY,NODE,hb,2,10_ will place 
> each of the three zk containers on a node with at least 2 hb instances (and 
> no more than 10 for the max cardinality).
> Affinity constraints are a special case of this, where min cardinality is 1.
> Currently we do not support min cardinality when the source and the target of 
> the constraint are the same in an intra-app constraint.
> Therefore, zk=3,CARDINALITY,NODE,zk,2,10 is not supported, and neither is 
> zk=3,IN,NODE,zk.
> This Jira will address this problem by placing the first k containers on the 
> same node (or any other specified scope, e.g., rack), so that min cardinality 
> can be met when placing the subsequent containers with the same tag.
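
For reference, a minimal sketch of the two self-referencing constraints 
discussed above, assuming the org.apache.hadoop.yarn.api.resource 
PlacementConstraints helper API; it only shows how the constraints are 
expressed, not the bootstrap logic this JIRA proposes.

{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class SelfCardinalitySketch {
  public static void main(String[] args) {
    // zk=3,CARDINALITY,NODE,zk,2,10: source and target tag are both "zk", so
    // the very first "zk" container can never find a node that already has
    // two "zk" instances; hence the proposal to co-place the first k
    // containers before enforcing the minimum.
    PlacementConstraint selfCardinality =
        PlacementConstraints.cardinality(NODE, 2, 10, "zk").build();

    // zk=3,IN,NODE,zk: self-affinity, i.e. the min cardinality = 1 case.
    PlacementConstraint selfAffinity =
        PlacementConstraints.targetIn(NODE, allocationTag("zk")).build();

    System.out.println(selfCardinality);
    System.out.println(selfAffinity);
  }
}
{code}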



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-04-04 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7900:
---
Attachment: YARN-7900.v6.patch

> [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor
> 
>
> Key: YARN-7900
> URL: https://issues.apache.org/jira/browse/YARN-7900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-7900.v1.patch, YARN-7900.v2.patch, 
> YARN-7900.v3.patch, YARN-7900.v4.patch, YARN-7900.v5.patch, YARN-7900.v6.patch
>
>
> Inside stateful FederationInterceptor (YARN-7899), we need a component 
> similar to AMRMClient that remembers all pending (outstanding) requests we've 
> sent to YarnRM, auto re-registers, and does a full pending resend when YarnRM 
> fails over and throws ApplicationMasterNotRegisteredException back. This JIRA 
> adds this component as AMRMClientRelayer.
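
A hedged skeleton of the resend-on-failover behavior described above; this is 
not the patch's actual AMRMClientRelayer, just an illustration built on the 
public ApplicationMasterProtocol records, with the bookkeeping (e.g. removing 
satisfied asks) deliberately omitted.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
import org.apache.hadoop.yarn.exceptions.YarnException;

/** Illustrative relayer skeleton, not the JIRA's committed class. */
public class RelayerSketch {
  private final ApplicationMasterProtocol rmClient;
  private final RegisterApplicationMasterRequest registration;
  // Everything asked for but not yet satisfied; resent wholesale on failover.
  private final List<ResourceRequest> pendingAsks = new ArrayList<>();
  private int responseId = 0;

  RelayerSketch(ApplicationMasterProtocol rmClient,
      RegisterApplicationMasterRequest registration) {
    this.rmClient = rmClient;
    this.registration = registration;
  }

  synchronized AllocateResponse allocate(List<ResourceRequest> newAsks)
      throws YarnException, IOException {
    pendingAsks.addAll(newAsks);
    AllocateRequest request = AllocateRequest.newInstance(
        responseId, 0f, new ArrayList<>(pendingAsks),
        Collections.emptyList(), null);
    try {
      AllocateResponse response = rmClient.allocate(request);
      responseId = response.getResponseId();
      return response;
    } catch (ApplicationMasterNotRegisteredException e) {
      // YarnRM failed over: re-register, then resend the full pending set.
      rmClient.registerApplicationMaster(registration);
      return rmClient.allocate(AllocateRequest.newInstance(
          0, 0f, new ArrayList<>(pendingAsks), Collections.emptyList(), null));
    }
  }
}
{code}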



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7667) Docker Stop grace period should be configurable

2018-04-04 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7667:
--
Attachment: YARN-7667.002.patch

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch, YARN-7667.002.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called, so the stop uses Docker's default 10-second grace period
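
A minimal sketch of how the grace period could be wired in, assuming 
{{DockerStopCommand#setGracePeriod}} takes an integer number of seconds; the 
configuration key shown here is a hypothetical name, not necessarily the 
property this JIRA adds.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerStopCommand;

public class DockerStopSketch {
  // Hypothetical property name; the key actually introduced may differ.
  static final String STOP_GRACE_PERIOD =
      "yarn.nodemanager.runtime.linux.docker.stop.grace-period";

  static DockerStopCommand buildStopCommand(Configuration conf,
      String containerName) {
    DockerStopCommand stop = new DockerStopCommand(containerName);
    // Feed the configured value into the (currently unused) setter instead of
    // always falling back to Docker's 10-second default.
    stop.setGracePeriod(conf.getInt(STOP_GRACE_PERIOD, 10));
    return stop;
  }
}
{code}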



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-04 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425890#comment-16425890
 ] 

Eric Badger commented on YARN-7667:
---

Attaching new patch to fix checkstyle

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch, YARN-7667.002.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called, so the stop uses Docker's default 10-second grace period



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using to access NM

2018-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425872#comment-16425872
 ] 

Hudson commented on YARN-8115:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13923 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13923/])
YARN-8115. [UI2] URL data like nodeHTTPAddress must be encoded in UI (sunilg: 
rev 42cd367c9308b944bc71de6c07b6c3f028a0d874)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/node-menu-panel.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/helpers/node-link.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes/table.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/node-menu-panel.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-containers.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-container.js


> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using to 
> access NM
> ---
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded before being 
> processed in the UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425853#comment-16425853
 ] 

genericqa commented on YARN-7667:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 218 unchanged - 0 fixed = 219 total (was 218) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
20s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7667 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917557/YARN-7667.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  

[jira] [Commented] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-04-04 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425842#comment-16425842
 ] 

Shane Kumpf commented on YARN-7973:
---

Thanks for the review, [~eyang]! I've attached a new patch that incorporates 
those suggestions.

Regarding suggestion #2 above for docker_util.cc, we may want to open another 
issue to make the other commands consistent, as this pattern is used by rm, 
stop, and kill as well. Let me know if you think that's needed.

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.
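
One plausible way to avoid the name clash on relaunch, shown as a hypothetical 
sketch: the DockerOps interface stands in for the runtime's real Docker 
plumbing, and this is not necessarily what the committed patch does.

{code:java}
public class RelaunchSketch {

  /** Hypothetical abstraction over the Docker commands the runtime issues. */
  interface DockerOps {
    boolean containerExists(String name);
    void removeContainer(String name);   // e.g. "docker rm <name>"
    void launchContainer(String name);   // e.g. "docker run --name <name> ..."
  }

  /**
   * Relaunch reuses the container ID as the Docker name, so clear any
   * leftover container from the previous attempt before running again.
   */
  static void relaunch(DockerOps docker, String containerName) {
    if (docker.containerExists(containerName)) {
      docker.removeContainer(containerName);
    }
    docker.launchContainer(containerName);
  }
}
{code}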



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using to access NM

2018-04-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8115:
--
Summary: [UI2] URL data like nodeHTTPAddress must be encoded in UI before 
using to access NM  (was: [UI2] URL data like nodeHTTPAddress must be encoded 
in UI before using for NM)

> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using to 
> access NM
> ---
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded in the UI 
> before being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM

2018-04-04 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8115:
--
Summary: [UI2] URL data like nodeHTTPAddress must be encoded in UI before 
using for NM  (was: [UI2] URL data like nodeHTTPAddress must be encoded in UI 
before using for NM/ATS)

> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM
> -
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded in the UI 
> before being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7973) Support ContainerRelaunch for Docker containers

2018-04-04 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7973:
--
Attachment: YARN-7973.004.patch

> Support ContainerRelaunch for Docker containers
> ---
>
> Key: YARN-7973
> URL: https://issues.apache.org/jira/browse/YARN-7973
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7973.001.patch, YARN-7973.002.patch, 
> YARN-7973.003.patch, YARN-7973.004.patch
>
>
> Prior to YARN-5366, {{container-executor}} would remove the Docker container 
> when it exited. The removal is now handled by the 
> {{DockerLinuxContainerRuntime}}. {{ContainerRelaunch}} is intended to reuse 
> the workdir from the previous attempt, and does not call {{cleanupContainer}} 
> prior to {{launchContainer}}. The container ID is reused as well. As a 
> result, the previous Docker container still exists, resulting in an error 
> from Docker indicating that a container by that name already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-04-04 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425736#comment-16425736
 ] 

Eric Payne commented on YARN-4606:
--

bq. could you briefly summarize what is the current issue and solution being 
discussed?
[~leftnoteasy], the latest patch ({{YARN-4606.POC.patch}}) changed the behavior 
of the capacity scheduler so that it would never give a container to the second 
app for its AM as long as the first app consumed the entire queue and had 
pending requests, even when the AM used is lower than AM max. I described it in 
more detail 
[above|https://issues.apache.org/jira/browse/YARN-4606?focusedCommentId=16391802=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16391802].

[I suggested that one 
solution|https://issues.apache.org/jira/browse/YARN-4606?focusedCommentId=16396094=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16396094]
 would be to modify the code as follows as long as there is a way to do it in 
an abstract way:
{code:title=AppSchedulingInfo#updatePendingResources}
if (Not Waiting For AM Container
    || (Queue Used AM Resources < Queue Max AM Resources)) {
  abstractUsersManager.activateApplication(user, applicationId);
}
{code}

I suggested a way to do that, but it seems a little cumbersome.

So then I started wondering if there was a way to leverage the {{Schedulable 
Apps}} and {{Non-Schedulable Apps}} user info in the 
{{AppSchedulingInfo#updatePendingResources}} code. I looked more closely, 
however, and it is too early within 
{{AppSchedulingInfo#updatePendingResources}} to tell whether or not a new app 
is destined to be schedulable.

So, I think the best suggestion I have is the pseudo-code I posted above.
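
(For readers skimming the thread, a slightly more concrete Java rendering of that pseudo-code. Everything here is illustrative; the fields and methods are placeholders, not the actual CapacityScheduler/AppSchedulingInfo code.)

{code:java}
public class ActivationGuardSketch {

  interface UsersManager {
    void activateApplication(String user, String applicationId);
  }

  boolean waitingForAMContainer;  // placeholder: is this app still waiting for its AM?
  long queueUsedAMResourceMB;     // placeholder: AM resources currently used by the queue
  long queueMaxAMResourceMB;      // placeholder: the queue's max-am-resource limit

  void maybeActivate(UsersManager usersManager, String user, String applicationId) {
    // Count the user as active only if the app is past its AM container,
    // or the queue still has head-room under its AM resource limit.
    if (!waitingForAMContainer || queueUsedAMResourceMB < queueMaxAMResourceMB) {
      usersManager.activateApplication(user, applicationId);
    }
  }
}
{code}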

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-4606.1.poc.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> that user an active user. This could lead to starvation of active applications, 
> for example:
> - App1 (belongs to user1)/app2 (belongs to user2) are active, app3 (belongs to 
> user3)/app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, so 
> the computed user-limit-resource could be lower than expected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM/ATS

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425732#comment-16425732
 ] 

genericqa commented on YARN-8115:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12917551/YARN-8115.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux abea1c79cfc8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b779f4f |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20220/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for 
> NM/ATS
> -
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded in the UI 
> before being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8117:
---
Attachment: YARN-8117-YARN-3409.001.patch

> Fix TestRMWebServicesNodes test failure
> ---
>
> Key: YARN-8117
> URL: https://issues.apache.org/jira/browse/YARN-8117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8117-YARN-3409.001.patch
>
>
> Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8117) Fix TestRMWebServicesNodes test failure

2018-04-04 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-8117:
--

 Summary: Fix TestRMWebServicesNodes test failure
 Key: YARN-8117
 URL: https://issues.apache.org/jira/browse/YARN-8117
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt


Need to fix a conflict in TestRMWebServices due to YARN-7792 and YARN-8092.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8116) Nodemanager fails with NumberFormatException: For input string: ""

2018-04-04 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-8116:


 Summary: Nodemanager fails with NumberFormatException: For input 
string: ""
 Key: YARN-8116
 URL: https://issues.apache.org/jira/browse/YARN-8116
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yesha Vora


Steps followed.
1) Update nodemanager debug delay config
{code:xml}
<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>350</value>
</property>
{code}
2) Launch distributed shell application multiple times
{code}
/usr/hdp/current/hadoop-yarn-client/bin/yarn  jar 
hadoop-yarn-applications-distributedshell-*.jar  -shell_command "sleep 120" 
-num_containers 1 -shell_env YARN_CONTAINER_RUNTIME_TYPE=docker -shell_env 
YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=centos/httpd-24-centos7:latest -shell_env 
YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL=true -jar 
hadoop-yarn-applications-distributedshell-*.jar{code}
3) restart NM

Nodemanager fails to start with the error below.
{code:title=NM log}
2018-03-23 21:32:14,437 INFO  monitor.ContainersMonitorImpl 
(ContainersMonitorImpl.java:serviceInit(181)) - ContainersMonitor enabled: true
2018-03-23 21:32:14,439 INFO  logaggregation.LogAggregationService 
(LogAggregationService.java:serviceInit(130)) - rollingMonitorInterval is set 
as 3600. The logs will be aggregated every 3600 seconds
2018-03-23 21:32:14,455 INFO  service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl 
failed in state INITED
java.lang.NumberFormatException: For input string: ""
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:601)
at java.lang.Long.parseLong(Long.java:631)
at 
org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState(NMLeveldbStateStoreService.java:350)
at 
org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainersState(NMLeveldbStateStoreService.java:253)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:365)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:316)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:464)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
2018-03-23 21:32:14,458 INFO  logaggregation.LogAggregationService 
(LogAggregationService.java:serviceStop(148)) - 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService
 waiting for pending aggregation during exit
2018-03-23 21:32:14,460 INFO  service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service NodeManager failed in state 
INITED
java.lang.NumberFormatException: For input string: ""
at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:601)
at java.lang.Long.parseLong(Long.java:631)
at 
org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainerState(NMLeveldbStateStoreService.java:350)
at 
org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.loadContainersState(NMLeveldbStateStoreService.java:253)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:365)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:316)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:464)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:899)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:960)
2018-03-23 21:32:14,463 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:stop(210)) - Stopping NodeManager metrics system...
2018-03-23 21:32:14,464 INFO  impl.MetricsSinkAdapter 
(MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread 

[jira] [Commented] (YARN-7875) AttributeStore for store and recover attributes

2018-04-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425706#comment-16425706
 ] 

Sunil G commented on YARN-7875:
---

Yes. Pls hold on for a while

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7875) AttributeStore for store and recover attributes

2018-04-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425689#comment-16425689
 ] 

Bibin A Chundatt edited comment on YARN-7875 at 4/4/18 3:17 PM:


Will handle as part of a separate JIRA. As per the last comment from [~sunilg] 
before the rebase, this Jira was good to go in.

[~sunilg] Any more comments?


was (Author: bibinchundatt):
Will handle as part of a separate JIRA. As per the last comment from [~sunilg] 
before the rebase, this Jira was good to go in.
If there are no comments, I will go ahead and commit this.

[~sunilg] Any more comments?

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7875) AttributeStore for store and recover attributes

2018-04-04 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425689#comment-16425689
 ] 

Bibin A Chundatt commented on YARN-7875:


Will handle as part of a separate JIRA. As per the last comment from [~sunilg] 
before the rebase, this Jira was good to go in.
If there are no comments, I will go ahead and commit this.

[~sunilg] Any more comments?

> AttributeStore for store and recover attributes
> ---
>
> Key: YARN-7875
> URL: https://issues.apache.org/jira/browse/YARN-7875
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-7875-WIP.patch, YARN-7875-YARN-3409.001.patch, 
> YARN-7875-YARN-3409.002.patch, YARN-7875-YARN-3409.003.patch, 
> YARN-7875-YARN-3409.004.patch, YARN-7875-YARN-3409.005.patch, 
> YARN-7875-YARN-3409.006.patch, YARN-7875-YARN-3409.007.patch
>
>
> Similar to NodeLabelStore, we need to support a NodeAttributeStore for 
> persisting attribute mappings to nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7996) Allow user supplied Docker client configurations with YARN native services

2018-04-04 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425678#comment-16425678
 ] 

genericqa commented on YARN-7996:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
38s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  5m 
34s{color} | {color:red} hadoop-yarn in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
51s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 33s{color} 
| {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.TestApiServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2018-04-04 Thread Kanwaljeet Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425670#comment-16425670
 ] 

Kanwaljeet Sachdev commented on YARN-7088:
--

[~sunilg], [~haibo.chen], I have uploaded the patch that has passed the tests. 
The current patch addresses the following:
 # Add launchTime, schedulingWaitTime and totalRunTime
 # I changed the original patch slightly; the launchTime update now happens via 
the event that is sent to the RMApp state machine.
 # Has the UI change (for old UI only)

I will create separate tickets/patches for the items below, so that launchTime 
is also added to the new UI and ATS:
 # Writing to the ATSV1
 # Writing to ATSV2
 # New UI change
 # ATSV1.5??

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7667) Docker Stop grace period should be configurable

2018-04-04 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425665#comment-16425665
 ] 

Eric Badger commented on YARN-7667:
---

Attaching a patch to make the Docker stop grace period configurable.
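
(A minimal sketch of the wiring such a change implies: read the grace period from configuration instead of hard-coding it, then hand the value to {{DockerStopCommand#setGracePeriod}}. The property name and default below are assumptions for illustration, not necessarily what the patch introduces.)

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DockerStopGracePeriodSketch {

  // Hypothetical property name; the actual patch may use a different key.
  static final String GRACE_PERIOD_KEY =
      "yarn.nodemanager.runtime.linux.docker.stop.grace-period";
  static final int DEFAULT_GRACE_PERIOD_SECS = 10;  // Docker's own default

  /** Read the grace period from the NM configuration; the result would be
   *  passed on to the stop command instead of relying on Docker's default. */
  static int gracePeriodSeconds(Configuration conf) {
    return conf.getInt(GRACE_PERIOD_KEY, DEFAULT_GRACE_PERIOD_SECS);
  }
}
{code}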

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called, so the stop uses Docker's default 10-second grace period.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7667) Docker Stop grace period should be configurable

2018-04-04 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-7667:
--
Attachment: YARN-7667.001.patch

> Docker Stop grace period should be configurable
> ---
>
> Key: YARN-7667
> URL: https://issues.apache.org/jira/browse/YARN-7667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-7667.001.patch
>
>
> {{DockerStopCommand}} has a {{setGracePeriod}} method, but it is never 
> called, so the stop uses Docker's default 10-second grace period.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2018-04-04 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425644#comment-16425644
 ] 

Jim Brennan commented on YARN-6830:
---

The solution proposed by [~aw] for the MapReduce variables is being addressed in 
MAPREDUCE-7069.

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-6830.001.patch, YARN-6830.002.patch, 
> YARN-6830.003.patch, YARN-6830.004.patch
>
>
> There are cases where it is necessary to allow for quoted string literals 
> within environment variable values when passed via the yarn command line 
> interface.
> For example, consider the follow environment variables for a MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempts to quote the embedded comma separated value 
> results in quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow for quoting the comma separated value (escaped double 
> or single quote). This was mentioned on YARN-4595 and will impact YARN-5534 
> as well.
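
(To make the failure mode described above concrete, a small self-contained sketch of a splitter that honors single- or double-quoted values. It is illustrative only, not the parser added by the patch.)

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustrative splitter: breaks "K1=V1,K2=V2" on commas, but keeps commas
 *  that appear inside single or double quotes as part of the value. */
public class QuotedEnvSplitter {

  public static List<String> split(String spec) {
    List<String> parts = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    char quote = 0;                      // 0 means "not inside quotes"
    for (char c : spec.toCharArray()) {
      if (quote != 0) {                  // inside a quoted region
        if (c == quote) {
          quote = 0;                     // closing quote: drop it
        } else {
          current.append(c);
        }
      } else if (c == '"' || c == '\'') {
        quote = c;                       // opening quote: drop it
      } else if (c == ',') {
        parts.add(current.toString());
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    parts.add(current.toString());
    return parts;
  }

  public static void main(String[] args) {
    // MOUNTS keeps its embedded comma because the value is quoted.
    System.out.println(split("MODE=bar,IMAGE_NAME=foo,MOUNTS=\"/tmp/foo,/tmp/bar\""));
    // Prints: [MODE=bar, IMAGE_NAME=foo, MOUNTS=/tmp/foo,/tmp/bar]
  }
}
{code}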



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM/ATS

2018-04-04 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425646#comment-16425646
 ] 

Sunil G commented on YARN-8115:
---

Thanks [~Sreenath], the patch looks good. Pending Jenkins.

> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for 
> NM/ATS
> -
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded in the UI 
> before being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM/ATS

2018-04-04 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425637#comment-16425637
 ] 

Sreenath Somarajapuram commented on YARN-8115:
--

[~sunilg] Please review the patch.

> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for 
> NM/ATS
> -
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded in the UI 
> before being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM/ATS

2018-04-04 Thread Sreenath Somarajapuram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreenath Somarajapuram updated YARN-8115:
-
Attachment: YARN-8115.001.patch

> [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for 
> NM/ATS
> -
>
> Key: YARN-8115
> URL: https://issues.apache.org/jira/browse/YARN-8115
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Attachments: YARN-8115.001.patch
>
>
> nodeHTTPAddress is picked from the RM, but it should be encoded in the UI 
> before being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8115) [UI2] URL data like nodeHTTPAddress must be encoded in UI before using for NM/ATS

2018-04-04 Thread Sunil G (JIRA)
Sunil G created YARN-8115:
-

 Summary: [UI2] URL data like nodeHTTPAddress must be encoded in UI 
before using for NM/ATS
 Key: YARN-8115
 URL: https://issues.apache.org/jira/browse/YARN-8115
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Sunil G
Assignee: Sreenath Somarajapuram


nodeHTTPAddress is picked from the RM, but it should be encoded in the UI before 
being used.
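
(The UI2 code itself is JavaScript, where this is {{encodeURIComponent}} territory; the snippet below is purely a language-neutral illustration of the encoding step being asked for, not the actual patch.)

{code:java}
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class NodeAddressEncodingSketch {
  public static void main(String[] args) throws Exception {
    String nodeHTTPAddress = "host.example.com:8042";  // example value as returned by the RM
    // Percent-encode before embedding the value in a UI route/URL (":" becomes "%3A").
    String encoded = URLEncoder.encode(nodeHTTPAddress, StandardCharsets.UTF_8.name());
    System.out.println(encoded);  // prints host.example.com%3A8042
  }
}
{code}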



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8107) NPE when incorrect format is used in ATSv2 filter attributes

2018-04-04 Thread Charan Hebri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425583#comment-16425583
 ] 

Charan Hebri commented on YARN-8107:


[~rohithsharma] the query request is in the log attached above,
{noformat}
/ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq{noformat}
Here the attribute value 'UIDeq' doesn't follow the <attribute> <operator> <value> 
convention used for filter attributes. The NPE isn't specific to the 
apps-for-a-flow-run API, though; it can be seen in other APIs where filters are 
supported.
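
(A sketch of the "reject bad filter syntax with a 400" idea; the exception and parser types below are stand-ins, not the actual TimelineReader classes.)

{code:java}
/** Sketch only: shows the "reject bad filter syntax with a 400" idea,
 *  not the actual TimelineReader code. */
public class FilterParsingSketch {

  /** Hypothetical stand-in for the reader's bad-request exception type. */
  public static class BadRequestException extends RuntimeException {
    public BadRequestException(String msg, Throwable cause) { super(msg, cause); }
  }

  interface FilterList {}

  /** Hypothetical parser that throws on malformed input such as "UIDeq". */
  interface FilterParser { FilterList parse(String expression); }

  static FilterList parseInfoFilters(FilterParser parser, String expression) {
    try {
      return parser.parse(expression);
    } catch (RuntimeException e) {
      // Surface a clear 400 instead of letting an NPE bubble up as a 500.
      throw new BadRequestException(
          "Invalid infofilters expression: '" + expression
              + "' (expected <attribute> <operator> <value>)", e);
    }
  }
}
{code}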

 

> NPE when incorrect format is used in ATSv2 filter attributes
> 
>
> Key: YARN-8107
> URL: https://issues.apache.org/jira/browse/YARN-8107
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Charan Hebri
>Assignee: Rohith Sharma K S
>Priority: Major
>
> Using an incorrect format for infofilters, conffilters and metricfilters 
> throws an NPE with no clear message to the caller. This should be tagged as a 
> 400 Bad Request with an informative message. Below is the timeline reader log.
> {noformat}
> 2018-04-02 06:44:10,451 INFO  reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(173)) - Processed URL 
> /ws/v2/timeline/users/hrt_qa/flows/flow4/runs/1/apps?infofilters=UIDeq but 
> encountered exception (Took 0 ms.)
> 2018-04-02 06:44:10,451 ERROR reader.TimelineReaderWebServices 
> (TimelineReaderWebServices.java:handleException(188)) - Error while 
> processing REST request
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterUtils.createHBaseFilterList(TimelineFilterUtils.java:276)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.ApplicationEntityReader.constructFilterListBasedOnFilters(ApplicationEntityReader.java:126)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.createFilterList(TimelineEntityReader.java:157)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:277)
> at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl.getEntities(HBaseTimelineReaderImpl.java:87)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderManager.getEntities(TimelineReaderManager.java:143)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getEntities(TimelineReaderWebServices.java:605)
> at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderWebServices.getFlowRunApps(TimelineReaderWebServices.java:1962)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
> at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
> at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
> at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
> at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
> at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
> at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
> at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> at 
> 

[jira] [Commented] (YARN-7221) Add security check for privileged docker container

2018-04-04 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425548#comment-16425548
 ] 

Jason Lowe commented on YARN-7221:
--

Thanks for updating the patch!  The unit test failure is unrelated and tracked 
by YARN-7700.

There's one more issue I just noticed.  When we fork-n-exec /bin/sudo, the 
parent process is not checking the result of the wait() call.  Unfortunately if 
wait fails (e.g.: EINTR) and statval does not end up being set then the parent 
will think that the command succeeded because WIFEXITED(0) == 1 and 
WEXITSTATUS(0) == 0.  The parent really should be calling waitpid() with the 
pid returned by the fork and the result code from that waitpid() call needs to 
be checked before examining the statval value.  My apologies for missing this 
in the earlier reviews.


> Add security check for privileged docker container
> --
>
> Key: YARN-7221
> URL: https://issues.apache.org/jira/browse/YARN-7221
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-7221.001.patch, YARN-7221.002.patch, 
> YARN-7221.003.patch, YARN-7221.004.patch, YARN-7221.005.patch, 
> YARN-7221.006.patch, YARN-7221.007.patch, YARN-7221.008.patch, 
> YARN-7221.009.patch, YARN-7221.010.patch, YARN-7221.011.patch, 
> YARN-7221.012.patch, YARN-7221.013.patch, YARN-7221.014.patch, 
> YARN-7221.015.patch, YARN-7221.016.patch
>
>
> When a Docker container is running with privileges, the majority use case is to 
> have some program start as root and then drop privileges to another user, e.g. 
> httpd starts privileged to bind to port 80, then drops privileges to the www 
> user.
> # We should add a security check for submitting users, to verify they have 
> "sudo" access to run privileged containers.
> # We should remove --user=uid:gid for privileged containers.  
>  
> Docker can be launched with the --privileged=true and --user=uid:gid flags. With 
> this parameter combination, the user will not have access to become the root 
> user. All docker exec commands will be dropped to the uid:gid user instead of 
> being granted privileges. A user can gain root privileges if the container file 
> system contains files that give the user extra power, but this type of image is 
> considered dangerous. A non-privileged user can launch a container with special 
> bits to acquire the same level of root power. Hence, we lose control of which 
> images should be run with --privileged, and who has sudo rights to use 
> privileged container images. As a result, we should check for sudo access and 
> then decide to parameterize --privileged=true OR --user=uid:gid. This will avoid 
> leading developers down the wrong path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7913) Improve error handling when application recovery fails with exception

2018-04-04 Thread Gergo Repas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425543#comment-16425543
 ] 

Gergo Repas commented on YARN-7913:
---

[~oshevchenko] I see, that's a good point that ACL changes may cause similar 
issues.

In this ticket my proposal would be to improve error handling in general. I'd 
propose trying to recover all applications in a passive-to-active RM transition 
attempt even in the presence of exceptions (logging all exceptions, rethrowing 
the last one), and the async event processing would take care of putting the 
application attempt into the rejected state. This way we would have only a 
couple of passive-to-active RM transition attempts (depending on how long the 
async event processing takes), instead of N+1 passive-to-active attempts, where 
N is the number of rejected applications. This is to improve the handling of any 
unforeseen exceptions.

What do you think?
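
(A minimal sketch of the "recover everything, log every failure, rethrow the last one" pattern described above; the types and method names are placeholders, not the actual RM recovery code.)

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative only: stands in for the recovery loop, not the real RM code. */
public class RecoverAllSketch {

  private static final Logger LOG = LoggerFactory.getLogger(RecoverAllSketch.class);

  interface App { String getId(); }

  /** Hypothetical per-application recovery step that may throw. */
  interface RecoveryStep { void recover(App app) throws Exception; }

  static void recoverAll(List<App> apps, RecoveryStep step) throws Exception {
    Exception lastFailure = null;
    for (App app : apps) {
      try {
        step.recover(app);
      } catch (Exception e) {
        // Keep going so one bad app does not abort the whole transition,
        // but remember the failure so the caller still sees it.
        LOG.error("Failed to recover application " + app.getId(), e);
        lastFailure = e;
      }
    }
    if (lastFailure != null) {
      throw lastFailure;
    }
  }
}
{code}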

> Improve error handling when application recovery fails with exception
> -
>
> Key: YARN-7913
> URL: https://issues.apache.org/jira/browse/YARN-7913
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.0.0
>Reporter: Gergo Repas
>Assignee: Gergo Repas
>Priority: Major
> Attachments: YARN-7913.000.poc.patch
>
>
> There are edge cases when the application recovery fails with an exception.
> Example failure scenario:
>  * setup: a queue is a leaf queue in the primary RM's config and the same 
> queue is a parent queue in the secondary RM's config.
>  * When failover happens with this setup, the recovery will fail for 
> applications on this queue, and an APP_REJECTED event will be dispatched to 
> the async dispatcher. On the same thread (that handles the recovery), a 
> NullPointerException is thrown when the applicationAttempt is tried to be 
> recovered 
> (https://github.com/apache/hadoop/blob/55066cc53dc22b68f9ca55a0029741d6c846be0a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java#L494).
>  I don't see a good way to avoid the NPE in this scenario, because when the 
> NPE occurs the APP_REJECTED has not been processed yet, and we don't know 
> that the application recovery failed.
> Currently the first exception will abort the recovery, and if there are X 
> applications, there will be ~X passive -> active RM transition attempts - the 
> passive -> active RM transition will only succeed when the last APP_REJECTED 
> event is processed on the async dispatcher thread.
> _The point of this ticket is to improve the error handling and reduce the 
> number of passive -> active RM transition attempts (solving the above 
> described failure scenario isn't in scope)._



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer

2018-04-04 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425525#comment-16425525
 ] 

BELUGA BEHR commented on YARN-7962:
---

[~wilfreds] {{try...finally}} is a best practice.

bq. It is recommended practice to always immediately follow a call to lock with 
a try block, most typically in a before/after construction

https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantLock.html

Please consider patch version 3 for inclusion into the project.
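
(For illustration, the shape being argued for: flip the flag under the write lock inside a {{try...finally}} before shutting the pool down. Sketch only; the field names mirror the snippets in the description below, the rest is illustrative and not the attached patch.)

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch only; field names mirror the snippets below, the rest is illustrative. */
public class ServiceStopSketch {
  private final ReentrantReadWriteLock serviceStateLock = new ReentrantReadWriteLock();
  private volatile boolean isServiceStarted = true;

  void serviceStop() {
    serviceStateLock.writeLock().lock();
    try {
      // Flip the flag first so processDelegationTokenRenewerEvent() queues new
      // events instead of submitting them to a pool that is about to shut down.
      isServiceStarted = false;
    } finally {
      serviceStateLock.writeLock().unlock();
    }
    // ... then it is safe to shut down the renewer thread pool.
  }
}
{code}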

> Race Condition When Stopping DelegationTokenRenewer
> ---
>
> Key: YARN-7962
> URL: https://issues.apache.org/jira/browse/YARN-7962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7962.1.patch, YARN-7962.2.patch, YARN-7962.3.patch, 
> YARN-7962.4.patch
>
>
> [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java]
> {code:java}
>   private ThreadPoolExecutor renewerService;
>   private void processDelegationTokenRenewerEvent(
>   DelegationTokenRenewerEvent evt) {
> serviceStateLock.readLock().lock();
> try {
>   if (isServiceStarted) {
> renewerService.execute(new DelegationTokenRenewerRunnable(evt));
>   } else {
> pendingEventQueue.add(evt);
>   }
> } finally {
>   serviceStateLock.readLock().unlock();
> }
>   }
>   @Override
>   protected void serviceStop() {
> if (renewalTimer != null) {
>   renewalTimer.cancel();
> }
> appTokens.clear();
> allTokens.clear();
> this.renewerService.shutdown();
> {code}
> {code:java}
> 2018-02-21 11:18:16,253  FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2
>  rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> What I think is going on here is that the {{serviceStop}} method is not 
> setting the {{isServiceStarted}} flag to 'false'.
> Please update so that the {{serviceStop}} method grabs the 
> {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before 
> shutting down the {{renewerService}} thread pool, to avoid this condition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer

2018-04-04 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16425515#comment-16425515
 ] 

Wilfred Spiegelenburg edited comment on YARN-7962 at 4/4/18 1:35 PM:
-

Sorry I made a typo in my comment [~belugabehr]: the locking is correct and 
should stay. The comment should have been about the {{try...finally}} that you 
introduced.
* Locking should stay in the serviceStart method (no change needed in that 
method at all)
* Locking should be added in the serviceStop method around the state change 
like this:
{code}
serviceStateLock.writeLock().lock();
isServiceStarted = false;
serviceStateLock.writeLock().unlock();
{code}
* Neither method should have a {{try...finally}}

Sorry for that


was (Author: wilfreds):
Sorry I made a typo in my comment [~belugabehr]: the locking is correct and 
should stay. The comment should have been about the {{try...finally}} that you 
introduced.
* Locking should stay in the serviceStart method (no change needed in that 
method at all)
* Locking should be added in the serviceStop method around the state change 
like this:
{code}
serviceStateLock.writeLock().lock();
isServiceStarted = true;
serviceStateLock.writeLock().unlock();
{code}
* Neither method should have a {{try...finally}}

Sorry for that

> Race Condition When Stopping DelegationTokenRenewer
> ---
>
> Key: YARN-7962
> URL: https://issues.apache.org/jira/browse/YARN-7962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7962.1.patch, YARN-7962.2.patch, YARN-7962.3.patch, 
> YARN-7962.4.patch
>
>
> [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java]
> {code:java}
>   private ThreadPoolExecutor renewerService;
>   private void processDelegationTokenRenewerEvent(
>   DelegationTokenRenewerEvent evt) {
> serviceStateLock.readLock().lock();
> try {
>   if (isServiceStarted) {
> renewerService.execute(new DelegationTokenRenewerRunnable(evt));
>   } else {
> pendingEventQueue.add(evt);
>   }
> } finally {
>   serviceStateLock.readLock().unlock();
> }
>   }
>   @Override
>   protected void serviceStop() {
> if (renewalTimer != null) {
>   renewalTimer.cancel();
> }
> appTokens.clear();
> allTokens.clear();
> this.renewerService.shutdown();
> {code}
> {code:java}
> 2018-02-21 11:18:16,253  FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2
>  rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> What I think is going on here is that the {{serviceStop}} method is not 
> setting the {{isServiceStarted}} flag to 'false'.
> Please update so that the {{serviceStop}} method grabs the 
> {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before 
> shutting down the {{renewerService}} thread pool, to avoid this condition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, 
