[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360380#comment-16360380
 ] 

genericqa commented on YARN-7292:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m  
6s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  2m  6s{color} | 
{color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m  6s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 297 unchanged - 11 fixed = 299 total (was 308) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
40s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
5s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 37s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m  5s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. {color} |

[jira] [Commented] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-11 Thread Bilwa S T (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360361#comment-16360361
 ] 

Bilwa S T commented on YARN-7905:
-

Thanks [~bibinchundatt] for reporting this issue. I have attached the initial 
patch. Please review.

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905.001.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory if the umask is 027 during NodeManager startup.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure.
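YARN-6708 addressed the same umask problem for other localizer directories. A minimal, standalone sketch of the idea (explicitly resetting parent-directory permissions after creation, independent of the process umask) might look like the following; the class and method names are illustrative, not the actual NodeManager code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class LocalizerDirDemo {

    // Create the leaf directory, then explicitly reset permissions on every
    // level up to the cache root, so a restrictive process umask (e.g. 027)
    // cannot leave an ancestor such as filecache/0 at 750.
    static void mkdirsWithPerms(Path leaf, Path cacheRoot, String perms)
            throws IOException {
        Files.createDirectories(leaf);
        Set<PosixFilePermission> p = PosixFilePermissions.fromString(perms);
        for (Path cur = leaf; cur != null; cur = cur.getParent()) {
            Files.setPosixFilePermissions(cur, p);
            if (cur.equals(cacheRoot)) {
                break; // stop at the cache root; don't touch its parents
            }
        }
    }

    // Demo: returns the resulting permission string of <root>/filecache/0.
    static String demo() {
        try {
            Path base = Files.createTempDirectory("nm-local");
            Path root = base.resolve("filecache");
            mkdirsWithPerms(root.resolve("0").resolve("200"), root, "rwxr-xr-x");
            return PosixFilePermissions.toString(
                Files.getPosixFilePermissions(root.resolve("0")));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // rwxr-xr-x regardless of umask
    }
}
```

With this shape, containers of other users can still traverse into the public cache even when the NodeManager was started with umask 027.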



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7905) Parent directory permission incorrect during public localization

2018-02-11 Thread Bilwa S T (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated YARN-7905:

Attachment: YARN-7905.001.patch

> Parent directory permission incorrect during public localization 
> -
>
> Key: YARN-7905
> URL: https://issues.apache.org/jira/browse/YARN-7905
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Critical
> Attachments: YARN-7905.001.patch
>
>
> Similar to YARN-6708, during public localization we also have to take care of 
> the parent directory if the umask is 027 during NodeManager startup.
> /filecache/0/200
> The directory permission of /filecache/0 is 750, which causes 
> application failure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360356#comment-16360356
 ] 

Sunil G commented on YARN-6858:
---

Thanks [~Naganarasimha] for the patch.

A few comments:

1. {{public abstract Map 
getAttributesToNodes(Set attributes);}} returns a map of 
attributes to a set of nodes. This is something we need to examine a bit, as 
the existing node-labels APIs have proven somewhat tough to use (the recent UI 
work uses the REST endpoint, as does the resource-types work). Since 
*getAttributesToNodes* is not used in this patch, could we remove it from the 
scope of the patch and take it to a separate JIRA to discuss the API more 
concretely?

2. I found *compare* a bit confusing, as it also takes the operation. Many 
operations are mutually exclusive in nature, which contradicts the default 
compare semantics of an object.

3. Rename removeNodeFromLabels to removeNodeFromAttributes.

4. Is AsyncDispatcher used for the store?

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7919) Split timelineservice-hbase module to make YARN-7346 easier

2018-02-11 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360354#comment-16360354
 ] 

Rohith Sharma K S commented on YARN-7919:
-

Thanks [~haibochen] for the quick patch. I got an error while applying the patch 
and resolved it, but compilation failed for me on branch-3.1!

One thing I observed is that the module *hadoop-yarn-server-timelineservice-hbase* 
changed its packaging value to *pom*, which indicates no jar will be generated 
for this module; it only contains submodules. But I see some Java source files 
still remain in this package, which can be removed. I have two doubts about the 
packaging of existing classes:
# which module holds the TimelineSchemaCreator class?
# which module holds the FlowRunCoprocessor class?
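For illustration, a parent module with *pom* packaging produces no jar itself and merely aggregates submodules; a minimal sketch of such a parent pom (the submodule names here are assumptions, not taken from the patch):

```xml
<!-- Illustrative sketch only: "pom" packaging means no jar is built for
     this module; it exists to aggregate its submodules. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <artifactId>hadoop-yarn-server-timelineservice-hbase</artifactId>
  <packaging>pom</packaging>
  <modules>
    <!-- hypothetical submodule names -->
    <module>hadoop-yarn-server-timelineservice-hbase-client</module>
    <module>hadoop-yarn-server-timelineservice-hbase-server</module>
  </modules>
</project>
```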

> Split timelineservice-hbase module to make YARN-7346 easier
> ---
>
> Key: YARN-7919
> URL: https://issues.apache.org/jira/browse/YARN-7919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineservice
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-7919.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7835) [Atsv2] Race condition in NM while publishing events if second attempt launched on same node

2018-02-11 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360320#comment-16360320
 ] 

Rohith Sharma K S commented on YARN-7835:
-

[~haibochen] would you review the latest patch, updated as per your comments? 

> [Atsv2] Race condition in NM while publishing events if second attempt 
> launched on same node
> 
>
> Key: YARN-7835
> URL: https://issues.apache.org/jira/browse/YARN-7835
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-7835.001.patch, YARN-7835.002.patch
>
>
> A race condition is observed: if the master container is killed for some 
> reason and relaunched on the same node, NMTimelinePublisher doesn't add a 
> timelineClient. But once the completed container of the 1st attempt arrives, 
> NMTimelinePublisher removes the timelineClient. 
>  This causes all subsequent event publishing from different clients to fail 
> with an "Application is not found" exception.
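One possible shape for a fix, sketched purely as an illustration (the class and method names below are invented, not the real NMTimelinePublisher API): reference-count the application's active containers and remove the timeline client only when the count drops to zero, so the first attempt's completion cannot evict the client the second attempt still needs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class TimelineClientRegistry {
    private final Map<String, AtomicInteger> refCounts = new ConcurrentHashMap<>();

    // Called when any container of the app starts on this node.
    public void containerStarted(String appId) {
        refCounts.computeIfAbsent(appId, k -> new AtomicInteger())
                 .incrementAndGet();
    }

    // Called when a container finishes; returns true only when the timeline
    // client for the app should actually be removed.
    public boolean containerFinished(String appId) {
        AtomicInteger c = refCounts.get(appId);
        if (c == null) {
            return true; // nothing tracked: safe to remove
        }
        if (c.decrementAndGet() <= 0) {
            refCounts.remove(appId);
            return true;  // last container gone: safe to remove client
        }
        return false;     // second attempt still running: keep client
    }

    public static void main(String[] args) {
        TimelineClientRegistry r = new TimelineClientRegistry();
        r.containerStarted("app_1");  // attempt 1 AM running
        r.containerStarted("app_1");  // attempt 2 AM launched on same node
        System.out.println(r.containerFinished("app_1")); // false: keep client
        System.out.println(r.containerFinished("app_1")); // true: remove
    }
}
```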



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7292) Revisit Resource Profile Behavior

2018-02-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360319#comment-16360319
 ] 

Wangda Tan commented on YARN-7292:
--

Rebased to latest trunk (005)

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.wip.001.patch
>
>
> Had offline discussions with [~templedf], [~vvasudev], [~sunilg]. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run on Amazon’s G2.8X can also run on YARN with a G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?
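For context on item 2, a centrally configured profile set with mandatory minimum/maximum/default entries is typically expressed as a JSON map from profile name to resources; the exact keys and the "compute" profile below are assumptions for illustration, not taken from this thread:

```json
{
  "minimum": { "memory-mb": 1024, "vcores": 1 },
  "default": { "memory-mb": 2048, "vcores": 2 },
  "maximum": { "memory-mb": 8192, "vcores": 4 },
  "compute": { "memory-mb": 4096, "vcores": 4 }
}
```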



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7292) Revisit Resource Profile Behavior

2018-02-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7292:
-
Attachment: YARN-7292.005.patch

> Revisit Resource Profile Behavior
> -
>
> Key: YARN-7292
> URL: https://issues.apache.org/jira/browse/YARN-7292
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7292.002.patch, YARN-7292.003.patch, 
> YARN-7292.004.patch, YARN-7292.005.patch, YARN-7292.wip.001.patch
>
>
> Had offline discussions with [~templedf], [~vvasudev], [~sunilg]. There are a 
> couple of resource-profile-related behaviors that might need to be updated:
> 1) Configure resource profiles on the server side or the client side: 
> Currently resource profiles can only be configured centrally:
> - Advantages:
> A given resource profile has the same meaning across the cluster. It won’t 
> change when we run different apps in different configurations. A job that can 
> run on Amazon’s G2.8X can also run on YARN with a G2.8X profile. A side 
> benefit is that the YARN scheduler can potentially do better bin packing.
> - Disadvantages: 
> Hard for applications to add their own resource profiles. 
> 2) Do we really need mandatory resource profiles such as 
> minimum/maximum/default? 
> 3) Should we send the resource profile name inside ResourceRequest, or should 
> the client/AM translate it to a resource and set it on the existing resource 
> fields? 
> 4) Related to the above, should we allow resource overrides, or should the 
> client/AM send the final resource to the RM?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7920:
-
Attachment: YARN-7920.001.patch

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7920.001.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete scheduling-request.allowed in 
> CS and update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me), b. 
> scheduler. 
> - Add a new PlacementProcessor that just passes SchedulingRequest to the 
> scheduler without any modifications.
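Under this proposal, the single yarn-site.xml knob might look like the following; the values come from the proposal text, but the exact property name is an assumption:

```xml
<!-- Hypothetical single handler switch replacing both
     placement-constraints.enabled (yarn-site.xml) and
     scheduling-request.allowed (capacity-scheduler.xml). -->
<property>
  <name>yarn.resourcemanager.placement-constraints.handler</name>
  <!-- acceptable values per the proposal: none | external-processor | scheduler -->
  <value>external-processor</value>
</property>
```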



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360317#comment-16360317
 ] 

Wangda Tan commented on YARN-7920:
--

Attached ver.1 patch, [~kkaranasos]/[~asuresh]/[~sunilg] please help with 
review.

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7920.001.patch
>
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete scheduling-request.allowed in 
> CS and update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me), b. 
> scheduler. 
> - Add a new PlacementProcessor that just passes SchedulingRequest to the 
> scheduler without any modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-11 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360283#comment-16360283
 ] 

Weiwei Yang edited comment on YARN-7921 at 2/12/18 3:57 AM:


Thanks [~kkaranasos] for the quick feedback. I agree we should follow the same 
syntax as in the doc, but I want to double-check that we are referring to the 
same doc. I am referring to what was done for DS, like the following:
{code:java}
NOTIN,NODE,zk

IN,RACK,zk

CARDINALITY,NODE,hbase,1,3
{code}
I have implemented a parser, {{PlacementConstraintParser}}, in YARN-7838, which 
is able to parse such expressions into corresponding Java instances, e.g.
{code:java}
// 1) parse a string expression to an AbstractConstraint
AbstractConstraint constraint = 
PlacementConstraintParser.parseExpression("NOTIN,NODE,zk");

// 2) transform an AbstractConstraint to a string expression
constraint.toString();
// this returns "NOTIN,NODE,zk"
{code}
What I am trying to do in this task is to implement 2), which will be done by 
implementing the {{AbstractConstraint#toString}} methods.

Please share your thoughts on this.

Thanks.
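A minimal, self-contained sketch of the reverse mapping discussed here, using simplified stand-in types rather than the real PlacementConstraint classes (the enum and method below are invented for illustration):

```java
public class ConstraintDemo {
    enum Op { IN, NOTIN, CARDINALITY }

    // Render a constraint back into the DS-style "OP,SCOPE,args..." syntax
    // that the parser consumes, so parse/toString round-trip exactly.
    static String toExpression(Op op, String scope, String... args) {
        StringBuilder sb = new StringBuilder(op.name()).append(',').append(scope);
        for (String a : args) {
            sb.append(',').append(a);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toExpression(Op.NOTIN, "NODE", "zk"));
        // prints: NOTIN,NODE,zk
        System.out.println(toExpression(Op.CARDINALITY, "NODE", "hbase", "1", "3"));
        // prints: CARDINALITY,NODE,hbase,1,3
    }
}
```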

 


was (Author: cheersyang):
Thanks [~kkaranasos] for the quick feedback. I agree that to follow same syntax 
as in the doc, but want to double check if we are referring to the same doc. I 
am referring to what was done for DS like following:
{code:java}
NOTIN,NODE,zk

IN,RACK,zk

CARDINALITY,NODE,hbase,1,3
{code}
I have implemented a parser {{PlacementConstraintParser}} which is able to 
parse such expressions to a corresponding java instance, e.g
{code:java}
// 1) parse a string expression to an AbstractConstraint

AbstractConstraint constraint = 
PlacementConstraintParser.parseExpression("NOTIN,NODE,zk");

// 2) transform an AbstractConstraint to a string expression

constraint.toString();

// this returns "NOTIN,NODE,zk"

{code}
what I was trying to do with this task is to implement 2). Which will be done 
by implementing {{AbstractConstraint#toString}} methods.

Please share your thoughts on this.

Thanks.

 

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> Purpose:
> Make placement constraints viewable on the UI or in logs, e.g. print an app's 
> placement constraints on the RM app page. This helps users use constraints 
> and analyze placement issues more easily.
> Proposal:
> Like what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint to a 
> string using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org





[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-11 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360265#comment-16360265
 ] 

Wangda Tan commented on YARN-7920:
--

Thanks [~kkaranasos], I would still prefer "scheduler"; otherwise it will 
duplicate the yarn.resourcemanager.scheduler config, and once FS wants to 
support the feature we would need to add a new option, document it, etc. We can 
add a check in the processor to throw an exception if the configured scheduler 
is not CS. Sounds like a plan?

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete scheduling-request.allowed in 
> CS and update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me), b. 
> scheduler. 
> - Add a new PlacementProcessor that just passes SchedulingRequest to the 
> scheduler without any modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7920) Cleanup configuration of PlacementConstraints

2018-02-11 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360261#comment-16360261
 ] 

Konstantinos Karanasos commented on YARN-7920:
--

Hi [~leftnoteasy], +1 for simplifying the configuration. I think it is a good 
idea to use just one conf.

I would call the scheduler choice "capacity-scheduler", given that it would not 
work for the fair scheduler.

> Cleanup configuration of PlacementConstraints
> -
>
> Key: YARN-7920
> URL: https://issues.apache.org/jira/browse/YARN-7920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
>
> Currently it is very confusing to have the two configs in two different files 
> (yarn-site.xml and capacity-scheduler.xml). 
>  
> Maybe a better approach is: we can delete scheduling-request.allowed in 
> CS and update the placement-constraints configs in yarn-site.xml a bit: 
>  
> - Remove placement-constraints.enabled and add a new 
> placement-constraints.handler, which defaults to none; other acceptable 
> values are a. external-processor (since "algorithm" is too generic to me), b. 
> scheduler. 
> - Add a new PlacementProcessor that just passes SchedulingRequest to the 
> scheduler without any modifications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-11 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360255#comment-16360255
 ] 

Konstantinos Karanasos commented on YARN-7921:
--

Hi [~cheersyang], I think this will certainly be useful.

We can indeed use the same syntax we described in the documentation. Also, you 
just need to support SingleConstraint and then the combined constraints. You 
don't need a special case for the Cardinality and Target constraints, as you 
can use the transformer to turn those into SingleConstraints.

Also, let's try to use the visitor pattern that we created for the 
transformers.

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> Purpose:
> Let placement constraint viewable on UI or log, e.g print app placement 
> constraint in RM app page. Help user to use constraints and analysis 
> placement issues easier.
> Propose:
> Like what was added for DS, toString is a reversed process of 
> {{PlacementConstraintParser}} that transforms a PlacementConstraint to a 
> string, using same syntax. E.g
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}






[jira] [Updated] (YARN-7739) DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation

2018-02-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7739:
-
Fix Version/s: 3.1.0

> DefaultAMSProcessor should properly check customized resource types against 
> minimum/maximum allocation
> --
>
> Key: YARN-7739
> URL: https://issues.apache.org/jira/browse/YARN-7739
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Fix For: 3.1.0
>
> Attachments: YARN-7339.002.patch, YARN-7739.001.patch
>
>
> Currently, the YARN RM rejects a requested resource if memory or vcores are 
> less than 0 or greater than the maximum allocation. We should run the same 
> check for customized resource types as well.
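The check described in this issue can be sketched generically as iterating over every requested resource type, not just memory and vcores. The class and method names below are illustrative, not the actual SchedulerUtils/DefaultAMSProcessor code from the patch.

```java
import java.util.Map;

// Hedged sketch of the validation described above: reject a request if any
// resource type (memory, vcores, or a customized type such as GPUs) is
// negative or exceeds the maximum allocation. Names are illustrative,
// not the actual SchedulerUtils/DefaultAMSProcessor code.
public class ResourceRequestCheck {
  public static void validate(Map<String, Long> requested,
                              Map<String, Long> maximum) {
    for (Map.Entry<String, Long> e : requested.entrySet()) {
      long value = e.getValue();
      Long max = maximum.get(e.getKey());
      // Run the same bounds check for every type, custom types included.
      if (value < 0 || (max != null && value > max)) {
        throw new IllegalArgumentException(
            "Invalid resource request, type=" + e.getKey() + " value=" + value);
      }
    }
  }

  public static void main(String[] args) {
    Map<String, Long> req = Map.of("memory-mb", 1024L, "vcores", 2L, "gpu", 1L);
    Map<String, Long> max = Map.of("memory-mb", 8192L, "vcores", 8L, "gpu", 4L);
    validate(req, max); // every type is within bounds, so no exception
    System.out.println("valid");
  }
}
```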






[jira] [Commented] (YARN-7739) DefaultAMSProcessor should properly check customized resource types against minimum/maximum allocation

2018-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360240#comment-16360240
 ] 

Hudson commented on YARN-7739:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13643 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13643/])
YARN-7739. DefaultAMSProcessor should properly check customized resource 
(wangda: rev d02e42cee4a08a47ed2835f7a4a100daaa95833f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationMasterService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java


> DefaultAMSProcessor should properly check customized resource types against 
> minimum/maximum allocation
> --
>
> Key: YARN-7739
> URL: https://issues.apache.org/jira/browse/YARN-7739
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-7339.002.patch, YARN-7739.001.patch
>
>
> Currently, the YARN RM rejects a requested resource if memory or vcores are 
> less than 0 or greater than the maximum allocation. We should run the same 
> check for customized resource types as well.






[jira] [Commented] (YARN-5848) Remove unnecessary public/crossdomain.xml from YARN UIv2 sub project

2018-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360238#comment-16360238
 ] 

Hudson commented on YARN-5848:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13643 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13643/])
YARN-5848. Remove unnecessary public/crossdomain.xml from YARN UIv2 sub 
(wangda: rev 789a185c16351d2343e075413a50eb3e5849cc5f)
* (delete) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml


> Remove unnecessary public/crossdomain.xml from YARN UIv2 sub project
> 
>
> Key: YARN-5848
> URL: https://issues.apache.org/jira/browse/YARN-5848
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha2, 3.1.0
>Reporter: Allen Wittenauer
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-5848.001.patch
>
>
> crossdomain.xml should really have an ASF header in it and live somewhere in 
> the src directory. There's zero reason for it to have a RAT exception, given 
> that comments are possible in XML files. It's also not in a standard Maven 
> location, which should really be fixed.
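On the point that comments are possible in XML files: an ASF license header can be carried as an ordinary XML comment, so no RAT exception is needed. A hedged illustration (the policy body shown is the generic crossdomain.xml format, not necessarily this project's file):

```xml
<?xml version="1.0"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
-->
<cross-domain-policy>
  <allow-access-from domain="*"/>
</cross-domain-policy>
```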






[jira] [Commented] (YARN-7697) NM goes down with OOM due to leak in log-aggregation

2018-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360239#comment-16360239
 ] 

Hudson commented on YARN-7697:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13643 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13643/])
YARN-7697. NM goes down with OOM due to leak in log-aggregation. (Xuan (wangda: 
rev d4c98579e36df7eeb788352d7b76cd2c7448c511)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/LogAggregationFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/IndexedFileAggregatedLogsBlock.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/TestLogAggregationIndexFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/ifile/LogAggregationIndexedFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/LogAggregationTFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/filecontroller/TestLogAggregationFileControllerFactory.java


> NM goes down with OOM due to leak in log-aggregation
> 
>
> Key: YARN-7697
> URL: https://issues.apache.org/jira/browse/YARN-7697
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: Santhosh B Gowda
>Assignee: Xuan Gong
>Priority: Blocker
> Attachments: YARN-7697.1.patch, YARN-7697.2.patch, YARN-7697.3.patch
>
>
> 2017-12-29 01:43:50,601 FATAL yarn.YarnUncaughtExceptionHandler 
> (YarnUncaughtExceptionHandler.java:uncaughtException(51)) - Thread 
> Thread[LogAggregationService #0,5,main] threw an Error.  Shutting down now...
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadIndexedLogsMeta(LogAggregationIndexedFileController.java:823)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.loadIndexedLogsMeta(LogAggregationIndexedFileController.java:840)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.initializeWriterInRolling(LogAggregationIndexedFileController.java:293)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.access$600(LogAggregationIndexedFileController.java:98)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController$1.run(LogAggregationIndexedFileController.java:216)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
> at 
> org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.initializeWriter(LogAggregationIndexedFileController.java:197)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:205)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:312)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:284)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$1.run(LogAggregationService.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2017-12-29 01:43:50,601 INFO  application.ApplicationImpl 
> (ApplicationImpl.java:handle(464)) - Application ap






[jira] [Commented] (YARN-7906) Fix mvn site fails with error: Multiple sources of package comments found for package "o.a.h.y.client.api.impl"

2018-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360237#comment-16360237
 ] 

Hudson commented on YARN-7906:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13643 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13643/])
YARN-7906. Fix mvn site fails with error: Multiple sources of package (wangda: 
rev e795833d8c1981cab85a10b4e516cd0c5423c792)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java


> Fix mvn site fails with error: Multiple sources of package comments found for 
> package "o.a.h.y.client.api.impl"
> ---
>
> Key: YARN-7906
> URL: https://issues.apache.org/jira/browse/YARN-7906
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: YARN-7906.001.patch
>
>
> {{mvn site}} fails on trunk.
> {noformat}
> [ERROR] javadoc: warning - Multiple sources of package comments found for 
> package "org.apache.hadoop.yarn.client.api.impl"
> [ERROR] 
> /home/travis/build/aajisaka/hadoop-document/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java:21:
>  error: package org.apache.hadoop.yarn.api.resource has already been annotated
> [ERROR] @InterfaceAudience.Private
> [ERROR] ^
> [ERROR] java.lang.AssertionError
> [ERROR] at com.sun.tools.javac.util.Assert.error(Assert.java:126)
> [ERROR] at com.sun.tools.javac.util.Assert.check(Assert.java:45)
> [ERROR] at 
> com.sun.tools.javac.code.SymbolMetadata.setDeclarationAttributesWithCompletion(SymbolMetadata.java:161)
> [ERROR] at 
> com.sun.tools.javac.code.Symbol.setDeclarationAttributesWithCompletion(Symbol.java:215)
> [ERROR] at 
> com.sun.tools.javac.comp.MemberEnter.actualEnterAnnotations(MemberEnter.java:952)
> [ERROR] at 
> com.sun.tools.javac.comp.MemberEnter.access$600(MemberEnter.java:64)
> [ERROR] at com.sun.tools.javac.comp.MemberEnter$5.run(MemberEnter.java:876)
> [ERROR] at com.sun.tools.javac.comp.Annotate.flush(Annotate.java:143)
> [ERROR] at com.sun.tools.javac.comp.Annotate.enterDone(Annotate.java:129)
> [ERROR] at com.sun.tools.javac.comp.Enter.complete(Enter.java:512)
> [ERROR] at com.sun.tools.javac.comp.Enter.main(Enter.java:471)
> [ERROR] at com.sun.tools.javadoc.JavadocEnter.main(JavadocEnter.java:78)
> [ERROR] at 
> com.sun.tools.javadoc.JavadocTool.getRootDocImpl(JavadocTool.java:186)
> [ERROR] at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:346)
> [ERROR] at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR] at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR] at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR] at com.sun.tools.javadoc.Main.main(Main.java:54)
> [ERROR] javadoc: error - fatal error
> {noformat}
> [https://travis-ci.org/aajisaka/hadoop-document/builds/338833122]






[jira] [Comment Edited] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360222#comment-16360222
 ] 

Weiwei Yang edited comment on YARN-6858 at 2/12/18 2:47 AM:


Hi [~Naganarasimha]

Except for the Jenkins warnings, the patch mostly looks good to me. Some 
comments:
{code:java}
AttributeValue#compare(AttributeValue other, AttributeExpressionOperation op)
{code}
This API compares one value with another via an operator. But for IN/NOT_IN, it 
is more of a set-based comparison: for a given attribute value, we need to 
check whether it is in (or not in) a set of strings. In that case, this API 
doesn't seem appropriate.

*NodeAttributePBImpl*

It seems you have included the changes from YARN-7892; have you checked the 
comments I left on that one (see this comment)? Basically, I think we should 
have a more restrictive equals implementation to avoid confusion.

*NodeAttributesManagerImpl*
 # We need a UT for #validate, but I am OK with tracking this in a 
lower-priority JIRA.
 # NodeLabelUtil.checkAndThrowLabelName(attribute.getAttributePrefix()): this 
pattern doesn't seem to allow the DNS format of prefixes; could you please 
double-check?

The rest looks good to me.

Thanks for the updates.

 


was (Author: cheersyang):
Hi [~Naganarasimha]

Except for the Jenkins warnings, the patch mostly looks good to me. Some 
comments:
{code:java}
AttributeValue#compare(AttributeValue other, AttributeExpressionOperation op)
{code}
This API compares one value with another via an operator. But for IN/NOT_IN, it 
is more of a set-based comparison: for a given attribute value, we need to 
check whether it is in (or not in) a set of strings. In that case, this API 
doesn't seem appropriate.

*NodeAttributePBImpl*

It seems you have included the changes from YARN-7892; have you checked the 
comments I left on that one (see this comment)? Basically, I think we should 
have a more restrictive equals implementation to avoid confusion.

*NodeAttributesManagerImpl*
 # We need a UT for #validate, but I am OK with tracking this in a 
lower-priority JIRA.
 # NodeLabelUtil.checkAndThrowLabelName(attribute.getAttributePrefix()): this 
pattern doesn't seem to allow the DNS format of prefixes; could you please 
double-check?
 # You still used the {{Host}} abstraction; I am assuming you have tried it and 
it doesn't work if we read the info, for example, just from RMNode.

The rest looks good to me.

Thanks for the updates.

 

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Comment Edited] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360222#comment-16360222
 ] 

Weiwei Yang edited comment on YARN-6858 at 2/12/18 2:43 AM:


Hi [~Naganarasimha]

Except for the Jenkins warnings, the patch mostly looks good to me. Some 
comments:
{code:java}
AttributeValue#compare(AttributeValue other, AttributeExpressionOperation op)
{code}
This API compares one value with another via an operator. But for IN/NOT_IN, it 
is more of a set-based comparison: for a given attribute value, we need to 
check whether it is in (or not in) a set of strings. In that case, this API 
doesn't seem appropriate.

*NodeAttributePBImpl*

It seems you have included the changes from YARN-7892; have you checked the 
comments I left on that one (see this comment)? Basically, I think we should 
have a more restrictive equals implementation to avoid confusion.

*NodeAttributesManagerImpl*
 # We need a UT for #validate, but I am OK with tracking this in a 
lower-priority JIRA.
 # NodeLabelUtil.checkAndThrowLabelName(attribute.getAttributePrefix()): this 
pattern doesn't seem to allow the DNS format of prefixes; could you please 
double-check?
 # You still used the {{Host}} abstraction; I am assuming you have tried it and 
it doesn't work if we read the info, for example, just from RMNode.

The rest looks good to me.

Thanks for the updates.

 


was (Author: cheersyang):
Hi [~Naganarasimha]

Except for the Jenkins warnings, the patch mostly looks good to me. Some 
comments:

{code}
AttributeValue#compare(AttributeValue other, AttributeExpressionOperation op)
{code}

This API compares one value with another via an operator. But for IN/NOT_IN, it 
is more of a set-based comparison: for a given attribute value, we need to 
check whether it is in (or not in) a set of strings. In that case, this API 
doesn't seem appropriate.

*NodeAttributePBImpl*

It seems you have included the changes from YARN-7892; have you checked the 
comments I left on that one (see this comment)? Basically, I think we should 
have a more restrictive equals implementation to avoid confusion.

*NodeAttributesManagerImpl*
 # We need a UT for #validate, but I am OK with tracking this in a 
lower-priority JIRA.
 # NodeLabelUtil.checkAndThrowLabelName(attribute.getAttributePrefix()): this 
pattern doesn't seem to allow the DNS format of prefixes; could you please 
double-check?
 # You still used the {{Host}} abstraction; I am assuming you have tried it and 
it doesn't work if we read the info, for example, just from RMNode.

The rest looks good to me.

Thanks for the updates.

 

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360222#comment-16360222
 ] 

Weiwei Yang commented on YARN-6858:
---

Hi [~Naganarasimha]

Except for the Jenkins warnings, the patch mostly looks good to me. Some 
comments:

{code}
AttributeValue#compare(AttributeValue other, AttributeExpressionOperation op)
{code}

This API compares one value with another via an operator. But for IN/NOT_IN, it 
is more of a set-based comparison: for a given attribute value, we need to 
check whether it is in (or not in) a set of strings. In that case, this API 
doesn't seem appropriate.

*NodeAttributePBImpl*

It seems you have included the changes from YARN-7892; have you checked the 
comments I left on that one (see this comment)? Basically, I think we should 
have a more restrictive equals implementation to avoid confusion.

*NodeAttributesManagerImpl*
 # We need a UT for #validate, but I am OK with tracking this in a 
lower-priority JIRA.
 # NodeLabelUtil.checkAndThrowLabelName(attribute.getAttributePrefix()): this 
pattern doesn't seem to allow the DNS format of prefixes; could you please 
double-check?
 # You still used the {{Host}} abstraction; I am assuming you have tried it and 
it doesn't work if we read the info, for example, just from RMNode.

The rest looks good to me.

Thanks for the updates.
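The set-based comparison suggested for IN/NOT_IN could look roughly like the sketch below. The enum and method names ({{Op}}, {{matches}}) are hypothetical illustrations of the suggestion, not the patch's actual AttributeValue API.

```java
import java.util.Set;

// Hedged sketch of set-based comparison for IN/NOT_IN, as suggested above.
// Op and matches(...) are hypothetical names, not the patch's actual API.
public class AttributeSetMatcher {
  public enum Op { IN, NOT_IN }

  // For IN/NOT_IN the check is membership of the attribute value in a set
  // of candidate strings, not a pairwise value-to-value comparison.
  public static boolean matches(String value, Set<String> candidates, Op op) {
    boolean contained = candidates.contains(value);
    return op == Op.IN ? contained : !contained;
  }

  public static void main(String[] args) {
    Set<String> javaVersions = Set.of("1.8", "11");
    System.out.println(matches("1.8", javaVersions, Op.IN));      // true
    System.out.println(matches("1.8", javaVersions, Op.NOT_IN));  // false
  }
}
```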

 

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






[jira] [Updated] (YARN-5848) Remove unnecessary public/crossdomain.xml from YARN UIv2 sub project

2018-02-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5848:
-
Summary: Remove unnecessary public/crossdomain.xml from YARN UIv2 sub 
project  (was: public/crossdomain.xml is problematic)

> Remove unnecessary public/crossdomain.xml from YARN UIv2 sub project
> 
>
> Key: YARN-5848
> URL: https://issues.apache.org/jira/browse/YARN-5848
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha2, 3.1.0
>Reporter: Allen Wittenauer
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-5848.001.patch
>
>
> crossdomain.xml should really have an ASF header in it and live somewhere in 
> the src directory. There's zero reason for it to have a RAT exception, given 
> that comments are possible in XML files. It's also not in a standard Maven 
> location, which should really be fixed.






[jira] [Updated] (YARN-7906) Fix mvn site fails with error: Multiple sources of package comments found for package "o.a.h.y.client.api.impl"

2018-02-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7906:
-
Summary: Fix mvn site fails with error: Multiple sources of package 
comments found for package "o.a.h.y.client.api.impl"  (was: mvn site fails)

> Fix mvn site fails with error: Multiple sources of package comments found for 
> package "o.a.h.y.client.api.impl"
> ---
>
> Key: YARN-7906
> URL: https://issues.apache.org/jira/browse/YARN-7906
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Attachments: YARN-7906.001.patch
>
>
> {{mvn site}} fails on trunk.
> {noformat}
> [ERROR] javadoc: warning - Multiple sources of package comments found for 
> package "org.apache.hadoop.yarn.client.api.impl"
> [ERROR] 
> /home/travis/build/aajisaka/hadoop-document/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/resource/package-info.java:21:
>  error: package org.apache.hadoop.yarn.api.resource has already been annotated
> [ERROR] @InterfaceAudience.Private
> [ERROR] ^
> [ERROR] java.lang.AssertionError
> [ERROR] at com.sun.tools.javac.util.Assert.error(Assert.java:126)
> [ERROR] at com.sun.tools.javac.util.Assert.check(Assert.java:45)
> [ERROR] at 
> com.sun.tools.javac.code.SymbolMetadata.setDeclarationAttributesWithCompletion(SymbolMetadata.java:161)
> [ERROR] at 
> com.sun.tools.javac.code.Symbol.setDeclarationAttributesWithCompletion(Symbol.java:215)
> [ERROR] at 
> com.sun.tools.javac.comp.MemberEnter.actualEnterAnnotations(MemberEnter.java:952)
> [ERROR] at 
> com.sun.tools.javac.comp.MemberEnter.access$600(MemberEnter.java:64)
> [ERROR] at com.sun.tools.javac.comp.MemberEnter$5.run(MemberEnter.java:876)
> [ERROR] at com.sun.tools.javac.comp.Annotate.flush(Annotate.java:143)
> [ERROR] at com.sun.tools.javac.comp.Annotate.enterDone(Annotate.java:129)
> [ERROR] at com.sun.tools.javac.comp.Enter.complete(Enter.java:512)
> [ERROR] at com.sun.tools.javac.comp.Enter.main(Enter.java:471)
> [ERROR] at com.sun.tools.javadoc.JavadocEnter.main(JavadocEnter.java:78)
> [ERROR] at 
> com.sun.tools.javadoc.JavadocTool.getRootDocImpl(JavadocTool.java:186)
> [ERROR] at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:346)
> [ERROR] at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR] at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR] at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR] at com.sun.tools.javadoc.Main.main(Main.java:54)
> [ERROR] javadoc: error - fatal error
> {noformat}
> [https://travis-ci.org/aajisaka/hadoop-document/builds/338833122]






[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360175#comment-16360175
 ] 

genericqa commented on YARN-6858:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 48 new + 115 unchanged - 1 fixed = 163 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 2 new + 4183 unchanged - 0 fixed = 4185 total (was 4183) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
46s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the 

[jira] [Commented] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16360128#comment-16360128
 ] 

Naganarasimha G R commented on YARN-6858:
-

Thanks Weiwei and Bibin for the comments.
 For Weiwei's comments:
 * All comments except the 4th one on AttributeNodeLabelsManager are accepted 
and reworked, but I was not able to follow the 4th one; perhaps you were 
referring to an older patch.
 * AttributeNodeLabelsManagerImpl: 1, 2, 3 and 5 are accepted, and 4, as 
discussed in the meeting, has been addressed by removing the config check.
 * AttributeNodeLabelsManagerImpl, 4th point: as per the discussion, I am not 
checking for the node labels.
 * AttributeNodeLabelsManagerImpl line 204, validateAndGetNodeToAttribMap: it 
has been optimized to check that the cluster-level node attribute type does 
not get mismatched, so I was not able to extract it out effectively for reuse 
in other places like the script-based provider. But I have made it readable 
by extracting it into a method.
 * AttributeNodeLabelsManagerImpl 7 & 8 will be discussed and handled in 
another jira.

For Bibin's comments:
 * _List nodesToAttributes is better than_: with that intention I had written 
the api that way in the 001 patch, but somehow was not convinced to have an 
api based on particular scenarios, and in the end, on having a look, was not 
able to understand why the api was a list rather than a set or map. I did not 
want to write the api for the needs of the proto communication, hence 
reverted the api to the current syntax.
 * NodesToAttributesMappingRequestPBImpl can still be used as it will be 
processed; the only thing is that it needs to be recreated after processing 
based on this api. I think we cannot avoid that, similar to labels, to keep 
the interface clean.
 * The whitespace error in AttributeValue has been corrected.
 * AttributeNodeLabelsManagerImpl, correct params: done.
 * NodeAttributesStoreEvent is under 
"org.apache.hadoop.yarn.server.resourcemanager.nodelabels"; maybe I failed to 
understand your comment.
 * _Map getAttributesForNode(String hostName)_ already exists in 
NodeAttributesManager and has similar functionality. I was not able to find a 
use case for fetching for all nodes (any reason why?), and also wanted to 
avoid tying the manager's api to DAOs like NodeToAttributes. In any case I 
think it is not a blocking issue; as discussed, will handle it in another 
jira.
 * AdminService-related changes, as discussed in the meeting, are planned to 
be handled as part of the CLI changes for interacting with the Manager (2a 
listed in our doc); the other wiring I think is simple and should be 
available as part of this jira, hence handling it here itself.

 

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6858) Attribute Manager to store and provide the attributes in RM

2018-02-11 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6858:

Attachment: YARN-6858-YARN-3409.003.patch

> Attribute Manager to store and provide the attributes in RM
> ---
>
> Key: YARN-6858
> URL: https://issues.apache.org/jira/browse/YARN-6858
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-6858-YARN-3409.001.patch, 
> YARN-6858-YARN-3409.002.patch, YARN-6858-YARN-3409.003.patch
>
>
> Similar to CommonNodeLabelsManager we need to have a centralized manager for 
> Node Attributes too.






Appmaster failed to launch container in alternate nodemanager after it connection timeout in one NM.

2018-02-11 Thread Khireswar Kalita
Dear friends,

Need some help to know root cause of this issue.

During a Sqoop job failure, it has been noticed that the app master was not
able to connect to an NM due to connection timeout issues, and it kept
retrying the connection for close to 2 hours until it was killed manually.
The timeout was due to a temporary network issue.

Here is an overview of what happened:

RM <-> NM01(hdpn01)  Network ok
RM <-> NM08(hdpn08)  Network ok
NM01 <---X---> NM08 Network failed

AppMaster container launched at NM01 node.

Here is a brief log:

INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before
Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0
AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0
HostLocal:0 RackLocal:0
2018-02-03 21:12:51,734 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources()
for application_1517675224254_1052: ask=1 release= 0 newContainers=0
finishedContainers=0 resourcelimit= knownNMs=24
2018-02-03 21:12:52,751 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated
containers 1
2018-02-03 21:12:52,793 INFO [RMCommunicator Allocator]
org.apache.hadoop.yarn.util.RackResolver: Resolved hdpn08.ztpl.net to
/default-rack
2018-02-03 21:12:52,797 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned
container container_1517675224254_1052_02_02 to
attempt_1517675224254_1052_m_00_1000
2018-02-03 21:12:52,799 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After
Scheduling: PendingReds:0 ScheduledMaps:0 Sc

..

2018-02-03 22:43:58,911 WARN [ContainerLauncher #0]
org.apache.hadoop.ipc.Client: Failed to connect to server:
hdpn08.ztpl.net/172.20.1.108:45454: retries get failed due to exceeded
maximum allowed retries number: 0
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)


Why did the AM keep retrying the connection to the NM on hdpn08 for 2 hours,
until the time it was manually killed? If it had not been killed, it would
have continued for much longer.

Why did the AM not stop trying after some maximum number of tries? Is there
any max-attempts property for the application master?

Why did the AM not spin up another map task to compensate for this
problematic task?
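
Regarding the max-attempts question: on the client side, NM connection
retries are typically bounded by settings along these lines. This is a
hedged sketch; the property names come from Hadoop's yarn-default.xml and
core-default.xml, but the values shown are illustrative only, so verify
names and defaults against your release.

```xml
<!-- Bounds the total time the client-side NM proxy (used by the AM's
     ContainerLauncher) spends trying to connect to a NodeManager. -->
<property>
  <name>yarn.client.nodemanager-connect.max-wait-ms</name>
  <value>180000</value> <!-- illustrative value -->
</property>
<!-- Interval between successive NM connection attempts. -->
<property>
  <name>yarn.client.nodemanager-connect.retry-interval-ms</name>
  <value>10000</value> <!-- illustrative value -->
</property>
<!-- IPC-level retry count for connect timeouts. -->
<property>
  <name>ipc.client.connect.max.retries.on.timeouts</name>
  <value>45</value> <!-- illustrative value -->
</property>
```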



Thanks
Khireswar Kalita


[jira] [Commented] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-11 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16359871#comment-16359871
 ] 

Weiwei Yang commented on YARN-7921:
---

[~asuresh], [~kkaranasos], please let me know your thoughts on this proposal. 
Thanks

> Transform a PlacementConstraint to a string expression
> --
>
> Key: YARN-7921
> URL: https://issues.apache.org/jira/browse/YARN-7921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
>
> Purpose:
> Make placement constraints viewable in the UI or logs, e.g. print an app's 
> placement constraints on the RM app page. This helps users work with 
> constraints and analyze placement issues more easily.
> Proposal:
> Similar to what was added for DS, toString is the reverse of 
> {{PlacementConstraintParser}}: it transforms a PlacementConstraint into a 
> string using the same syntax. E.g.
> {code}
> AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
> constraintExpr.toString();
> // This prints: IN,NODE,hbase-m
> {code}






[jira] [Created] (YARN-7921) Transform a PlacementConstraint to a string expression

2018-02-11 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-7921:
-

 Summary: Transform a PlacementConstraint to a string expression
 Key: YARN-7921
 URL: https://issues.apache.org/jira/browse/YARN-7921
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Purpose:

Make placement constraints viewable in the UI or logs, e.g. print an app's 
placement constraints on the RM app page. This helps users work with 
constraints and analyze placement issues more easily.

Proposal:

Similar to what was added for DS, toString is the reverse of 
{{PlacementConstraintParser}}: it transforms a PlacementConstraint into a 
string using the same syntax. E.g.

{code}
AbstractConstraint constraintExpr = targetIn(NODE, allocationTag("hbase-m"));
constraintExpr.toString();
// This prints: IN,NODE,hbase-m
{code}
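
A minimal, self-contained sketch of the idea, using a hypothetical
SingleConstraint class rather than the real Hadoop PlacementConstraint
classes (which differ in structure and naming):

```java
// Hypothetical stand-in for a single target constraint; the real
// Hadoop classes are more elaborate.
public class SingleConstraint {
    private final String op;     // e.g. "IN" or "NOTIN"
    private final String scope;  // e.g. "NODE" or "RACK"
    private final String tag;    // allocation tag, e.g. "hbase-m"

    public SingleConstraint(String op, String scope, String tag) {
        this.op = op;
        this.scope = scope;
        this.tag = tag;
    }

    // The reverse of the parser: emit the same comma-separated
    // syntax that the parser accepts.
    @Override
    public String toString() {
        return op + "," + scope + "," + tag;
    }

    public static void main(String[] args) {
        SingleConstraint c = new SingleConstraint("IN", "NODE", "hbase-m");
        System.out.println(c); // IN,NODE,hbase-m
    }
}
```

Because toString emits exactly the parser's input syntax, a constraint can
round-trip: render it, log it, and parse it back to an equivalent object.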






[jira] [Updated] (YARN-7838) Support AND/OR constraints in Distributed Shell

2018-02-11 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7838:
-
Fix Version/s: 3.2.0

> Support AND/OR constraints in Distributed Shell
> ---
>
> Key: YARN-7838
> URL: https://issues.apache.org/jira/browse/YARN-7838
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: distributed-shell
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Critical
>  Labels: constraints
> Fix For: 3.2.0
>
> Attachments: YARN-7838.001.patch, YARN-7838.002.patch, 
> YARN-7838.003.patch, YARN-7838.prelim.patch
>
>
> Extend the DS placement spec syntax to support AND/OR constraints, something 
> like:
> {code}
> // simple
> -placement_spec foo=4,AND(NOTIN,NODE,foo:NOTIN,NODE,bar)
> // nested
> -placement_spec foo=4,AND(NOTIN,NODE,foo:OR(IN,NODE,moo:IN,NODE,bar))
> {code}
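
The nested spec strings above compose mechanically: children joined by ":"
and wrapped in the operator. A hedged, self-contained sketch of that
rendering, using a hypothetical CompositeSpec class rather than the actual
distributed-shell code:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical composite constraint; children are already-rendered
// sub-expressions (either simple "OP,SCOPE,tag" strings or nested specs).
public class CompositeSpec {
    private final String op;             // "AND" or "OR"
    private final List<String> children; // rendered child expressions

    public CompositeSpec(String op, List<String> children) {
        this.op = op;
        this.children = children;
    }

    // Join children with ':' and wrap in the operator, matching the
    // -placement_spec syntax shown above.
    @Override
    public String toString() {
        return op + "(" + String.join(":", children) + ")";
    }

    public static void main(String[] args) {
        CompositeSpec inner = new CompositeSpec("OR",
                Arrays.asList("IN,NODE,moo", "IN,NODE,bar"));
        CompositeSpec outer = new CompositeSpec("AND",
                Arrays.asList("NOTIN,NODE,foo", inner.toString()));
        System.out.println(outer);
        // AND(NOTIN,NODE,foo:OR(IN,NODE,moo:IN,NODE,bar))
    }
}
```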


