[jira] [Updated] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8100:
---
Attachment: YARN-8100-YARN-3409.006.patch

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch, 
> YARN-8100-YARN-3409.006.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}
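
For illustration, a minimal usage sketch of the proposed calls against a running
cluster. The method names come from the description above; the element type of
the Set, the return types, and the Map shape are assumptions about the patch,
not the committed API.

{code:java}
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeAttribute;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class ClusterAttributeQuery {
  public static void main(String[] args) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new Configuration());
    client.start();
    try {
      // Assumed return type: all attributes currently known to the cluster.
      Set<NodeAttribute> attributes = client.getClusterAttributes();
      // Assumed shape: each requested attribute mapped to the hostnames
      // that currently report it.
      Map<NodeAttribute, Set<String>> attributesToNodes =
          client.getAttributesToNodes(attributes);
      attributesToNodes.forEach((attribute, hosts) ->
          System.out.println(attribute + " -> " + hosts));
    } finally {
      client.stop();
    }
  }
}
{code}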






[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430034#comment-16430034
 ] 

Bibin A Chundatt commented on YARN-8100:


Attached a patch to handle the checkstyle issues. The test case failures look 
unrelated.

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch, 
> YARN-8100-YARN-3409.006.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}






[jira] [Updated] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template

2018-04-08 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-7574:
---
Attachment: YARN-7574.12.patch

> Add support for Node Labels on Auto Created Leaf Queue Template
> ---
>
> Key: YARN-7574
> URL: https://issues.apache.org/jira/browse/YARN-7574
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-7574.1.patch, YARN-7574.10.patch, 
> YARN-7574.11.patch, YARN-7574.12.patch, YARN-7574.2.patch, YARN-7574.3.patch, 
> YARN-7574.4.patch, YARN-7574.5.patch, YARN-7574.6.patch, YARN-7574.7.patch, 
> YARN-7574.8.patch, YARN-7574.9.patch
>
>
> YARN-7473 adds support for auto-created leaf queues to inherit node label 
> capacities from parent queues. However, the leaf queue template does not 
> support configuring different capacities for different node labels. 
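
To make the gap concrete, here is a sketch of the kind of leaf-queue-template
property this change would enable. The exact keys and the "GPU" label below are
assumptions for illustration, not documented settings.

{code}
# Hypothetical: per-label capacity on the auto-created leaf queue template
yarn.scheduler.capacity.root.parent.leaf-queue-template.accessible-node-labels=GPU
yarn.scheduler.capacity.root.parent.leaf-queue-template.accessible-node-labels.GPU.capacity=50
{code}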






[jira] [Commented] (YARN-7598) Document how to use classpath isolation for aux-services in YARN

2018-04-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16430026#comment-16430026
 ] 

genericqa commented on YARN-7598:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 19 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7598 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918019/YARN-7598.3.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 90a762d54d74 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5700556 |
| maven | version: Apache Maven 3.3.9 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/20265/artifact/out/whitespace-tabs.txt
 |
| Max. process+thread count | 408 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20265/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document how to use classpath isolation for aux-services in YARN
> 
>
> Key: YARN-7598
> URL: https://issues.apache.org/jira/browse/YARN-7598
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
> Attachments: YARN-7598.2.patch, YARN-7598.3.patch, 
> YARN-7598.trunk.1.patch
>
>







[jira] [Assigned] (YARN-8126) [Follow up] Support auto-spawning of admin configured services during bootstrap of rm

2018-04-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reassigned YARN-8126:
---

Assignee: Rohith Sharma K S

> [Follow up] Support auto-spawning of admin configured services during 
> bootstrap of rm
> -
>
> Key: YARN-8126
> URL: https://issues.apache.org/jira/browse/YARN-8126
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
>
> YARN-8048 adds support for auto-spawning of admin-configured services during 
> the bootstrap of the RM. 
> This JIRA is to follow up on some of the comments discussed in YARN-8048. 






[jira] [Updated] (YARN-8128) Document better the per-node per-app file limit in YARN log aggregation

2018-04-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-8128:

Attachment: YARN-8128.1.patch

> Document better the per-node per-app file limit in YARN log aggregation
> ---
>
> Key: YARN-8128
> URL: https://issues.apache.org/jira/browse/YARN-8128
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
> Attachments: YARN-8128.1.patch
>
>







[jira] [Created] (YARN-8128) Document better the per-node per-app file limit in YARN log aggregation

2018-04-08 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-8128:
---

 Summary: Document better the per-node per-app file limit in YARN 
log aggregation
 Key: YARN-8128
 URL: https://issues.apache.org/jira/browse/YARN-8128
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong
Assignee: Xuan Gong









[jira] [Updated] (YARN-7598) Document how to use classpath isolation for aux-services in YARN

2018-04-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-7598:

Attachment: YARN-7598.3.patch

> Document how to use classpath isolation for aux-services in YARN
> 
>
> Key: YARN-7598
> URL: https://issues.apache.org/jira/browse/YARN-7598
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
> Attachments: YARN-7598.2.patch, YARN-7598.3.patch, 
> YARN-7598.trunk.1.patch
>
>







[jira] [Commented] (YARN-8095) Allow disable non-exclusive allocation

2018-04-08 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429983#comment-16429983
 ] 

Weiwei Yang commented on YARN-8095:
---

In this case, when an app is submitted to {{root.label1}}, it looks like it 
should preempt resources from {{root.batch}} instead of {{root.longlived}}, 
right? {{root.batch}} uses more resources than guaranteed (50%) and has the 
same priority as {{root.label1}}. Unless there is some bug in the preemption 
logic, is this correct?

> Allow disable non-exclusive allocation
> --
>
> Key: YARN-8095
> URL: https://issues.apache.org/jira/browse/YARN-8095
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.8.3
>Reporter: kyungwan nam
>Priority: Major
>
> We have a 'longlived' queue, which is used for long-lived apps.
> In situations where default Partition resources are not enough, containers 
> for a long-lived app can be allocated to a sharable Partition.
> Those containers can then be easily preempted, and we don't want long-lived 
> apps to be killed abruptly.
> Currently, non-exclusive allocation can happen regardless of whether the 
> queue is accessible to the sharable Partition.
> It would be good if non-exclusive allocation could be disabled at the queue 
> level.
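
For illustration, such a queue-level switch might look like the sketch below;
the property name is hypothetical, not an existing CapacityScheduler setting.

{code}
# Hypothetical property: opt the 'longlived' queue out of non-exclusive
# allocation so its containers are never placed on sharable Partitions.
yarn.scheduler.capacity.root.longlived.non-exclusive-allocation.enabled=false
{code}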






[jira] [Commented] (YARN-8095) Allow disable non-exclusive allocation

2018-04-08 Thread kyungwan nam (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429978#comment-16429978
 ] 

kyungwan nam commented on YARN-8095:


Hi. [~cheersyang]

{quote}
yarn.scheduler.capacity.root.batch.priority=0
yarn.scheduler.capacity.root.batch.capacity=50
yarn.scheduler.capacity.root.batch.maximum-capacity=100

yarn.scheduler.capacity.root.longlived.priority=1
yarn.scheduler.capacity.root.longlived.capacity=50
yarn.scheduler.capacity.root.longlived.maximum-capacity=50
{quote}

Accordingly, my config is as follows:

||Queue||Priority||Capacity||Accessible-Partition||
|root.longlived|1 | 50~50   | default |
|root.batch|0 | 50~100  | default |
|root.label1|   0 | 0   | label1 |


> Allow disable non-exclusive allocation
> --
>
> Key: YARN-8095
> URL: https://issues.apache.org/jira/browse/YARN-8095
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.8.3
>Reporter: kyungwan nam
>Priority: Major
>
> We have a 'longlived' queue, which is used for long-lived apps.
> In situations where default Partition resources are not enough, containers 
> for a long-lived app can be allocated to a sharable Partition.
> Those containers can then be easily preempted, and we don't want long-lived 
> apps to be killed abruptly.
> Currently, non-exclusive allocation can happen regardless of whether the 
> queue is accessible to the sharable Partition.
> It would be good if non-exclusive allocation could be disabled at the queue 
> level.






[jira] [Updated] (YARN-8127) Resource leak when async scheduling is enabled

2018-04-08 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8127:
--
Priority: Critical  (was: Blocker)

> Resource leak when async scheduling is enabled
> --
>
> Key: YARN-8127
> URL: https://issues.apache.org/jira/browse/YARN-8127
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Tao Yang
>Priority: Critical
>
> Brief steps to reproduce
>  # Enable async scheduling, 5 threads
>  # Submit a lot of jobs trying to exhaust cluster resource
>  # After a while, observe that the NM's allocated resources exceed the 
> resources requested by the allocated containers
> It looks like the commit phase does not handle reserved containers 
> synchronously, causing some proposals to be incorrectly accepted; as a 
> result, resources were deducted multiple times for a single container.
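
A deliberately simplified model of the suspected race, for illustration only
(this is not the CapacityScheduler code): several async scheduling threads can
each build a proposal to fulfil the same reserved container from a stale
snapshot, so the commit phase must atomically re-check and clear the
reservation; otherwise resources are deducted once per accepted proposal.

{code:java}
/** Simplified sketch of the race; class, names, and fields are hypothetical. */
class NodeModel {
  private long availableMb = 4096;
  private String reservedContainerId; // at most one outstanding reservation

  synchronized void reserve(String containerId) {
    reservedContainerId = containerId;
  }

  /** Accepts a proposal only if its reservation is still outstanding. */
  synchronized boolean commitReservedProposal(String containerId, long mb) {
    if (!containerId.equals(reservedContainerId) || mb > availableMb) {
      return false; // stale proposal: already fulfilled, or no longer fits
    }
    reservedContainerId = null; // cleared exactly once...
    availableMb -= mb;          // ...so the resource is deducted exactly once
    return true;
  }
}
{code}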






[jira] [Assigned] (YARN-8127) Resource leak when async scheduling is enabled

2018-04-08 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-8127:
-

Assignee: Tao Yang

> Resource leak when async scheduling is enabled
> --
>
> Key: YARN-8127
> URL: https://issues.apache.org/jira/browse/YARN-8127
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Weiwei Yang
>Assignee: Tao Yang
>Priority: Blocker
>
> Brief steps to reproduce
>  # Enable async scheduling, 5 threads
>  # Submit a lot of jobs trying to exhaust cluster resource
>  # After a while, observe that the NM's allocated resources exceed the 
> resources requested by the allocated containers
> It looks like the commit phase does not handle reserved containers 
> synchronously, causing some proposals to be incorrectly accepted; as a 
> result, resources were deducted multiple times for a single container.






[jira] [Created] (YARN-8127) Resource leak when async scheduling is enabled

2018-04-08 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-8127:
-

 Summary: Resource leak when async scheduling is enabled
 Key: YARN-8127
 URL: https://issues.apache.org/jira/browse/YARN-8127
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Weiwei Yang


Brief steps to reproduce
 # Enable async scheduling, 5 threads
 # Submit a lot of jobs trying to exhaust cluster resource
 # After a while, observe that the NM's allocated resources exceed the 
resources requested by the allocated containers

It looks like the commit phase does not handle reserved containers 
synchronously, causing some proposals to be incorrectly accepted; as a result, 
resources were deducted multiple times for a single container.






[jira] [Commented] (YARN-8123) Skip compiling old hamlet package when the Java version is 10 or upper

2018-04-08 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429930#comment-16429930
 ] 

Takanobu Asanuma commented on YARN-8123:


Thanks for filing this issue [~ajisakaa], and thanks for your thoughts 
[~dineshchitlangia]. I think we should also include Java 9 at this stage.

> Skip compiling old hamlet package when the Java version is 10 or upper
> --
>
> Key: YARN-8123
> URL: https://issues.apache.org/jira/browse/YARN-8123
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
> Environment: Java 10 or upper
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> HADOOP-11423 skipped compiling the old hamlet package when the Java version 
> is 9; however, it is not skipped with Java 10+. We need to fix that.
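
For reference, Maven profile activation accepts version ranges, so a fix along
these lines could cover Java 9 and everything newer. This is a sketch, assuming
the old hamlet sources are guarded by such a profile; the profile id and
surrounding build configuration are placeholders.

{code:xml}
<!-- Sketch: a bare <jdk>9</jdk> matches only Java 9; the range [9,)
     matches Java 9, 10, 11, ... -->
<profile>
  <id>skip-old-hamlet</id>
  <activation>
    <jdk>[9,)</jdk>
  </activation>
  <!-- build configuration that excludes the old hamlet package -->
</profile>
{code}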






[jira] [Commented] (YARN-7892) Revisit NodeAttribute class structure

2018-04-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429924#comment-16429924
 ] 

Naganarasimha G R commented on YARN-7892:
-

[~sunilg] & [~bibinchundatt], 

As discussed in the meeting, I have updated the JIRA name and description with 
the issues pointed out by [~bibinchundatt] & [~cheersyang]. I have also 
attached a WIP patch to get early feedback; for the complete patch, I am 
planning to revisit the manager API and the data structures in the manager. 

> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch, YARN-7892-YARN-3409.003.WIP.patch
>
>
> In the existing structure, we kept the type and value along with the 
> attribute, which confuses users of the APIs: it is not clear what needs to 
> be sent for the type and value when fetching the mappings for node(s).
> In addition, equals does not make sense when we compare only the prefix and 
> name, whereas the values might differ.
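
A sketch of the identity semantics under discussion, using a simplified
stand-in class rather than the real NodeAttribute: equality and hashCode cover
only the prefix and name, so two instances of the same attribute with different
values compare equal, which is what a key/value split would make explicit.

{code:java}
import java.util.Objects;

/** Simplified stand-in, not the real org.apache.hadoop.yarn records class. */
final class SimpleNodeAttribute {
  private final String prefix; // e.g. "rm.yarn.io"
  private final String name;   // e.g. "hostname"
  private final String type;   // payload, ignored for identity
  private final String value;  // payload, ignored for identity

  SimpleNodeAttribute(String prefix, String name, String type, String value) {
    this.prefix = prefix;
    this.name = name;
    this.type = type;
    this.value = value;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof SimpleNodeAttribute)) {
      return false;
    }
    SimpleNodeAttribute other = (SimpleNodeAttribute) o;
    // Identity is the attribute key: prefix + name only.
    return prefix.equals(other.prefix) && name.equals(other.name);
  }

  @Override
  public int hashCode() {
    return Objects.hash(prefix, name); // must match the fields used in equals
  }
}
{code}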






[jira] [Updated] (YARN-7892) Revisit NodeAttribute class structure

2018-04-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7892:

Attachment: YARN-7892-YARN-3409.003.WIP.patch

> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch, YARN-7892-YARN-3409.003.WIP.patch
>
>
> In the existing structure, we kept the type and value along with the 
> attribute, which confuses users of the APIs: it is not clear what needs to 
> be sent for the type and value when fetching the mappings for node(s).
> In addition, equals does not make sense when we compare only the prefix and 
> name, whereas the values might differ.






[jira] [Updated] (YARN-7892) Revisit NodeAttribute class structure

2018-04-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7892:

Description: 
In the existing structure, we kept the type and value along with the 
attribute, which confuses users of the APIs: it is not clear what needs to be 
sent for the type and value when fetching the mappings for node(s).

In addition, equals does not make sense when we compare only the prefix and 
name, whereas the values might differ.

> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch
>
>
> In the existing structure, we kept the type and value along with the 
> attribute, which confuses users of the APIs: it is not clear what needs to 
> be sent for the type and value when fetching the mappings for node(s).
> In addition, equals does not make sense when we compare only the prefix and 
> name, whereas the values might differ.






[jira] [Updated] (YARN-7892) Revisit NodeAttribute class structure

2018-04-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-7892:

Summary: Revisit NodeAttribute class structure  (was: NodeAttributePBImpl 
does not implement hashcode and Equals properly)

> Revisit NodeAttribute class structure
> -
>
> Key: YARN-7892
> URL: https://issues.apache.org/jira/browse/YARN-7892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>Priority: Major
> Attachments: YARN-7892-YARN-3409.001.patch, 
> YARN-7892-YARN-3409.002.patch
>
>







[jira] [Resolved] (YARN-1489) [Umbrella] Work-preserving ApplicationMaster restart

2018-04-08 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-1489.
---
Resolution: Fixed
  Assignee: (was: Vinod Kumar Vavilapalli)

Resolved this very old feature as fixed. Keeping it unassigned given the 
multiple contributors. No fix-version, since the tasks (perhaps?) spanned 
multiple releases.

> [Umbrella] Work-preserving ApplicationMaster restart
> 
>
> Key: YARN-1489
> URL: https://issues.apache.org/jira/browse/YARN-1489
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Priority: Major
> Attachments: Work preserving AM restart.pdf
>
>
> Today if AMs go down,
>  - RM kills all the containers of that ApplicationAttempt
>  - New ApplicationAttempt doesn't know where the previous containers are 
> running
>  - Old running containers don't know where the new AM is running.
> We need to fix this to enable work-preserving AM restart. The latter two can 
> potentially be done at the app level, but it is good to have a common 
> solution for all apps wherever possible.






[jira] [Resolved] (YARN-5193) For long running services, aggregate logs when a container completes instead of when the app completes

2018-04-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong resolved YARN-5193.
-
Resolution: Won't Fix

> For long running services, aggregate logs when a container completes instead 
> of when the app completes
> --
>
> Key: YARN-5193
> URL: https://issues.apache.org/jira/browse/YARN-5193
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Priority: Major
>
> For a long-running service, containers will typically not complete very 
> often. However, when a container completes, it would be useful to aggregate 
> its logs right then, instead of waiting for the app to complete.
> This will allow the command-line log tool to look up containers for an app 
> from the log file index itself, instead of having to go and talk to YARN. 
> Talking to YARN really only works if ATS is enabled, and YARN is configured 
> to publish container information to ATS (that may not always be the case, 
> since this can overload ATS quite fast).
> There are added benefits, like cleaning out local disk space early instead 
> of waiting till the app completes. (There's probably a separate JIRA 
> somewhere about cleanup of containers for long-running services anyway.)
> cc [~vinodkv], [~xgong]






[jira] [Commented] (YARN-5193) For long running services, aggregate logs when a container completes instead of when the app completes

2018-04-08 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429882#comment-16429882
 ] 

Xuan Gong commented on YARN-5193:
-

Using the new log format can solve this issue. 

> For long running services, aggregate logs when a container completes instead 
> of when the app completes
> --
>
> Key: YARN-5193
> URL: https://issues.apache.org/jira/browse/YARN-5193
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Priority: Major
>
> For a long-running service, containers will typically not complete very 
> often. However, when a container completes, it would be useful to aggregate 
> its logs right then, instead of waiting for the app to complete.
> This will allow the command-line log tool to look up containers for an app 
> from the log file index itself, instead of having to go and talk to YARN. 
> Talking to YARN really only works if ATS is enabled, and YARN is configured 
> to publish container information to ATS (that may not always be the case, 
> since this can overload ATS quite fast).
> There are added benefits, like cleaning out local disk space early instead 
> of waiting till the app completes. (There's probably a separate JIRA 
> somewhere about cleanup of containers for long-running services anyway.)
> cc [~vinodkv], [~xgong]






[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429816#comment-16429816
 ] 

genericqa commented on YARN-8100:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
23s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
36s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
51s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3409 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
47s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 20s{color} | {color:orange} root: The patch generated 20 new + 471 unchanged 
- 3 fixed = 491 total (was 474) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
15s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m  
4s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 53s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 

[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429813#comment-16429813
 ] 

genericqa commented on YARN-8100:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 3s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
44s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
32s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3409 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
44s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 25m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 30s{color} | {color:orange} root: The patch generated 20 new + 470 unchanged 
- 3 fixed = 490 total (was 473) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 21s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
7s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m  
8s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m 
11s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 24s{color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 

[jira] [Commented] (YARN-7984) Delete registry entries from ZK on ServiceClient stop and clean up stop/destroy behavior

2018-04-08 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429797#comment-16429797
 ] 

Billie Rinaldi commented on YARN-7984:
--

[~eyang], could you take another look at this patch when you get a chance?

> Delete registry entries from ZK on ServiceClient stop and clean up 
> stop/destroy behavior
> 
>
> Key: YARN-7984
> URL: https://issues.apache.org/jira/browse/YARN-7984
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7984.1.patch, YARN-7984.2.patch
>
>
> The service records written to the registry are removed by ServiceClient on a 
> destroy call, but not on a stop call. The service AM does have some code to 
> clean up the registry entries when component instances are stopped, but if 
> the AM is killed before it has a chance to perform the cleanup, these entries 
> will be left in ZooKeeper. It would be better to clean these up in the stop 
> call, so that RegistryDNS does not provide lookups for containers that don't 
> exist.
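
A minimal sketch of the cleanup being proposed, using the Hadoop registry
client API. The user, service-class, and instance names are hypothetical, and
deleting recursively on stop is an assumption about the patch rather than its
confirmed contents.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.registry.client.api.RegistryOperations;
import org.apache.hadoop.registry.client.api.RegistryOperationsFactory;
import org.apache.hadoop.registry.client.binding.RegistryUtils;

public class RegistryStopCleanup {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The factory returns an initialized registry client; start it to use it.
    RegistryOperations registry =
        RegistryOperationsFactory.createInstance(conf);
    registry.start();
    try {
      // Standard registry layout: /users/<user>/services/<class>/<instance>
      String path = RegistryUtils.servicePath(
          "hadoopuser", "yarn-service", "sleeper-service");
      // Recursive delete removes the service record plus any child
      // component-instance records the AM did not get to clean up.
      registry.delete(path, true);
    } finally {
      registry.stop();
    }
  }
}
{code}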






[jira] [Assigned] (YARN-8017) Validate the application ID has been persisted to the service definition prior to use

2018-04-08 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-8017:


Assignee: Billie Rinaldi

> Validate the application ID has been persisted to the service definition 
> prior to use
> -
>
> Key: YARN-8017
> URL: https://issues.apache.org/jira/browse/YARN-8017
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Critical
>
> The service definition is persisted to disk prior to launching the 
> application. Once the application is launched, the service definition is 
> updated to include the application ID. If submit fails, the application ID is 
> never added to the previously persisted service definition.
> When this occurs, attempting to stop or destroy the application results in a 
> NPE while trying to get the application ID from the service definition, 
> making it impossible to clean up.
> {code:java}
> 2018-03-02 18:28:05,512 INFO 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil: Loading service 
> definition from 
> hdfs://y7001.yns.hortonworks.com:8020/user/hadoopuser/.yarn/services/skumpfcents/skumpfcents.json
> 2018-03-02 18:28:05,525 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.api.records.ApplicationId.fromString(ApplicationId.java:111)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.getAppId(ServiceClient.java:1106)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionStop(ServiceClient.java:363)
>   at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:251)
>   at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422){code}
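
A sketch of the guard being requested. The helper below is hypothetical and
stands in for wherever ServiceClient reads the persisted ID; only
ApplicationId.fromString is the real call from the stack trace above.

{code:java}
import org.apache.hadoop.yarn.api.records.ApplicationId;

/** Hypothetical helper illustrating the proposed validation. */
final class AppIdValidator {
  static ApplicationId requireAppId(String persistedId, String serviceName) {
    if (persistedId == null || persistedId.isEmpty()) {
      // Fail with a clear message instead of the NPE in the stack trace:
      // submit may have died before the ID was written back to the definition.
      throw new IllegalStateException("Service " + serviceName
          + " has no application ID persisted in its definition");
    }
    // Only reached with a non-null ID, so fromString cannot NPE here.
    return ApplicationId.fromString(persistedId);
  }
}
{code}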






[jira] [Commented] (YARN-8017) Validate the application ID has been persisted to the service definition prior to use

2018-04-08 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429796#comment-16429796
 ] 

Billie Rinaldi commented on YARN-8017:
--

I intend to fix this issue in the clean up of stop/destroy behavior for 
YARN-7984.

> Validate the application ID has been persisted to the service definition 
> prior to use
> -
>
> Key: YARN-8017
> URL: https://issues.apache.org/jira/browse/YARN-8017
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Billie Rinaldi
>Priority: Critical
>
> The service definition is persisted to disk prior to launching the 
> application. Once the application is launched, the service definition is 
> updated to include the application ID. If submit fails, the application ID is 
> never added to the previously persisted service definition.
> When this occurs, attempting to stop or destroy the application results in a 
> NPE while trying to get the application ID from the service definition, 
> making it impossible to clean up.
> {code:java}
> 2018-03-02 18:28:05,512 INFO 
> org.apache.hadoop.yarn.service.utils.ServiceApiUtil: Loading service 
> definition from 
> hdfs://y7001.yns.hortonworks.com:8020/user/hadoopuser/.yarn/services/skumpfcents/skumpfcents.json
> 2018-03-02 18:28:05,525 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.api.records.ApplicationId.fromString(ApplicationId.java:111)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.getAppId(ServiceClient.java:1106)
>   at 
> org.apache.hadoop.yarn.service.client.ServiceClient.actionStop(ServiceClient.java:363)
>   at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:251)
>   at 
> org.apache.hadoop.yarn.service.webapp.ApiServer$4.run(ApiServer.java:243)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422){code}






[jira] [Updated] (YARN-7984) Delete registry entries from ZK on ServiceClient stop and clean up stop/destroy behavior

2018-04-08 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7984:
-
Summary: Delete registry entries from ZK on ServiceClient stop and clean up 
stop/destroy behavior  (was: Delete registry entries from ZK on ServiceClient 
stop and clean up stop/destry behavior)

> Delete registry entries from ZK on ServiceClient stop and clean up 
> stop/destroy behavior
> 
>
> Key: YARN-7984
> URL: https://issues.apache.org/jira/browse/YARN-7984
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7984.1.patch, YARN-7984.2.patch
>
>
> The service records written to the registry are removed by ServiceClient on a 
> destroy call, but not on a stop call. The service AM does have some code to 
> clean up the registry entries when component instances are stopped, but if 
> the AM is killed before it has a chance to perform the cleanup, these entries 
> will be left in ZooKeeper. It would be better to clean these up in the stop 
> call, so that RegistryDNS does not provide lookups for containers that don't 
> exist.






[jira] [Updated] (YARN-7984) Delete registry entries from ZK on ServiceClient stop and clean up stop/destry behavior

2018-04-08 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-7984:
-
Summary: Delete registry entries from ZK on ServiceClient stop and clean up 
stop/destry behavior  (was: Delete registry entries from ZK on ServiceClient 
stop)

> Delete registry entries from ZK on ServiceClient stop and clean up 
> stop/destry behavior
> ---
>
> Key: YARN-7984
> URL: https://issues.apache.org/jira/browse/YARN-7984
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7984.1.patch, YARN-7984.2.patch
>
>
> The service records written to the registry are removed by ServiceClient on a 
> destroy call, but not on a stop call. The service AM does have some code to 
> clean up the registry entries when component instances are stopped, but if 
> the AM is killed before it has a chance to perform the cleanup, these entries 
> will be left in ZooKeeper. It would be better to clean these up in the stop 
> call, so that RegistryDNS does not provide lookups for containers that don't 
> exist.






[jira] [Commented] (YARN-7996) Allow user supplied Docker client configurations with YARN native services

2018-04-08 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429794#comment-16429794
 ] 

Billie Rinaldi commented on YARN-7996:
--

I tried out this patch and it works as described. The only suggestion I have 
is to improve the description of the docker_client_config field in the yaml 
and site doc, to make it clear that it's a URI for a file containing the 
Docker client configuration.

> Allow user supplied Docker client configurations with YARN native services
> --
>
> Key: YARN-7996
> URL: https://issues.apache.org/jira/browse/YARN-7996
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Attachments: YARN-7996.001.patch, YARN-7996.002.patch, 
> YARN-7996.003.patch, YARN-7996.004.patch
>
>
> YARN-5428 added support to distributed shell for supplying a Docker client 
> configuration at application submission time. The auth tokens within the 
> client configuration are then used to pull images from private Docker 
> repositories/registries. Add the same support to the YARN Native Services 
> framework.






[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429711#comment-16429711
 ] 

Bibin A Chundatt commented on YARN-8100:


Updated the same 005.patch again.

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}






[jira] [Updated] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8100:
---
Attachment: (was: YARN-8100-YARN-3409.005.patch)

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}






[jira] [Updated] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8100:
---
Attachment: YARN-8100-YARN-3409.005.patch

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch, 
> YARN-8100-YARN-3409.005.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}






[jira] [Commented] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16429710#comment-16429710
 ] 

Bibin A Chundatt commented on YARN-8100:


Thank you [~Naganarasimha] for the review.

# Removed the getAttributesToNodes() API.
# Handled all other review comments too.

Please have a look at the latest patch.

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}






[jira] [Updated] (YARN-8100) Support API interface to query cluster attributes and attribute to nodes

2018-04-08 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8100:
---
Attachment: YARN-8100-YARN-3409.005.patch

> Support API interface to query cluster attributes and attribute to nodes
> 
>
> Key: YARN-8100
> URL: https://issues.apache.org/jira/browse/YARN-8100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8100-YARN-3409.001.patch, 
> YARN-8100-YARN-3409.002.patch, YARN-8100-YARN-3409.003.patch, 
> YARN-8100-YARN-3409.004.patch, YARN-8100-YARN-3409.005.patch
>
>
> This JIRA is to add APIs to query cluster node attributes and 
> attribute-to-node mappings.
> *YarnClient*
> {code}
> getAttributesToNodes()
> getAttributesToNodes(Set<NodeAttribute> attributes)
> getClusterAttributes()
> {code}


