[jira] [Updated] (YARN-5689) Update native services REST API to use agentless docker provider

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5689:

Fix Version/s: yarn-native-services

> Update native services REST API to use agentless docker provider
> 
>
> Key: YARN-5689
> URL: https://issues.apache.org/jira/browse/YARN-5689
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5689-yarn-native-services.001.patch
>
>
> The initial version of the native services REST API uses the agent provider. 
> It should be converted to use the new docker provider instead.






[jira] [Updated] (YARN-5689) Update native services REST API to use agentless docker provider

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5689:

Attachment: YARN-5689-yarn-native-services.001.patch

> Update native services REST API to use agentless docker provider
> 
>
> Key: YARN-5689
> URL: https://issues.apache.org/jira/browse/YARN-5689
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
> Attachments: YARN-5689-yarn-native-services.001.patch
>
>
> The initial version of the native services REST API uses the agent provider. 
> It should be converted to use the new docker provider instead.






[jira] [Assigned] (YARN-5689) Update native services REST API to use agentless docker provider

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-5689:
---

Assignee: Gour Saha  (was: Billie Rinaldi)

> Update native services REST API to use agentless docker provider
> 
>
> Key: YARN-5689
> URL: https://issues.apache.org/jira/browse/YARN-5689
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>
> The initial version of the native services REST API uses the agent provider. 
> It should be converted to use the new docker provider instead.






[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570899#comment-15570899
 ] 

Sangjin Lee commented on YARN-5699:
---

I'd like to know what would be made much easier by having the created time as 
entity info that is inconvenient or difficult with the explicit created time 
attribute. I get that it would be more symmetric with the finished time, but 
that alone is not a *strong* argument for replicating this info.

Since we're talking about container entities, these are probably the most 
numerous entities in storage. If we can avoid adding a column *per object* 
unnecessarily, I think that would be a good thing.
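
For reference, a minimal sketch of the two access paths being compared, 
assuming the ATSv2 TimelineEntity API; the {{containerEntity}} variable is only 
illustrative and the info key name follows the one proposed in this thread:
{code}
// Explicit attribute: created time is already a first-class field on the entity.
Long createdTime = containerEntity.getCreatedTime();

// Replicated info entry: the same value stored again in the per-entity info map,
// which is the duplication being questioned here.
Object createdFromInfo = containerEntity.getInfo().get("YARN_CONTAINER_CREATED_TIME");
{code}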

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at the entity info (top-level) and some at the event info. 
> For containers, some of the event info should be published at the container 
> info level, for example: container exit status, container state, createdTime, 
> and finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field.






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570893#comment-15570893
 ] 

Sangjin Lee commented on YARN-5561:
---

bq. A couple more doubts just to be in sync: Is the translation layer within 
the reader service or separate? What is the format of the report object? Is 
this report object generic or only for YARN entities?

I was thinking of something like utility classes that can create and return 
specific report types. For example,
{code}
ApplicationAttemptReport getApplicationAttemptReport(TimelineEntity 
appAttemptEntity);
ContainerReport getContainerReport(TimelineEntity containerEntity);
{code}

These classes can be (and probably would need to be) in the yarn common module 
so any REST client can invoke them to get a translated object back.
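
As a rough illustration only (the class, method, and info key names below are 
hypothetical, and a real implementation would presumably build the existing 
YARN report records rather than this placeholder holder class):
{code}
/** Illustrative translation utility; not the actual patch. */
public final class TimelineEntityConverters {

  /** Plain holder used only to keep the sketch self-contained. */
  public static final class ContainerSummary {
    public final String containerId;
    public final Long createdTime;
    public final Object exitStatus;

    ContainerSummary(String containerId, Long createdTime, Object exitStatus) {
      this.containerId = containerId;
      this.createdTime = createdTime;
      this.exitStatus = exitStatus;
    }
  }

  /** Translates a container TimelineEntity into a summary a REST client can consume. */
  public static ContainerSummary toContainerSummary(TimelineEntity containerEntity) {
    // The info key name is an assumption for illustration.
    return new ContainerSummary(
        containerEntity.getId(),
        containerEntity.getCreatedTime(),
        containerEntity.getInfo().get("YARN_CONTAINER_EXIT_STATUS"));
  }

  private TimelineEntityConverters() {
  }
}
{code}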

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Updated] (YARN-5728) TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization timeout

2016-10-12 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5728:

Attachment: YARN-5728.01.patch

Very simple patch to extend the timeout to 5 minutes.
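
Presumably this amounts to raising the JUnit timeout on the test method; a 
sketch of the shape of the change (the test body is elided and the old value is 
not restated here):
{code}
// Allow up to 5 minutes (300,000 ms) for the mini-cluster heartbeat retries to settle.
@Test(timeout = 300000)
public void testUpdateNodeUtilization() throws Exception {
  // ... existing test body unchanged ...
}
{code}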

> TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization timeout
> 
>
> Key: YARN-5728
> URL: https://issues.apache.org/jira/browse/YARN-5728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
> Attachments: YARN-5728.01.patch
>
>
> TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization is failing by 
> timeout.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/192/testReport/junit/org.apache.hadoop.yarn.server/TestMiniYarnClusterNodeUtilization/testUpdateNodeUtilization/
> {noformat}
> java.lang.Exception: test timed out after 6 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
>   at com.sun.proxy.$Proxy85.nodeHeartbeat(Unknown Source)
>   at 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:113)
> {noformat}






[jira] [Created] (YARN-5728) TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization timeout

2016-10-12 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created YARN-5728:
---

 Summary: 
TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization timeout
 Key: YARN-5728
 URL: https://issues.apache.org/jira/browse/YARN-5728
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Akira Ajisaka


TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization is failing by 
timeout.
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/192/testReport/junit/org.apache.hadoop.yarn.server/TestMiniYarnClusterNodeUtilization/testUpdateNodeUtilization/
{noformat}
java.lang.Exception: test timed out after 6 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy85.nodeHeartbeat(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:113)
{noformat}






[jira] [Updated] (YARN-5689) Update native services REST API to use agentless docker provider

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5689:

Summary: Update native services REST API to use agentless docker provider  
(was: Convert native services REST API to use agentless docker provider)

> Update native services REST API to use agentless docker provider
> 
>
> Key: YARN-5689
> URL: https://issues.apache.org/jira/browse/YARN-5689
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>
> The initial version of the native services REST API uses the agent provider. 
> It should be converted to use the new docker provider instead.






[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570865#comment-15570865
 ] 

Hadoop QA commented on YARN-5715:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 5s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
41s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 9s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
|  |  new org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity() 
invokes inefficient new Long(long) constructor; use Long.valueOf(long) instead  
At TimelineEntity.java:Long(long) constructor; use Long.valueOf(long) instead  
At TimelineEntity.java:[line 149] |
|  |  new 
org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity(TimelineEntity)
 invokes inefficient new Long(long) constructor; use Long.valueOf(long) instead 
 At TimelineEntity.java:Long(long) constructor; use Long.valueOf(long) instead  
At TimelineEntity.java:[line 149] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832920/YARN-5715-YARN-5355.02.patch
 |
| JIRA Issue | YARN-5715 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 56585a87f27c 

[jira] [Updated] (YARN-5708) Implement APIs to get resource profiles from the RM

2016-10-12 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5708:

Attachment: YARN-5708-YARN-3926.003.patch

Thanks for the reviews [~leftnoteasy] and [~asuresh]!

{quote}
One thing I did notice when skimming over the patch is that we should probably 
have a more consistent way of implementing hashcode/equals and toString in our 
PB classes.
We have a bunch of places where hashcode/equals/toString are implemented in the 
abstract class (e.g. ResourceRequest) and places where they are defined in the 
subclass (e.g. ResourceLocalizationRequestPBImpl).
I tend to prefer the latter since it reverts to the proto/builders 
implementation, which is (or can be) auto-generated and something I would trust 
to be correct. The former is hand-coded and error-prone.
{quote}
Agreed. For the PBImpl classes, I've overridden hashCode to use the proto 
hashCode - let me know if I missed any.
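
For anyone following along, the pattern being described is roughly the 
following (a sketch only; the class name in the cast is a placeholder, not one 
of the records touched by this patch):
{code}
// In a *PBImpl subclass: delegate to the generated proto instead of hand-coding.
@Override
public int hashCode() {
  return getProto().hashCode();
}

@Override
public boolean equals(Object other) {
  if (other == null || other.getClass() != this.getClass()) {
    return false;
  }
  // getProto() returns the underlying generated protobuf message.
  return this.getProto().equals(((ExamplePBImpl) other).getProto());
}
{code}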

bq.  Better to mark unstable for 
GetAllResourceProfilesRequest/GetAllResourceProfilesResponse/GetResourceProfileRequest/GetResourceProfileResponse?
Fixed.

bq. Mentioned by Arun, it might be better to add some javadocs to 
ProfileCapability. For example, I'm a little confused why the field 
getProfileCapabilityOverride has a suffix "Override".
My apologies for not providing this upfront. I've added documentation for most 
of the classes, let me know if I missed any.

bq. ProfileCapability#getProfile, update to getProfileName?
Fixed.

bq. Trivial comment: ClientRMService#getResourceProfiles/getResourceProfile can 
be merged.
Fixed - refactored the code to use common functionality.

> Implement APIs to get resource profiles from the RM
> ---
>
> Key: YARN-5708
> URL: https://issues.apache.org/jira/browse/YARN-5708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5708-YARN-3926.001.patch, 
> YARN-5708-YARN-3926.002.patch, YARN-5708-YARN-3926.003.patch
>
>
> Implement a set of APIs to get the available resource profiles from the RM.






[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570810#comment-15570810
 ] 

Hadoop QA commented on YARN-5725:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 17 unchanged - 1 fixed = 18 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 53s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833036/YARN-5725.001.patch |
| JIRA Issue | YARN-5725 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9d52f490d7b 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 12d739a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13369/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13369/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13369/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5725:
-
Attachment: YARN-5725.001.patch

Merged with YARN-5726.

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5725.000.patch, YARN-5725.001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning, but it prevents the container monitor from continuing
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.






[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570738#comment-15570738
 ] 

Miklos Szegedi commented on YARN-5725:
--

On the checkstyle comment: @Override:5: Method length is 229 lines (max allowed 
is 150).
The method was already long. I am not convinced that we need to refactor it for 
such a small change.

TestQueuingContainerManager passes all tests locally.

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5725.000.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning, but it prevents the container monitor from continuing
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.






[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570727#comment-15570727
 ] 

Miklos Szegedi commented on YARN-5725:
--

I will use YARN-5725 to track YARN-5726 as well. They belong to different 
features. That is why I separated YARN-5725 and YARN-5726.
The source of the null pointer is a mock object that only returns an empty 
container map. This is just a test issue AFAIK.
{code}
// The mocked NM Context returns an empty container map, so the monitor cannot
// find the container it is tracking and hits the NPE.
context = Mockito.mock(Context.class);
Mockito.doReturn(new ConcurrentSkipListMap())
    .when(context).getContainers();
{code}
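
If the mock itself were to be made representative instead, one option (a sketch 
only, assuming the test already has a ContainerId and a mocked Container at 
hand; those variable names are hypothetical) would be to return a map that 
actually contains the monitored container:
{code}
ConcurrentMap<ContainerId, Container> containers = new ConcurrentSkipListMap<>();
containers.put(containerId, mockContainer); // hypothetical objects created elsewhere in the test
Mockito.doReturn(containers).when(context).getContainers();
{code}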

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5725.000.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning, but it prevents the container monitor from continuing
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.






[jira] [Commented] (YARN-5726) Test exception in TestContainersMonitorResourceChange.testContainersResourceChange when trying to get NMTimelinePublisher

2016-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570720#comment-15570720
 ] 

Miklos Szegedi commented on YARN-5726:
--

All right, I will use YARN-5725 to track both issues. They belong to different 
features. That is why I separated YARN-5725 and YARN-5726.
The source of the null pointer is a mock object that only returns an empty 
container map. This is just a test issue AFAIK.
{code}
// The mocked NM Context returns an empty container map, so the monitor cannot
// find the container it is tracking and hits the NPE.
context = Mockito.mock(Context.class);
Mockito.doReturn(new ConcurrentSkipListMap())
    .when(context).getContainers();
{code}

> Test exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when trying 
> to get NMTimelinePublisher
> -
>
> Key: YARN-5726
> URL: https://issues.apache.org/jira/browse/YARN-5726
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN-5726.000.patch
>
>
> 2016-10-12 14:38:39,970 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:587)






[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570708#comment-15570708
 ] 

Rohith Sharma K S commented on YARN-5699:
-

bq. So I'm not sure whether we need to store it again as part of the entity 
info.
I agree with Li Lu. I also don't have a strong opinion on adding a duplicate 
created time under a different info key, since it is already available at the 
entity level. Although the created time is the same as the entity createdTime, 
publishing it under info keys such as YARN_CONTAINER_CREATED_TIME and 
YARN_CONTAINER_FINISHED_TIME makes it more user-facing and easier to understand.

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at the entity info (top-level) and some at the event info. 
> For containers, some of the event info should be published at the container 
> info level, for example: container exit status, container state, createdTime, 
> and finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field.






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570688#comment-15570688
 ] 

Rohith Sharma K S commented on YARN-5561:
-

bq. Could not get the intention behind pulling it out to a separate daemon 
though. Can you elaborate on that ?
bq. could you elaborate more on the concrete use cases for the need of a 
separate daemon for now?
I meant to say that the initial idea is to combine the YARN entity reader with 
the TimelineReaderWebService. In case folks do not agree, the next proposal is 
to pull it out as a separate daemon. I am also not inclined to pull it out of 
the timelinereader service, but if other folks have any concern about combining 
it with the timelinereader, then it is an option.

bq. As we briefly discussed during the call the other day, what we need is a 
translation layer that can create a Report object out of the timeline entity. 
If we implement such a translation layer, would it satisfy this? 
Yes. A couple more doubts just to be in sync: Is the translation layer within 
the reader service or separate? What is the format of the report object? Is 
this report object generic or only for YARN entities?

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Commented] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-10-12 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570652#comment-15570652
 ] 

Tao Jie commented on YARN-5720:
---

Attached pictures of the generated HTML, which are easier to review.

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720.001.patch, YarnCommands.png, nodeLabel.png
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Updated] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-10-12 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5720:
--
Attachment: YarnCommands.png
nodeLabel.png

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720.001.patch, YarnCommands.png, nodeLabel.png
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Updated] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-10-12 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5720:
--
Attachment: YarnCommands.html
NodeLabel.html
YARN-5720.001.patch

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720.001.patch
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Updated] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-10-12 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5720:
--
Attachment: (was: YARN-5720.001.patch)

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720.001.patch
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Updated] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-10-12 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5720:
--
Attachment: (was: YarnCommands.html)

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720.001.patch
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Updated] (YARN-5720) Update document for "rmadmin -replaceLabelOnNode"

2016-10-12 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5720:
--
Attachment: (was: NodeLabel.html)

> Update document for "rmadmin -replaceLabelOnNode"
> -
>
> Key: YARN-5720
> URL: https://issues.apache.org/jira/browse/YARN-5720
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-5720.001.patch
>
>
> As mentioned in YARN-4855, the document should be updated since the commands 
> have changed.






[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-10-12 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570580#comment-15570580
 ] 

Tao Jie commented on YARN-5697:
---

Thank you [~Naganarasimha],
I tried a more ideal logic in an earlier patch, but it failed in the test case 
TestRMAdminCLI#directlyAccessNodeLabelStore:
{code}
// change the sequence of "-directlyAccessNodeLabelStore" and labels,
// should not matter
args = new String[] { "-addToClusterNodeLabels",
    "-directlyAccessNodeLabelStore", "x,y" };
assertEquals(0, rmAdminCLI.run(args));
assertTrue(dummyNodeLabelsManager.getClusterNodeLabelNames().containsAll(
    ImmutableSet.of("x", "y")));
{code}
It seems that we currently don't care about the position of 
{{-directlyAccessNodeLabelStore}} on the command line.
Although {{-directlyAccessNodeLabelStore}} is marked as deprecated, this option 
still leads to a different code path:
{code}
if (directlyAccessNodeLabelStore) {
  getNodeLabelManagerInstance(getConf()).replaceLabelsOnNode(map);
} else {
  ResourceManagerAdministrationProtocol adminProtocol =
      createAdminProtocol();
  ReplaceLabelsOnNodeRequest request =
      ReplaceLabelsOnNodeRequest.newInstance(map);
  request.setFailOnUnknownNodes(failOnUnknownNodes);
  adminProtocol.replaceLabelsOnNode(request);
}
{code}
Should we just remove the logic around {{-directlyAccessNodeLabelStore}} in this 
patch?
To make it clear:
1. We should restrict the command-line format ({{rmadmin -addToClusterNodeLabels 
-directlyAccessNodeLabelStore x,y}} will no longer be OK; also {{rmadmin 
-replaceLabelsOnNode -failOnUnknownNodes node1=label1}} should become {{rmadmin 
-replaceLabelsOnNode node1=label1 -failOnUnknownNodes}}).
2. We should remove the code for {{-directlyAccessNodeLabelStore}} in this patch.
3. We should update the documentation and remove {{-directlyAccessNodeLabelStore}}.
Agree?
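
For reference, a minimal sketch of name-based option parsing with Apache 
Commons CLI (the parser behind this proposal). Flags are recognized by name, 
but an option's value must directly follow it, which is why the stricter 
command-line format in point 1 would be needed; the wiring below is 
illustrative only, not the actual RMAdminCLI change:
{code}
Options opts = new Options();
opts.addOption("addToClusterNodeLabels", true, "comma-separated labels to add");
opts.addOption("directlyAccessNodeLabelStore", false, "deprecated: write to the label store directly");
opts.addOption("failOnUnknownNodes", false, "fail if a node is unknown");

// Options are matched by name; an option's value is the token that follows it.
CommandLine cli = new GnuParser().parse(opts, args);
if (cli.hasOption("addToClusterNodeLabels")) {
  String labels = cli.getOptionValue("addToClusterNodeLabels");
  boolean direct = cli.hasOption("directlyAccessNodeLabelStore");
  // ... dispatch to the appropriate code path ...
}
{code}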
 

> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Fix For: 2.8.0
>
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.






[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570498#comment-15570498
 ] 

Sangjin Lee commented on YARN-5667:
---

Took a closer look at the patch (mostly at the pom changes).

I prefer that we are 100% accurate in terms of declaring all direct references 
(classes referring directly to classes in a dependency) as dependencies. I ran 
mvn dependency:analyze, and ideally there should be no "used undeclared" 
dependencies; that's just good hygiene. I do see quite a few used undeclared 
dependencies in the timelineservice, timelineservice-hbase and 
timelineservice-hbase-tests modules. Could you please look into them and 
restore them accurately?

In some more detail,
(timelineservice)
- yarn-api should not be removed (the entity API is all here)
- yarn-server-common should not be removed
- yarn-common should not be removed (e.g. YarnRPC)
- jersey-core should not be removed (javax.ws.rs package)

(timelineservice-hbase)
- should add the license
- yarn-api should be added (the entity API is all here)
- yarn-server-applicationhistoryserver should be added (GenericObjectMapper)
- hadoop-common should be added
- hadoop-annotations should be added
- hadoop-yarn-common should be added

(timelineservice-hbase-tests)
- similarly, we should not remove the dependencies here: they were carefully 
analyzed and added for direct dependencies
- also, we still need to add timelineservice (there is a dependency on it)


> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.






[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-10-12 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570493#comment-15570493
 ] 

Subru Krishnan commented on YARN-5325:
--

Thanks [~curino] for the updated patch. +1 from my side pending fix for 
checkstyle warnings.

A couple of nits:
  * I think it'll help to have a simple private method to assert the expected 
_numContainers_ for a _subCluster_ in {{TestLocalityMulticastAMRMProxyPolicy}}.
  * Thanks for the code comments in the tests; I found them useful. Can you 
please add them for *testSplitBasedOnHeadroomAndWeights* as well?
  * If possible can you update to use slf4j for logging throughout.
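
On the last point, the usual slf4j pattern would be roughly the following (a 
sketch with an illustrative log statement, not a line from the actual patch):
{code}
private static final Logger LOG =
    LoggerFactory.getLogger(LocalityMulticastAMRMProxyPolicy.class);

// Parameterized messages avoid string concatenation when the level is disabled.
LOG.debug("Splitting {} requests across {} sub-clusters", numRequests, numSubClusters);
{code}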

> Stateless ARMRMProxy policies implementation
> 
>
> Key: YARN-5325
> URL: https://issues.apache.org/jira/browse/YARN-5325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5325-YARN-2915.05.patch, 
> YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, 
> YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, 
> YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, 
> YARN-5325.01.patch, YARN-5325.02.patch, YARN-5325.03.patch, YARN-5325.04.patch
>
>
> This JIRA tracks policies in the AMRMProxy that decide how to forward 
> ResourceRequests, without maintaining substantial state across decisions 
> (e.g., broadcast).






[jira] [Commented] (YARN-5726) Test exception in TestContainersMonitorResourceChange.testContainersResourceChange when trying to get NMTimelinePublisher

2016-10-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570490#comment-15570490
 ] 

Naganarasimha G R commented on YARN-5726:
-

Hi [~miklos.szeg...@cloudera.com],
It looks like YARN-5725 and this one have almost the same cause, namely that 
the container Id is not found, so could you handle them in a single jira and 
also check the root cause of why the container id is not found? At first glance 
it looks like the container id is removed from the monitor first on completion 
and then from the NMContext, so I am not sure about the scenario where the 
container is not found and results in the NPE.

> Test exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when trying 
> to get NMTimelinePublisher
> -
>
> Key: YARN-5726
> URL: https://issues.apache.org/jira/browse/YARN-5726
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN-5726.000.patch
>
>
> 2016-10-12 14:38:39,970 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:587)






[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570387#comment-15570387
 ] 

Li Lu commented on YARN-5699:
-

OK, I don't have a strong preference on that. Once the data is available at 
the entity level I think we're fine. The benefit of replicating it is to not 
treat the created time differently from the finished time, but I'm fine with 
either way.

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at the entity info (top-level) and some at the event info. 
> For containers, some of the event info should be published at the container 
> info level, for example: container exit status, container state, createdTime, 
> and finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field.






[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570322#comment-15570322
 ] 

Sangjin Lee commented on YARN-5667:
---

Thanks for updating the patch. I'll take a closer look. We need to think about 
the sequence of some of the outstanding patches going in. Probably this should 
be done after the outstanding ones are committed?

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.






[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-10-12 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570321#comment-15570321
 ] 

Gour Saha commented on YARN-5610:
-

Thanks [~jianhe]

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch, 
> YARN-5610-yarn-native-services.004.patch, 
> YARN-5610-yarn-native-services.005.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 






[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570302#comment-15570302
 ] 

Sangjin Lee commented on YARN-5699:
---

The created time just seems redundant. Is there something that cannot be done 
by the explicit created time attribute? Rohith said

bq. It is only for easier accessibility rather than going through event time 
stamps.

We're already setting it to the entity created time, so you would *not* be 
going through event time stamps for the created time. So I'm not sure whether 
we need to store it again as part of the entity info.

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top-level) and some is in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general information about the container that is required 
> for the container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5727) Improve YARN shared cache support for LinuxContainerExecutor

2016-10-12 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-5727:
---
Description: 
When running LinuxContainerExecutor in a secure mode 
({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
to {{false}}), all localized files are owned by the user that owns the 
container which localized the resource. This presents a problem for the shared 
cache when a YARN application requests that a resource with non-public visibility 
be uploaded to the shared cache. The shared cache uploader (running as 
the node manager user) does not have access to the localized files and cannot 
compute the checksum of the file or upload it to the cache. The solution should 
ideally satisfy the following three requirements:
# Localized files should still be safe/secure. Other users that run containers 
should not be able to modify, or delete the publicly localized files of others.
# The node manager user should be able to access these files for the purpose of 
checksumming and uploading to the shared cache without being a privileged user.
# The solution should avoid making unnecessary copies of the localized files.


  was:
When running LinuxContainerExecutor in a secure mode 
({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
to {{false}}), all localized files are owned by the user that owns the 
container which localized the resource. This presents a problem for the shared 
cache when a YARN application requests a resource to be uploaded to the shared 
cache that has a non-public visibility. The shared cache uploader (running as 
the node manager user) does not have access to the localized files and can not 
compute the checksum of the file or upload it to the cache. In this document we 
will discuss various solutions to this problem, all of which should ideally 
satisfy the following three requirements:
# Localized files should still be safe/secure. Other users that run containers 
should not be able to modify, or delete the publicly localized files of others.
# The node manager user should be able to access these files for the purpose of 
checksumming and uploading to the shared cache without being a privileged user.
# The solution should avoid making unnecessary copies of the localized files.



> Improve YARN shared cache support for LinuxContainerExecutor
> 
>
> Key: YARN-5727
> URL: https://issues.apache.org/jira/browse/YARN-5727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5727-Design-v1.pdf
>
>
> When running LinuxContainerExecutor in a secure mode 
> ({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
> to {{false}}), all localized files are owned by the user that owns the 
> container which localized the resource. This presents a problem for the 
> shared cache when a YARN application requests a resource to be uploaded to 
> the shared cache that has a non-public visibility. The shared cache uploader 
> (running as the node manager user) does not have access to the localized 
> files and can not compute the checksum of the file or upload it to the cache. 
> The solution should ideally satisfy the following three requirements:
> # Localized files should still be safe/secure. Other users that run 
> containers should not be able to modify, or delete the publicly localized 
> files of others.
> # The node manager user should be able to access these files for the purpose 
> of checksumming and uploading to the shared cache without being a privileged 
> user.
> # The solution should avoid making unnecessary copies of the localized files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570267#comment-15570267
 ] 

Li Lu commented on YARN-5699:
-

Thanks [~sjlee0]. That said, I'm generally fine with Rohith's plan. End time 
looks like it belongs to the entity, too, not just to an event. For start time, it 
seems natural to support queries like "list all the apps that started in one time 
window and finished in another", so if it's not too expensive, I think it's fine to 
also put it in the info field. 

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top-level) and some is in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general information about the container that is required 
> for the container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5727) Improve YARN shared cache support for LinuxContainerExecutor

2016-10-12 Thread Chris Trezzo (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Trezzo updated YARN-5727:
---
Attachment: YARN-5727-Design-v1.pdf

V1 of a design doc posted. Please let me know your thoughts. Thanks!

> Improve YARN shared cache support for LinuxContainerExecutor
> 
>
> Key: YARN-5727
> URL: https://issues.apache.org/jira/browse/YARN-5727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5727-Design-v1.pdf
>
>
> When running LinuxContainerExecutor in a secure mode 
> ({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
> to {{false}}), all localized files are owned by the user that owns the 
> container which localized the resource. This presents a problem for the 
> shared cache when a YARN application requests a resource to be uploaded to 
> the shared cache that has a non-public visibility. The shared cache uploader 
> (running as the node manager user) does not have access to the localized 
> files and can not compute the checksum of the file or upload it to the cache. 
> In this document we will discuss various solutions to this problem, all of 
> which should ideally satisfy the following three requirements:
> # Localized files should still be safe/secure. Other users that run 
> containers should not be able to modify, or delete the publicly localized 
> files of others.
> # The node manager user should be able to access these files for the purpose 
> of checksumming and uploading to the shared cache without being a privileged 
> user.
> # The solution should avoid making unnecessary copies of the localized files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5727) Improve YARN shared cache support for LinuxContainerExecutor

2016-10-12 Thread Chris Trezzo (JIRA)
Chris Trezzo created YARN-5727:
--

 Summary: Improve YARN shared cache support for 
LinuxContainerExecutor
 Key: YARN-5727
 URL: https://issues.apache.org/jira/browse/YARN-5727
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chris Trezzo
Assignee: Chris Trezzo


When running LinuxContainerExecutor in a secure mode 
({{yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users}} set 
to {{false}}), all localized files are owned by the user that owns the 
container which localized the resource. This presents a problem for the shared 
cache when a YARN application requests that a resource with non-public visibility 
be uploaded to the shared cache. The shared cache uploader (running as 
the node manager user) does not have access to the localized files and cannot 
compute the checksum of the file or upload it to the cache (a small sketch of this 
checksumming step follows the requirement list below). In this document we 
will discuss various solutions to this problem, all of which should ideally 
satisfy the following three requirements:
# Localized files should still be safe/secure. Other users that run containers 
should not be able to modify, or delete the publicly localized files of others.
# The node manager user should be able to access these files for the purpose of 
checksumming and uploading to the shared cache without being a privileged user.
# The solution should avoid making unnecessary copies of the localized files.
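
To make requirement 2 concrete, here is a minimal sketch of the read-and-checksum step the uploader needs to perform. This is illustrative only: the real uploader goes through YARN's pluggable shared cache checksum mechanism, and the SHA-256 digest and class name below are assumptions, not the actual code.

{code}
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalizedFileChecksumSketch {
  // The NM-user uploader must be able to open the localized file in order to
  // checksum it before uploading. That open() is exactly what fails today when
  // the file is owned by the application user with restrictive permissions.
  public static String checksum(Configuration conf, Path localizedFile)
      throws IOException {
    FileSystem localFs = FileSystem.getLocal(conf);
    try (InputStream in = localFs.open(localizedFile)) {
      return DigestUtils.sha256Hex(in);
    }
  }
}
{code}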




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570224#comment-15570224
 ] 

Sangjin Lee commented on YARN-5699:
---

Thanks for the clarification. Then what do you think of the currently proposed 
set of items in the patch? Do you think they all belong at the entity level? I 
think one thing we were debating was the diagnostic info.

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top-level) and some is in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general information about the container that is required 
> for the container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5726) Test exception in TestContainersMonitorResourceChange.testContainersResourceChange when trying to get NMTimelinePublisher

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570207#comment-15570207
 ] 

Hadoop QA commented on YARN-5726:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 17 unchanged - 1 fixed = 18 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 1s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832994/YARN-5726.000.patch |
| JIRA Issue | YARN-5726 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 34eb233fafac 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 12d739a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13367/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13367/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13367/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Test exception in 
> 

[jira] [Updated] (YARN-5726) Test exception in TestContainersMonitorResourceChange.testContainersResourceChange when trying to get NMTimelinePublisher

2016-10-12 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5726:
-
Attachment: YARN-5726.000.patch

Updated the code to do a basic null pointer check
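
For context, a schematic sketch of the kind of guard described above, in the spirit of the NM monitoring flow; the names (including the publisher accessor) are assumptions and this is not a copy of the patch:

{code}
// Sketch only: the NPE occurs when the monitoring thread looks up a container
// that the (mocked) test context does not actually contain.
Container container = context.getContainers().get(containerId);
// Obtain the per-container timeline publisher (accessor name is an assumption).
NMTimelinePublisher publisher =
    (container == null) ? null : container.getNMTimelinePublisher();
if (publisher != null) {
  // ... publish the sampled resource usage for this container ...
}
{code}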

> Test exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when trying 
> to get NMTimelinePublisher
> -
>
> Key: YARN-5726
> URL: https://issues.apache.org/jira/browse/YARN-5726
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN-5726.000.patch
>
>
> 2016-10-12 14:38:39,970 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:587)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5726) Test exception in TestContainersMonitorResourceChange.testContainersResourceChange when trying to get NMTimelinePublisher

2016-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570098#comment-15570098
 ] 

Miklos Szegedi commented on YARN-5726:
--

The code in question appeared in YARN-4356 ("Ensure the timeline service v.2 is 
disabled cleanly and has no impact when it's turned off. Contributed by Sangjin 
Lee.").

> Test exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when trying 
> to get NMTimelinePublisher
> -
>
> Key: YARN-5726
> URL: https://issues.apache.org/jira/browse/YARN-5726
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
>
> 2016-10-12 14:38:39,970 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:587)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570089#comment-15570089
 ] 

Hadoop QA commented on YARN-5725:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 17 unchanged - 1 fixed = 18 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 2s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832986/YARN-5725.000.patch |
| JIRA Issue | YARN-5725 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9006ea44c019 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 85cd06f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13366/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13366/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13366/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations

2016-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570063#comment-15570063
 ] 

Wangda Tan commented on YARN-5256:
--

[~Naganarasimha], 

bq. would be good to have both the labelName and associated nodes info as 
params instead of path variable (similar to current patch to fetch information 
for set of labels)
I'm OK with both approaches.

bq. Would be need to return the used resource info ? we need to fetch from 
scheduler ? i presume not required every time right? any plans in UI?
It will be used by the UI sooner or later, so I'm OK with this JIRA not including 
the used info. We can add it in a separate JIRA.

Thanks,

> Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, 
> YARN-5256-YARN-3368.2.patch, YARN-5256.0001.patch, YARN-5256.0002.patch, 
> YARN-5256.0003.patch
>
>
> Add a new REST endpoint to fetch few more detailed information about node 
> labels such as resource, list of nodes etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5708) Implement APIs to get resource profiles from the RM

2016-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15570044#comment-15570044
 ] 

Wangda Tan commented on YARN-5708:
--

Thanks [~vvasudev], some comments:

- Better to mark unstable for 
GetAllResourceProfilesRequest/GetAllResourceProfilesResponse/GetResourceProfileRequest/GetResourceProfileResponse?
- As mentioned by Arun, it might be better to add some javadocs to 
{{ProfileCapability}}. For example, I'm a little confused about why the field 
{{getProfileCapabilityOverride}} has the suffix "Override".
- {{ProfileCapability#getProfile}}, update to {{getProfileName}}?
- Trivial comment: ClientRMService#getResourceProfiles/getResourceProfile can 
be merged.

> Implement APIs to get resource profiles from the RM
> ---
>
> Key: YARN-5708
> URL: https://issues.apache.org/jira/browse/YARN-5708
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5708-YARN-3926.001.patch, 
> YARN-5708-YARN-3926.002.patch
>
>
> Implement a set of APIs to get the available resource profiles from the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5726) Test exception in TestContainersMonitorResourceChange.testContainersResourceChange when trying to get NMTimelinePublisher

2016-10-12 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-5726:


 Summary: Test exception in 
TestContainersMonitorResourceChange.testContainersResourceChange when trying to 
get NMTimelinePublisher
 Key: YARN-5726
 URL: https://issues.apache.org/jira/browse/YARN-5726
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi
Priority: Trivial


2016-10-12 14:38:39,970 WARN  [Container Monitor] monitor.ContainersMonitorImpl 
(ContainersMonitorImpl.java:run(594)) - Uncaught exception in 
ContainersMonitorImpl while monitoring resource of 
container_123456_0001_01_01
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:587)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5725:
-
Attachment: YARN-5725.000.patch

Updated the code to do a basic null pointer check
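
For reference, a schematic sketch of the guard, assuming the monitoring thread's IP/host update from YARN-5430 looks roughly like this; the method names are assumptions, not the actual patch:

{code}
// Sketch only: guard against a null container before asking the executor for
// its IP and host, which is where the test hits the NullPointerException.
Container container = context.getContainers().get(containerId);
if (container != null) {
  String[] ipAndHost = containerExecutor.getIpAndHost(container);
  if (ipAndHost != null) {
    // ... record the container's IP address and hostname ...
  }
}
{code}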

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5725.000.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning, but it prevents the container monitor from continuing
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569980#comment-15569980
 ] 

Li Lu commented on YARN-5699:
-

bq. Are you proposing elevating all event info items to the entity info unless 
they really belong only to events?

This is my intention. If the event carries information about the timeline 
entity, why not post it as entity-level info? 

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top-level) and some is in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general information about the container that is required 
> for the container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569971#comment-15569971
 ] 

Sangjin Lee commented on YARN-5699:
---

bq. What about end time? End time has to be get from event time stamp. But 
anyway, I have added a comment for streamlining YARN entities publishing, I 
think we should decide for providing YARN entities converter rather than fixing 
individual many bugs.

Created time is a first-class attribute of an entity, so it would be good to 
use that instead of a generic entity info field. End time is not a first-class 
attribute, so entity info would be a natural place.

{quote}
Actually let's take one step backward... What is the key benefit for keeping 
most data in event info, rather than in entity info? To me, storing data in 
entity info map looks a good representation that the data "belongs to" the 
whole entity, but not just one event. Even though some data may come from a 
certain type of timeline event, if it adds useful information to the whole 
entity, we should put them in entity info field. I'm not sure if there are some 
implementation side concerns that prevent us from doing this. However, even if 
so, I believe we should fix those blockers but not solve the problem the other 
way around (avoid putting data in entity info, but just put them in event 
info). I would de-prioritize the work on making the v2 code path consistent 
with v1's if this is a problem for our current design.
{quote}

I'm not so sure if I understand what you meant by this. Are you proposing 
elevating all event info items to the entity info unless they really belong 
only to events? Or are you proposing not to do this work? Bit confused...
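
To make the created-time versus end-time distinction concrete, here is a hedged sketch of a container entity built with the ATS v2 API as I understand it; the entity type and info key names are illustrative, not necessarily what the patch uses:

{code}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;

public class ContainerEntitySketch {
  public static TimelineEntity buildFinishedContainerEntity(
      String containerId, long createdTime, long finishedTime, int exitStatus) {
    TimelineEntity entity = new TimelineEntity();
    entity.setType("YARN_CONTAINER");   // type name is illustrative
    entity.setId(containerId);

    // Created time is a first-class attribute of the entity, so use the
    // dedicated setter instead of duplicating it in the info map.
    entity.setCreatedTime(createdTime);

    // Finished time and exit status describe the whole container, so publish
    // them as entity-level info rather than only inside the finished event.
    entity.addInfo("YARN_CONTAINER_FINISHED_TIME", finishedTime);
    entity.addInfo("YARN_CONTAINER_EXIT_STATUS", exitStatus);

    // The finished event itself still carries its own timestamp and any
    // details that belong only to that event.
    TimelineEvent finished = new TimelineEvent();
    finished.setId("YARN_CONTAINER_FINISHED");
    finished.setTimestamp(finishedTime);
    entity.addEvent(finished);
    return entity;
  }
}
{code}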

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top-level) and some is in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general information about the container that is required 
> for the container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569966#comment-15569966
 ] 

Miklos Szegedi commented on YARN-5725:
--

The code to be changed was introduced by YARN-5430 ("Return container's ip and 
host from NM ContainerStatus call").

> Test uncaught exception in 
> TestContainersMonitorResourceChange.testContainersResourceChange when setting 
> IP and host
> 
>
> Key: YARN-5725
> URL: https://issues.apache.org/jira/browse/YARN-5725
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The issue is a warning, but it prevents the container monitor from continuing
> 2016-10-12 14:38:23,280 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(594)) - 
> Uncaught exception in ContainersMonitorImpl while monitoring resource of 
> container_123456_0001_01_01
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
> 2016-10-12 14:38:23,281 WARN  [Container Monitor] 
> monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(613)) - 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>  is interrupted. Exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5725) Test uncaught exception in TestContainersMonitorResourceChange.testContainersResourceChange when setting IP and host

2016-10-12 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-5725:


 Summary: Test uncaught exception in 
TestContainersMonitorResourceChange.testContainersResourceChange when setting 
IP and host
 Key: YARN-5725
 URL: https://issues.apache.org/jira/browse/YARN-5725
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi
Priority: Minor


The issue is a warning, but it prevents the container monitor from continuing
2016-10-12 14:38:23,280 WARN  [Container Monitor] monitor.ContainersMonitorImpl 
(ContainersMonitorImpl.java:run(594)) - Uncaught exception in 
ContainersMonitorImpl while monitoring resource of 
container_123456_0001_01_01
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:455)
2016-10-12 14:38:23,281 WARN  [Container Monitor] monitor.ContainersMonitorImpl 
(ContainersMonitorImpl.java:run(613)) - 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
 is interrupted. Exiting.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5724) [Umbrella] Better Queue Management in YARN

2016-10-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569933#comment-15569933
 ] 

Naganarasimha G R commented on YARN-5724:
-

It's more specific to CS right? 

> [Umbrella] Better Queue Management in YARN
> --
>
> Key: YARN-5724
> URL: https://issues.apache.org/jira/browse/YARN-5724
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>
> This serves as an umbrella ticket for tasks related to better queue 
> management in YARN.
> Today, the only way to manage queues is through admins editing 
> configuration files and then issuing a refresh command. This brings many 
> inconveniences; for example, users cannot create/delete/modify their 
> own queues without talking to site-level admins.
> Even in today's configuration-based approach, there are still several places 
> that need improvement: 
> *  It is possible today to add or modify queues without restarting the RM, 
> via a CS refresh, but to delete a queue we have to restart the 
> resourcemanager.
> * When a queue is STOPPED, resources allocated to the queue could be handled 
> better. Currently, they'll only be used if the other queues are set up to go 
> over their capacity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569902#comment-15569902
 ] 

Vrushali C commented on YARN-5718:
--

Thanks Junping, latest patch v2.1 looks good to me.

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were 
> failing because TimelineClient could not retry the connection to the proper NN. 
> This is because we overwrite the HDFS client settings and hard-code the retry 
> policy to enabled, which conflicts with the NN failover case - the HDFS client 
> should fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different retry 
> policies in different deployment cases. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.
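
As a hedged illustration of the pattern being removed: the HDFS client retry key shown below exists, but whether each of the listed classes sets exactly this key is an assumption to verify against the patch.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class TimelineFsClientSketch {
  public static FileSystem createFs(Configuration yarnConf) throws IOException {
    Configuration fsConf = new Configuration(yarnConf);
    // Problematic pattern: unconditionally forcing the HDFS client retry policy
    // on, regardless of what the deployment's hdfs-site.xml says.
    // fsConf.setBoolean("dfs.client.retry.policy.enabled", true);

    // Fix: do not override it; let the cluster's HDFS client settings decide,
    // so in an HA cluster the client can fail fast and retry on the other NN.
    return FileSystem.newInstance(fsConf);
  }
}
{code}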



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569880#comment-15569880
 ] 

Wangda Tan commented on YARN-5145:
--

Took a quick try with the patch; on my local single-node env, it reports
bq. XMLHttpRequest cannot load 
http://localhost:8188/ws/v1/applicationhistory/apps/application_1476304805106_0004/appattempts/appattempt_1476304805106_0004_01/containers.
 No 'Access-Control-Allow-Origin' header is present on the requested resource. 
Origin 'http://localhost:8088' is therefore not allowed access. 
It doesn't work even after I run {{corsproxy}}. Do we have a CORS config 
for the timeline server? If not, is there any way to run yarn ui v2 
with the timeline server enabled on the same node?
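
For reference, the timeline server does ship a cross-origin filter that can be enabled. Below is a hedged sketch of the relevant properties (normally set in yarn-site.xml / core-site.xml rather than in code); the exact keys should be double-checked against the Hadoop docs.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelineCorsSketch {
  // These are the cross-origin properties believed to apply to the timeline
  // server's web endpoint; treat the key names as assumptions to verify.
  public static Configuration withCors(Configuration base) {
    Configuration conf = new YarnConfiguration(base);
    conf.setBoolean("yarn.timeline-service.http-cross-origin.enabled", true);
    conf.set("hadoop.http.cross-origin.allowed-origins", "*");
    conf.set("hadoop.http.cross-origin.allowed-methods", "GET,HEAD,OPTIONS");
    conf.set("hadoop.http.cross-origin.allowed-headers",
        "X-Requested-With,Content-Type,Accept,Origin");
    return conf;
  }
}
{code}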

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569771#comment-15569771
 ] 

Li Lu commented on YARN-5699:
-

Actually let's take one step backward... What is the key benefit for keeping 
most data in event info, rather than in entity info? To me, storing data in 
entity info map looks a good representation that the data "belongs to" the 
whole entity, but not just one event. Even though some data may come from a 
certain type of timeline event, if it adds useful information to the whole 
entity, we should put them in entity info field. I'm not sure if there are some 
implementation side concerns that prevent us from doing this. However, even if 
so, I believe we should fix those blockers but not solve the problem the other 
way around (avoid putting data in entity info, but just put them in event 
info). I would de-prioritize the work on making the v2 code path consistent 
with v1's if this is a problem for our current design. 

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it is in the entity info (top-level) and some is in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general information about the container that is required 
> for the container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569737#comment-15569737
 ] 

Hadoop QA commented on YARN-5718:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 18s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832957/YARN-5718-v2.1.patch |
| JIRA Issue | YARN-5718 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 1e0d14075cd7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6476934 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13364/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569739#comment-15569739
 ] 

Hadoop QA commented on YARN-5600:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 378 unchanged - 6 fixed = 378 total (was 384) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 7s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832964/YARN-5600.002.patch |
| JIRA Issue | YARN-5600 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 56f439bb743f 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6476934 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13365/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 

[jira] [Commented] (YARN-5724) [Umbrella] Better Queue Management in YARN

2016-10-12 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569710#comment-15569710
 ] 

Xuan Gong commented on YARN-5724:
-

Will add a proposal soon.

> [Umbrella] Better Queue Management in YARN
> --
>
> Key: YARN-5724
> URL: https://issues.apache.org/jira/browse/YARN-5724
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>
> This serves as an umbrella ticket for tasks related to better queue 
> management in YARN.
> Today, the only way to manage queues is through admins editing 
> configuration files and then issuing a refresh command. This brings many 
> inconveniences. For example, users cannot create/delete/modify their own 
> queues without talking to site-level admins.
> Even in today's configuration-based approach, there are still several places 
> that need improvement: 
> *  It is possible today to add or modify queues without restarting the RM, 
> via a CS refresh. But to delete a queue, we have to restart the 
> ResourceManager.
> * When a queue is STOPPED, resources allocated to the queue can be handled 
> better. Currently, they'll only be used if the other queues are set up to go 
> over their capacity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5724) [Umbrella] Better Queue Management in YARN

2016-10-12 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-5724:
---

 Summary: [Umbrella] Better Queue Management in YARN
 Key: YARN-5724
 URL: https://issues.apache.org/jira/browse/YARN-5724
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Xuan Gong
Assignee: Xuan Gong


This serves as an umbrella ticket for tasks related to better queue management 
in YARN.

Today, the only way to manage queues is through admins editing configuration 
files and then issuing a refresh command. This brings many inconveniences. For 
example, users cannot create/delete/modify their own queues without talking to 
site-level admins.

Even in today's configuration-based approach, there are still several places 
that need improvement: 
*  It is possible today to add or modify queues without restarting the RM, via 
a CS refresh. But to delete a queue, we have to restart the ResourceManager.
* When a queue is STOPPED, resources allocated to the queue can be handled 
better. Currently, they'll only be used if the other queues are set up to go 
over their capacity.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-10-12 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5600:
-
Attachment: YARN-5600.002.patch

Fixed checkstyle issues.

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569608#comment-15569608
 ] 

Miklos Szegedi commented on YARN-5600:
--

TestQueuingContainerManager.testKillMultipleOpportunisticContainers: It passes 
locally most of the time, but I have seen instances where it failed with this 
error.

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569595#comment-15569595
 ] 

Hadoop QA commented on YARN-5145:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 6 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 4m 57s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b17 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832960/YARN-5145-YARN-3368.02.patch
 |
| JIRA Issue | YARN-5145 |
| Optional Tests |  asflicense  |
| uname | Linux 7b56f517a87a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 1e47518 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13363/artifact/patchprocess/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13363/artifact/patchprocess/whitespace-tabs.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13363/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569594#comment-15569594
 ] 

Hadoop QA commented on YARN-5325:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 1s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed 
= 74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832955/YARN-5325-YARN-2915.11.patch
 |
| JIRA Issue | YARN-5325 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  shellcheck  shelldocs  |
| uname | Linux cd1a9f4679a6 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 0bf6bbb |
| Default Java | 1.8.0_101 |
| shellcheck | v0.4.4 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13362/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13362/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 

[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-12 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569568#comment-15569568
 ] 

Junping Du commented on YARN-5718:
--

Thanks Vrushali for the quick comments. I think the compile error is a bit 
misleading, but it is indeed an issue that needs fixing in TestFSRMStateStore 
(due to a stupid mistake in generating the v2 patch). v2.1 should fix the issue.

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were 
> failing because TimelineClient could not retry the connection to the proper 
> NN. This is because we overwrite HDFS client settings and hard-code the 
> retry policy to be enabled, which conflicts with the NN failover case - the 
> HDFS client should fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different 
> retry policies in different deployments. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.
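
For illustration, a minimal sketch of the kind of hard-coded override described 
above versus the preferred behavior (the class and method names below are made 
up for this sketch; the real call sites are the YARN classes listed in the 
description):

{code}
import org.apache.hadoop.conf.Configuration;

public class RetryPolicyOverrideSketch {

  /** The kind of hard-coded override being cleaned up (illustrative). */
  static Configuration overriddenConf() {
    Configuration conf = new Configuration();
    // Forcing client-side retries on overrides whatever the cluster's
    // hdfs-site.xml says; in an HA cluster this defeats the fail-fast
    // behavior the client needs in order to try the other NameNode.
    conf.setBoolean("dfs.client.retry.policy.enabled", true);
    return conf;
  }

  /** Preferred: leave HDFS client retry settings to the deployment. */
  static Configuration deploymentConf() {
    return new Configuration(); // picks up hdfs-site.xml as-is
  }
}
{code}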



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-12 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5145:
--
Attachment: YARN-5145-YARN-3368.02.patch

Attaching an initial patch.

This will download RM address and Timeline address as per HADOOP-13628.

Some more tests are needed. I'll update progress here. [~lewuathe], if you are 
not planning to work on this, I'll try taking it over. 

[~leftnoteasy] and [~Sreenath], please help check the approach. 

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-12 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-5718:
-
Attachment: YARN-5718-v2.1.patch

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were 
> failing because TimelineClient could not retry the connection to the proper 
> NN. This is because we overwrite HDFS client settings and hard-code the 
> retry policy to be enabled, which conflicts with the NN failover case - the 
> HDFS client should fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different 
> retry policies in different deployments. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-12 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569553#comment-15569553
 ] 

Sangjin Lee commented on YARN-5561:
---

I recognize the pain point you're mentioning in terms of wanting the data in 
the form of {{ApplicationAttemptReport}} or {{ContainerReport}}. That said, 
adding a new daemon should not be done lightly. I am of the opinion that the 
bar for adding a new component to the system should be high. I'm not sure if 
this meets that bar.

Also, please note that what's contained in the current REST output would 
likely be a *superset* of {{*Report}}; in other words, there are things in the 
entity API that are not in the {{*Report}}. Things like the uid and the 
currently discussed entity id prefix come to mind. So we cannot simply replace 
the return type, or things will be crippled.

As we briefly discussed during the call the other day, what we need is a 
translation layer that can create a Report object out of the timeline entity. 
If we implement such a translation layer, would it satisfy this? That way, the 
client gets the full timeline entity information, and it can convert it into a 
report object fairly easily without writing much code.
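
For illustration, one possible shape of such a translation layer (the interface 
name is hypothetical, not an existing API; only the existing record types are 
referenced):

{code}
import org.apache.hadoop.yarn.api.records.ContainerReport;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

/**
 * Hypothetical sketch of the translation layer discussed above: the client
 * keeps the full TimelineEntity (uid, id prefix, etc.) and derives a report
 * view from it on demand instead of the server changing its return type.
 */
public interface ContainerEntityTranslator {
  ContainerReport toContainerReport(TimelineEntity containerEntity);
}
{code}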

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-10-12 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5325:
---
Attachment: YARN-5325-YARN-2915.11.patch

> Stateless ARMRMProxy policies implementation
> 
>
> Key: YARN-5325
> URL: https://issues.apache.org/jira/browse/YARN-5325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5325-YARN-2915.05.patch, 
> YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, 
> YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, 
> YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, 
> YARN-5325.01.patch, YARN-5325.02.patch, YARN-5325.03.patch, YARN-5325.04.patch
>
>
> This JIRA tracks policies in the AMRMProxy that decide how to forward 
> ResourceRequests, without maintaining substantial state across decisions 
> (e.g., broadcast).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-12 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569539#comment-15569539
 ] 

Li Lu commented on YARN-5561:
-

I'm generally OK with option 1, as it's quite consistent with my original 
thought (see the comment on Aug. 30). The only thing I would like to mention 
here is that we gave up this idea before because we thought that when the web 
UI requests data, it would need to contact two different endpoints. I'm not 
sure whether this problem is alleviated with the current web UI use cases. 

bq. This approach can be modified/enhanced to pulling out of ATSv2 to separate 
daemon service OR can be start new service in RM itself with application 
history.
I'm inclined not to go this far as of now. Sure, architecturally we can keep 
this flexibility, but could you elaborate more on the concrete use cases that 
would need a separate daemon for now? 

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-10-12 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569542#comment-15569542
 ] 

Carlo Curino commented on YARN-5325:


Thanks again for the suggestions.

As per our offline discussion, AllocationBookeeper cannot be reinitialized 
once and for all, as the accumulation of weights it does in reinit depends on 
the (possibly changing) set of active subclusters. 

We do log at debug level if the SubClusterResolver throws, but I added an 
if(debug) guard to avoid costly string construction when we are not in debug 
mode, per your advice. 
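
For reference, a minimal sketch of that kind of guard (the class, logger and 
message below are illustrative only, not the actual policy code):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugGuardSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugGuardSketch.class);

  void onResolverFailure(Exception e, String requestDescription) {
    // Only build the (potentially costly) message when debug is enabled.
    if (LOG.isDebugEnabled()) {
      LOG.debug("SubClusterResolver threw for request " + requestDescription
          + "; falling back to the default mapping", e);
    }
  }
}
{code}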

Using "continue" makes the code much more legible, thanks for the suggestion.

The createResourceRequest is invoked tens of times in 
TestLocalityMulticastAMRMProxyFederationPolicy (now renamed to drop the 
federation), so I would prefer to leave it as is. 

> Stateless ARMRMProxy policies implementation
> 
>
> Key: YARN-5325
> URL: https://issues.apache.org/jira/browse/YARN-5325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5325-YARN-2915.05.patch, 
> YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, 
> YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, 
> YARN-5325-YARN-2915.10.patch, YARN-5325.01.patch, YARN-5325.02.patch, 
> YARN-5325.03.patch, YARN-5325.04.patch
>
>
> This JIRA tracks policies in the AMRMProxy that decide how to forward 
> ResourceRequests, without maintaining substantial state across decisions 
> (e.g., broadcast).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569537#comment-15569537
 ] 

Vrushali C commented on YARN-5718:
--

Thanks Junping, the updated patch looks good. 

Not sure what the compilation error is:
{code}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR]   where E is a type-variable:
E extends Object declared in method toSet(E...)
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java:[326,36]
 error: cannot find symbol
[INFO] 1 erro
{code}

Perhaps unrelated. 

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-5718-v2.patch, YARN-5718.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were 
> failing because TimelineClient could not retry the connection to the proper 
> NN. This is because we overwrite HDFS client settings and hard-code the 
> retry policy to be enabled, which conflicts with the NN failover case - the 
> HDFS client should fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different 
> retry policies in different deployments. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569508#comment-15569508
 ] 

Karthik Kambatla commented on YARN-4597:


Thanks for working on this, [~asuresh]. The approach looks reasonable to me. 
The patch is pretty big; it might be easier to use Github PR or RB for a 
thorough review - especially for minor comments.

High-level comments on the patch, keeping YARN-1011 work in mind:
# Really like that we are getting rid of QueuingContainer* classes, letting 
both guaranteed/queued containers go through the same code path 
# In ContainerScheduler, I see there are two code paths leading to starting a 
container for when enough resources are available or not. Did you consider a 
single path where we queue containers directly and let another thread launch 
them? This thread could be triggered immediately on queuing a container, on 
completion of a container, and periodically for cases where we do resource 
oversubscription.
# The methods for killing containers as needed all seem to be hardcoded to 
only consider allocated resources. Can we abstract this out further to allow 
passing either allocation or utilization, based on whether oversubscription is 
enabled? 
# Relatively minor: resourcesToFreeUp is initialized to the container 
allocation on the node. Shouldn't it be initialized to zero? Maybe I am 
missing something. 

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4493) move queue can make app don't belong to any queue

2016-10-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569482#comment-15569482
 ] 

Yufei Gu commented on YARN-4493:


Hi [~jiangyu1211], thanks for the information. The easiest way to identify 
this issue would be to drop a debug jar into the RM and reproduce it. That 
might help us more. BTW, I don't think there is any maintenance release for 
2.4. 

> move queue can make app don't belong to any queue
> -
>
> Key: YARN-4493
> URL: https://issues.apache.org/jira/browse/YARN-4493
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.4.0, 2.6.0, 2.7.1
>Reporter: jiangyu
>Assignee: Yufei Gu
>Priority: Minor
> Attachments: YARN-4493.001.patch, yarn-4493.patch.1
>
>
> When moving a running application to a different queue, the current 
> implementation doesn't check whether the app can run in the new queue before 
> removing it from the current queue. So if the destination queue is full, the 
> move will throw an exception and the app will not belong to any queue.
> After that, the app becomes orphaned and cannot be scheduled any resources. 
> If you kill the app, the removeApp method in FSLeafQueue will throw an 
> IllegalStateException of "Given app to remove app does not exist in queue 
> ...".
> So I think we should check whether the destination queue can run the app 
> before removing it from the current queue.
> The patch is from our revision.
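
For illustration, a sketch of the validate-before-move ordering described above 
(the types below are hypothetical placeholders, not the actual 
FairScheduler/FSLeafQueue classes):

{code}
// Hypothetical placeholder types; the point is the ordering of operations.
interface Queue {
  boolean canRunApp(String appId);
  void removeApp(String appId);
  void addApp(String appId);
}

final class MoveAppSketch {
  /**
   * Validates the destination queue *before* detaching the app from its
   * current queue, so a rejected move can never leave the app orphaned.
   */
  static void moveApp(String appId, Queue source, Queue target) {
    if (!target.canRunApp(appId)) {
      throw new IllegalStateException(
          "Destination queue cannot accept app " + appId + "; move rejected");
    }
    source.removeApp(appId);
    target.addApp(appId);
  }
}
{code}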



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-12 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569415#comment-15569415
 ] 

Varun Saxena commented on YARN-5561:


Similar to option 1, having separate endpoints is what we had in AHS/ATSv1 
too, i.e. in the form of AHSWebServices. That will clearly delink the APIs. 
What do we name it though? {{ws/v2/applicationhistory}}?
We can further add things like serving container logs from this endpoint too, 
and it can act as a one-stop destination for fetching YARN-specific history 
data.

I could not get the intention behind pulling it out to a separate daemon 
though. Can you elaborate on that?

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are highly required for the Web UI.
> The new REST URLs would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569397#comment-15569397
 ] 

Hadoop QA commented on YARN-5610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
53s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 14 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api:
 The patch generated 8 new + 92 unchanged - 11 fixed = 100 total (was 103) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 9s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api
 generated 2 new + 117 unchanged - 2 fixed = 119 total (was 119) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 27s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832942/YARN-5610-yarn-native-services.005.patch
 |
| JIRA Issue | YARN-5610 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b1d4c2962778 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 7224bdb |
| Default Java | 1.8.0_101 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13361/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api.txt
 |
| findbugs | v3.0.0 |
| findbugs | 

[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-10-12 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569357#comment-15569357
 ] 

Gour Saha commented on YARN-5610:
-

It also aligns with the swagger spec newly uploaded to YARN-4793.

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch, 
> YARN-5610-yarn-native-services.004.patch, 
> YARN-5610-yarn-native-services.005.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5675) Checkin swagger definition in the repo

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569339#comment-15569339
 ] 

Hadoop QA commented on YARN-5675:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
11s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832937/YARN-5675-yarn-native-services.001.patch
 |
| JIRA Issue | YARN-5675 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  |
| uname | Linux 7247fda26840 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 7224bdb |
| Default Java | 1.8.0_101 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13360/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13360/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13360/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13360/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13360/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Checkin swagger definition in the repo
> --
>
> Key: YARN-5675
> URL: https://issues.apache.org/jira/browse/YARN-5675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> 

[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-10-12 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569335#comment-15569335
 ] 

Gour Saha commented on YARN-5610:
-

Thanks [~jianhe]. I have one additional patch with minor cosmetic changes, 
which also resolves a few of the reported checkstyle issues. Please review and 
commit if it looks good.

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch, 
> YARN-5610-yarn-native-services.004.patch, 
> YARN-5610-yarn-native-services.005.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5610) Initial code for native services REST API

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5610:

Attachment: YARN-5610-yarn-native-services.005.patch

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch, 
> YARN-5610-yarn-native-services.004.patch, 
> YARN-5610-yarn-native-services.005.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5610) Initial code for native services REST API

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reopened YARN-5610:
-

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch, 
> YARN-5610-yarn-native-services.004.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5675) Checkin swagger definition in the repo

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5675:

Attachment: (was: YARN-5675-yarn-native-services.001.patch)

> Checkin swagger definition in the repo
> --
>
> Key: YARN-5675
> URL: https://issues.apache.org/jira/browse/YARN-5675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5675-yarn-native-services.001.patch
>
>
> This task will be used to submit the REST API swagger definition (yaml 
> format) to be checked in to the repo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5675) Checkin swagger definition in the repo

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5675:

Attachment: YARN-5675-yarn-native-services.001.patch

> Checkin swagger definition in the repo
> --
>
> Key: YARN-5675
> URL: https://issues.apache.org/jira/browse/YARN-5675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5675-yarn-native-services.001.patch
>
>
> This task will be used to submit the REST API swagger definition (yaml 
> format) to be checked in to the repo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations

2016-10-12 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569246#comment-15569246
 ] 

Sunil G commented on YARN-5256:
---

Adding to the above discussion:

bq. Would we need to return the used resource info? We would need to fetch it 
from the scheduler? I presume it is not required every time, right? Any plans 
in the UI?
I think for the UI, this will be helpful.

> Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, 
> YARN-5256-YARN-3368.2.patch, YARN-5256.0001.patch, YARN-5256.0002.patch, 
> YARN-5256.0003.patch
>
>
> Add a new REST endpoint to fetch few more detailed information about node 
> labels such as resource, list of nodes etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5677) RM should transition to standby when connection is lost for an extended period

2016-10-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5677:
---
Fix Version/s: 3.0.0-alpha2

> RM should transition to standby when connection is lost for an extended period
> --
>
> Key: YARN-5677
> URL: https://issues.apache.org/jira/browse/YARN-5677
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5677.001.patch, YARN-5677.002.patch, 
> YARN-5677.003.patch, YARN-5677.004.patch, YARN-5677.005.patch
>
>
> In trunk, there is no maximum number of retries that I see.  It appears the 
> connection will be retried forever, with the active never figuring out it's 
> no longer active.  In my testing, the active-active state lasted almost 2 
> hours with no sign of stopping before I killed it.  The solution appears to 
> be to cap the number of retries or amount of time spent retrying.
> This issue is significant because of the asynchronous nature of job 
> submission.  If the active doesn't know it's not active, it will buffer up 
> job submissions until it finally realizes it has become the standby. Then it 
> will fail all the job submissions in bulk. In high-volume workflows, that 
> behavior can create huge mass job failures.
> This issue is also important because the node managers will not fail over to 
> the new active until the old active realizes it's the standby.  Workloads 
> submitted after the old active loses contact with ZK will therefore fail to 
> be executed regardless of which RM the clients contact.
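
For illustration, a minimal sketch of the "cap the time spent retrying" idea 
from the description (the method names, durations, and recovery action are 
assumptions for the sketch, not the RM's actual elector code):

{code}
// Illustrative retry loop with a hard deadline; once the budget is exhausted
// the RM stops pretending to be active and demotes itself.
public abstract class BoundedRetrySketch {
  protected abstract boolean tryReconnect();   // e.g. re-establish ZK session
  protected abstract void transitionToStandby();

  public void reconnectOrDemote(long budgetMillis, long retryIntervalMillis)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + budgetMillis;
    while (!tryReconnect()) {
      if (System.currentTimeMillis() >= deadline) {
        // Connection lost for an extended period: give up the active role.
        transitionToStandby();
        return;
      }
      Thread.sleep(retryIntervalMillis);
    }
  }
}
{code}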



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5675) Checkin swagger definition in the repo

2016-10-12 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5675:

Attachment: YARN-5675-yarn-native-services.001.patch

Submitting the "Hadoop YARN REST APIs for services v1 spec" as a patch, to be 
checked in to hadoop repo. It is in Swagger YAML format.

> Checkin swagger definition in the repo
> --
>
> Key: YARN-5675
> URL: https://issues.apache.org/jira/browse/YARN-5675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-5675-yarn-native-services.001.patch
>
>
> This task will be used to submit the REST API swagger definition (yaml 
> format) to be checked in to the repo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569200#comment-15569200
 ] 

Arun Suresh commented on YARN-4597:
---

What I meant was that the current patch does not need synchronized collections 
(unlike the {{QueuingContainerManager}}), since it runs on the same thread as 
the ContainerManager's AsyncDispatcher.
I will update the patch to use a LinkedBlockingQueue shortly.
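
For illustration, a rough sketch of the queue-plus-single-launcher-thread idea 
being discussed (class and method names are hypothetical, not the actual 
ContainerScheduler code):

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class ContainerLaunchQueueSketch<C> {
  private final BlockingQueue<C> queued = new LinkedBlockingQueue<>();
  private final Thread launcher;

  public ContainerLaunchQueueSketch(Consumer<C> launchFn) {
    // A single launcher thread drains the queue, so queuing and launching
    // stay serial without synchronized collections on the dispatcher side.
    launcher = new Thread(() -> {
      try {
        while (true) {
          launchFn.accept(queued.take());
        }
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      }
    }, "container-launcher");
    launcher.setDaemon(true);
  }

  public void start() {
    launcher.start();
  }

  /** Called from the dispatcher thread: enqueue and return immediately. */
  public void schedule(C container) {
    queued.add(container);
  }
}
{code}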

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5256) Add REST endpoint to support detailed NodeLabel Informations

2016-10-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569178#comment-15569178
 ] 

Naganarasimha G R commented on YARN-5256:
-

Thanks [~wangda], I was expecting this and hence waited :)
Yes, I was of the same opinion when [~sunilg] came up with the first patch, but 
later, during a discussion with him, I felt that once we come up with 
Constraints (YARN-3409) we might need to treat them differently, with separate 
APIs to retrieve the constraints and the partition label info. So the API 
should be shaped based on how the constraint labels turn out. As per an offline 
discussion with [~sunilg], he informed me that he needs a solution for this 
JIRA to meet the new UI deadlines, so I am OK with going with the other 
approach.
A few options for this approach:
* It would be good to have both the {{labelName}} and the {{associated nodes 
info}} as query params instead of path variables (similar to the current patch, 
which fetches information for a set of labels); a rough sketch follows below.
* Do we need to return the used resource info? Would we need to fetch it from 
the scheduler? I presume it is not required every time, right? Any plans in the 
UI?

Thoughts?
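Purely as an illustration of the first option, a rough JAX-RS sketch (the 
resource path, class, and method names here are assumptions, not the actual 
RMWebServices code) with the label names passed as a query parameter instead 
of a path variable:

{code:java}
import java.util.Arrays;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/ws/v1/cluster")
public class LabelInfoResource {                  // hypothetical resource class

  @GET
  @Path("/label-info")
  @Produces(MediaType.APPLICATION_JSON)
  // e.g. GET /ws/v1/cluster/label-info?labels=x,y ; omitting "labels"
  // could mean "return info for all labels".
  public List<NodeLabelDetail> getLabelInfo(@QueryParam("labels") String labels) {
    List<String> names = labels == null
        ? null : Arrays.asList(labels.split(","));
    return lookup(names);                         // placeholder
  }

  private List<NodeLabelDetail> lookup(List<String> names) {
    return null;                                  // placeholder
  }
}

class NodeLabelDetail { }                         // hypothetical DTO
{code}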

> Add REST endpoint to support detailed NodeLabel Informations
> 
>
> Key: YARN-5256
> URL: https://issues.apache.org/jira/browse/YARN-5256
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-5256-YARN-3368.1.patch, 
> YARN-5256-YARN-3368.2.patch, YARN-5256.0001.patch, YARN-5256.0002.patch, 
> YARN-5256.0003.patch
>
>
> Add a new REST endpoint to fetch few more detailed information about node 
> labels such as resource, list of nodes etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569147#comment-15569147
 ] 

Jian He commented on YARN-4597:
---

bq. This will preserve the serial nature of operation (and thereby keep the 
code simple by not needing synchronized collections)
I don't actually see a synchronized collection in the ContainerScheduler. I 
agree with the point that this could avoid holding up the main ContainerManager 
thread. We can have the ContainerScheduler create its own dispatcher.
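As a rough illustration of that alternative (the event-type enum and class 
names below are hypothetical, not the ones in the patch), the scheduler could 
register with its own {{AsyncDispatcher}}:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.event.AsyncDispatcher;
import org.apache.hadoop.yarn.event.EventHandler;

// Hypothetical event-type enum; the patch may name these differently.
enum ContainerSchedulerEventType { SCHEDULE_CONTAINER, CONTAINER_COMPLETED }

class ContainerSchedulerWiring {
  void wire(Configuration conf, EventHandler containerScheduler) {
    // A dedicated dispatcher gives the scheduler its own event-handling
    // thread, so the ContainerManager's dispatcher is never held up.
    AsyncDispatcher schedulerDispatcher = new AsyncDispatcher();
    schedulerDispatcher.register(ContainerSchedulerEventType.class,
        containerScheduler);
    schedulerDispatcher.init(conf);
    schedulerDispatcher.start();
  }
}
{code}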


> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-10-12 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569146#comment-15569146
 ] 

Naganarasimha G R commented on YARN-5697:
-

Hi [~Tao Jie],
IMO we have never stated anywhere that we support the format {{"rmadmin 
-replaceLabelsOnNode -directlyAccessNodeLabelStore node1=label1"}}, and ideally 
it is wrong too, so I would suggest using {{"node1=label1"}} as the option 
value of {{replaceLabelsOnNode}} itself (sorry to go back and forth on this). 
Further, there have been talks of discarding the 
{{directlyAccessNodeLabelStore}} option, which is used by no one.
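For illustration only, a small sketch of how that could look with Apache 
Commons CLI (the option and variable names are assumptions, not the actual 
RMAdminCLI code), where the node-to-label mappings are the values of 
{{-replaceLabelsOnNode}} itself:

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class ReplaceLabelsCliSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    // -replaceLabelsOnNode takes the "node=label" mappings directly as values.
    Option replace = new Option("replaceLabelsOnNode", true,
        "replace labels on nodes, e.g. node1=label1 node2=label2");
    replace.setArgs(Option.UNLIMITED_VALUES);
    opts.addOption(replace);

    // e.g. args = {"-replaceLabelsOnNode", "node1=label1", "node2=label2"}
    CommandLine cli = new GnuParser().parse(opts, args);
    if (cli.hasOption("replaceLabelsOnNode")) {
      for (String mapping : cli.getOptionValues("replaceLabelsOnNode")) {
        String[] parts = mapping.split("=", 2);
        if (parts.length == 2) {
          System.out.println("node " + parts[0] + " -> labels " + parts[1]);
        }
      }
    }
  }
}
{code}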



> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Fix For: 2.8.0
>
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5677) RM should transition to standby when connection is lost for an extended period

2016-10-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569122#comment-15569122
 ] 

Karthik Kambatla commented on YARN-5677:


Committed this to trunk. 

The patch does not compile with branch-2. Looks like some type issues with 
{{any()}} in tests. [~templedf] - can you post a branch-2 patch as well? 

> RM should transition to standby when connection is lost for an extended period
> --
>
> Key: YARN-5677
> URL: https://issues.apache.org/jira/browse/YARN-5677
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5677.001.patch, YARN-5677.002.patch, 
> YARN-5677.003.patch, YARN-5677.004.patch, YARN-5677.005.patch
>
>
> In trunk, there is no maximum number of retries that I see.  It appears the 
> connection will be retried forever, with the active never figuring out it's 
> no longer active.  In my testing, the active-active state lasted almost 2 
> hours with no sign of stopping before I killed it.  The solution appears to 
> be to cap the number of retries or amount of time spent retrying.
> This issue is significant because of the asynchronous nature of job 
> submission.  If the active doesn't know it's not active, it will buffer up 
> job submissions until it finally realizes it has become the standby. Then it 
> will fail all the job submissions in bulk. In high-volume workflows, that 
> behavior can create huge mass job failures.
> This issue is also important because the node managers will not fail over to 
> the new active until the old active realizes it's the standby.  Workloads 
> submitted after the old active loses contact with ZK will therefore fail to 
> be executed regardless of which RM the clients contact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569074#comment-15569074
 ] 

Rohith Sharma K S commented on YARN-5715:
-

bq. As we are setting entity ID prefix to 0 and hence will carry it always, why 
not change it to primitive long instead of Long
Fair point!! An object is only mandatory when we want to use it in Collections. 
I do not have a strong opinion on using Long here; I can change it to the 
primitive type long.

> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-12 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569043#comment-15569043
 ] 

Varun Saxena commented on YARN-5715:


Sorry, I hadn't read the description. The read part will be done in YARN-5585, 
so we do not need to send the id prefix back in the response as part of this 
JIRA.

> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-12 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15569033#comment-15569033
 ] 

Varun Saxena commented on YARN-5715:


Thanks [~rohithsharma] for the patch.

Since we are setting the entity ID prefix to 0 by default and hence will always 
carry it, why not change it to a primitive long instead of Long? That will ward 
off unnecessary boxing/unboxing. Is the Long version required somewhere in the 
code flow? If yes, in TimelineEntity we should use Long.valueOf instead of new 
Long to initialize it.

Moreover, in the generic entity reader we need to extract the entity ID prefix 
from the row key and set it back in the response.
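Purely to illustrate the boxing point (the field and accessor names below are 
just an assumed shape of the entity API, not the actual patch code):

{code:java}
// Option 1: primitive long -- no boxing, and the default is naturally 0.
class TimelineEntitySketchA {
  private long idPrefix;                     // defaults to 0L

  long getIdPrefix() { return idPrefix; }
  void setIdPrefix(long idPrefix) { this.idPrefix = idPrefix; }
}

// Option 2: if a boxed Long is really needed somewhere in the flow,
// prefer the cached factory method over the constructor.
class TimelineEntitySketchB {
  private Long idPrefix = Long.valueOf(0);   // not: new Long(0)

  Long getIdPrefix() { return idPrefix; }
}
{code}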

> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-12 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568985#comment-15568985
 ] 

Rohith Sharma K S commented on YARN-5715:
-

Pending:
# Need to update the same behavior for FileSystemTimelineWriterImpl as well.

> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5683) Support specifying storage type for per-application local dirs

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568971#comment-15568971
 ] 

Hadoop QA commented on YARN-5683:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 7s 
{color} | {color:red} root: The patch generated 7 new + 940 unchanged - 9 fixed 
= 947 total (was 949) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 11s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 49s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 57s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 103m 34s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 205m 0s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832871/YARN-5683-3.patch |
| JIRA Issue | YARN-5683 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 76b8f8dfda17 3.13.0-36-lowlatency #63-Ubuntu SMP 

[jira] [Updated] (YARN-5715) introduce entity prefix for return and sort order

2016-10-12 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5715:

Attachment: YARN-5715-YARN-5355.02.patch

Updated the patch to reflect the write path. The patch has the following 
changes:
# Added *idPrefix* to the TimelineEntity object with a default value of 0. 
# If a user tries to set idPrefix to null (to mess things up), the collector 
will take care of falling back to 0 before encoding the row key.
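A minimal sketch of that fallback, with hypothetical helper and key names (the 
real row-key encoding lives in the collector/writer code):

{code:java}
class EntityRowKeySketch {
  static final long DEFAULT_ID_PREFIX = 0L;

  // If the client left idPrefix unset (null), fall back to 0 so the
  // row key is always encoded with a valid prefix.
  static long effectiveIdPrefix(Long idPrefixFromEntity) {
    return idPrefixFromEntity == null
        ? DEFAULT_ID_PREFIX : idPrefixFromEntity.longValue();
  }

  // Hypothetical row-key layout: the prefix precedes the entity id.
  static String rowKeySuffix(Long idPrefix, String entityId) {
    return effectiveIdPrefix(idPrefix) + "!" + entityId;
  }
}
{code}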

> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5723) Proxy Server unresponsive after printing 'accessing unchecked' multiple times

2016-10-12 Thread JIRA
Zoltán Zvara created YARN-5723:
--

 Summary: Proxy Server unresponsive after printing 'accessing 
unchecked' multiple times
 Key: YARN-5723
 URL: https://issues.apache.org/jira/browse/YARN-5723
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
 Environment: Latest alpha2-SNAPSHOT running in Ubuntu 16.04 Docker 
containers.
Reporter: Zoltán Zvara


After the first request through the Proxy Server to the Spark 2.0.0 web UI, the 
following log entries appear:

{{2016-10-12 14:43:54,998 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,007 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,015 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,024 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,033 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,042 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,051 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis
2016-10-12 14:43:55,059 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476283324979_0002 owned by Ehnalis}}

The Proxy Server becomes unresponsive after these messages. If the Proxy Server 
is not running on a standalone port but is embedded with the RM, the default 
YARN web UI becomes unresponsive as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568831#comment-15568831
 ] 

Zoltán Zvara edited comment on YARN-1890 at 10/12/16 2:10 PM:
--

I'm having the same issue with the latest alpha2-SNAPSHOT.

{{2016-10-12 14:05:57,499 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,507 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,514 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,521 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,529 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,536 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,544 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,551 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,558 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,566 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,573 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,581 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,589 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,596 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,603 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,611 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,618 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,626 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,634 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,642 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,649 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 

[jira] [Commented] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568831#comment-15568831
 ] 

Zoltán Zvara commented on YARN-1890:


I'm having the same issue with the latest alpha2-SNAPSHOT.

{{
2016-10-12 14:05:57,499 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,507 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,514 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,521 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,529 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,536 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,544 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,551 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,558 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,566 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,573 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,581 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,589 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,596 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,603 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,611 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,618 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,626 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,634 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,642 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,649 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 

[jira] [Comment Edited] (YARN-1890) Too many unnecessary logs are logged while accessing applicationMaster web UI.

2016-10-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568831#comment-15568831
 ] 

Zoltán Zvara edited comment on YARN-1890 at 10/12/16 2:11 PM:
--

I'm having the same issue with the latest alpha2-SNAPSHOT.

2016-10-12 14:05:57,499 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,507 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,514 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,521 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,529 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,536 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,544 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,551 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,558 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,566 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,573 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,581 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,589 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,596 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,603 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,611 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,618 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,626 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,634 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,642 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 
application_1476280084610_0003 owned by Ehnalis
2016-10-12 14:05:57,649 INFO 
org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet: dr.who is accessing 
unchecked http://10.1.1.240:4040 which is the app master GUI of 

[jira] [Comment Edited] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568792#comment-15568792
 ] 

Arun Suresh edited comment on YARN-4597 at 10/12/16 1:56 PM:
-

Thanks for taking a look [~jianhe],

bq. Wondering why KillWhileExitingTransition is added..
I had put it in there for debugging something... Left it there since I thought 
it was harmless... but, yeah, it looks like it does override the exit code. 
Will remove it. Good catch.

* w.r.t. {{ContainerState#SCHEDULED}}: Actually, I think we should expose this. 
We currently club NEW, LOCALIZING, LOCALIZED, etc. into RUNNING, but the 
container is not actually running, which is misleading. SCHEDULED implies 
that some of the container's dependencies (resources for localization + some 
internal queuing/scheduling policy) have not yet been met.
Prior to this, YARN-2877 had introduced the QUEUED return state, which would be 
visible to applications if queuing was enabled. This patch technically just 
renames QUEUED to SCHEDULED. Also, all containers will go through the SCHEDULED 
state, not just the opportunistic ones (although for guaranteed containers it 
will just be a pass-through state).

Another thing I was hoping to get some input on: currently, the 
{{ContainerScheduler}} runs on the same thread as the ContainerManager's 
AsyncDispatcher, started by the ContainerManager, and the scheduler is 
triggered only by events. I was wondering if there is any merit in pushing 
these events into a blocking queue as they arrive and having a separate thread 
take care of them. That would preserve the serial nature of operation (and 
thereby keep the code simple by not needing synchronized collections) and would 
not hold up the dispatcher from delivering other events while the scheduler is 
scheduling.
A minor disadvantage is that the NM would probably consume a thread that for 
the most part will be blocked on the queue. That thread could otherwise be used 
by one of the containers.


was (Author: asuresh):
Thanks for taking a look [~jianhe],

bq. Wondering why KillWhileExitingTransition is added..
I had put it in there for debugging something... Left it there since it thought 
its harmless... but, yeah looks like it does over-ride the exitcode. Will 
remove it. Good catch.

* w.r.t {{ContainerState#SCHEDULED}} : Actually, I think we should expose this. 
We currently club NEW, LOCALIZING, LOCALIZED etc. into RUNNING, but the 
container is actually not running, and is thus misleading. SCHEDULED implies 
that some of the containers dependencies (resources for localization + some 
internal queuing/scheduling policy) have not yet been met.
Prior to this, YARN-2877 had introduced the QUEUED return state. This would be 
visible to applications, if Queuing was enabled. This patch technically just 
renames QUEUED to SCHEDULED. Also, all containers will go thru the SCHEDULED 
state, not just the opportunistic ones (although, for guaranteed containers 
this will just be a pass-thru state)

Another thing I was hoping for some input was, currently, the 
{{ContainerScheduler}} runs in the same thread as the ContainerManager's 
AsyncDispatcher started by the ContainerManager. Also, the Scheduler is 
triggered only by events. I was wondering if there is any merit pushing these 
events into a blocking queue as they arrive and have a separate thread take 
care of them. This will preserve the serial nature of operation (and thereby 
keep the code simple by not needing synchronized collections) and will not hold 
up the dispatcher from delivering other events while the scheduler is 
scheduling.
A minor disadvantage, is that the NM will probably consume a thread that for 
the most part will be blocked on the queue. This thread could be used by one of 
the containers.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-12 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568792#comment-15568792
 ] 

Arun Suresh commented on YARN-4597:
---

Thanks for taking a look [~jianhe],

bq. Wondering why KillWhileExitingTransition is added..
I had put it in there for debugging something... Left it there since I thought 
it was harmless... but, yeah, it looks like it does override the exit code. 
Will remove it. Good catch.

* w.r.t. {{ContainerState#SCHEDULED}}: Actually, I think we should expose this. 
We currently club NEW, LOCALIZING, LOCALIZED, etc. into RUNNING, but the 
container is not actually running, which is misleading. SCHEDULED implies 
that some of the container's dependencies (resources for localization + some 
internal queuing/scheduling policy) have not yet been met.
Prior to this, YARN-2877 had introduced the QUEUED return state, which would be 
visible to applications if queuing was enabled. This patch technically just 
renames QUEUED to SCHEDULED. Also, all containers will go through the SCHEDULED 
state, not just the opportunistic ones (although for guaranteed containers it 
will just be a pass-through state).

Another thing I was hoping to get some input on: currently, the 
{{ContainerScheduler}} runs on the same thread as the ContainerManager's 
AsyncDispatcher, started by the ContainerManager, and the scheduler is 
triggered only by events. I was wondering if there is any merit in pushing 
these events into a blocking queue as they arrive and having a separate thread 
take care of them. That would preserve the serial nature of operation (and 
thereby keep the code simple by not needing synchronized collections) and would 
not hold up the dispatcher from delivering other events while the scheduler is 
scheduling.
A minor disadvantage is that the NM would probably consume a thread that for 
the most part will be blocked on the queue. That thread could otherwise be used 
by one of the containers.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5708) Implement APIs to get resource profiles from the RM

2016-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15568759#comment-15568759
 ] 

Hadoop QA commented on YARN-5708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
13s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 4s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
26s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
40s {color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} YARN-3926 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: The patch generated 33 new + 314 unchanged - 0 
fixed = 347 total (was 314) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 24s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 3s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 41m 45s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 42s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 54s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 232m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
|   | hadoop.mapred.TestMRCJCFileOutputCommitter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
