[jira] [Updated] (YARN-2009) N B

2016-10-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2009:
--
Summary: N  B   (was: Priority support for preemption in 
ProportionalCapacityPreemptionPolicy)

> N  B 
> -
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5679) TestAHSWebServices is failing

2016-10-14 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5679:

Attachment: YARN-5679.02.patch

> TestAHSWebServices is failing
> -
>
> Key: YARN-5679
> URL: https://issues.apache.org/jira/browse/YARN-5679
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5679.01.patch, YARN-5679.02.patch
>
>
> TestAHSWebServices.testContainerLogsForFinishedApps is failing.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/176/testReport/
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.createContainerLogInLocalDir(TestAHSWebServices.java:675)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:581)
> {noformat}
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:519)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5679) TestAHSWebServices is failing

2016-10-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577413#comment-15577413
 ] 

Akira Ajisaka commented on YARN-5679:
-

Thanks [~miklos.szeg...@cloudera.com] for the comment!

bq. setUMask will change the umask in the Configuration object, which may be 
picked up by any other thread in the process using the same configuration 
object before it is reverted.
Agreed.

bq. One option is to create a private configuration object to change the umask; the other is to apply both umasks to a permission object and create the file with this permission object set.
I chose the latter option because creating a private configuration object for 
each log costs too much.
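
For context, here is a minimal sketch of that second option; the names and the helper are illustrative assumptions, not the actual patch. The effective permission is computed once and passed to {{create()}}, so the shared Configuration is never modified.
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class LogFileCreationSketch {
  // Hypothetical helper: create a log file with an explicit permission,
  // combining the desired mode with the configured umask up front.
  static FSDataOutputStream createLogFile(FileSystem fs, Path logFile,
      Configuration conf) throws IOException {
    FsPermission effective =
        new FsPermission((short) 0640).applyUMask(FsPermission.getUMask(conf));
    return fs.create(logFile, effective, true,
        conf.getInt("io.file.buffer.size", 4096),
        fs.getDefaultReplication(logFile), fs.getDefaultBlockSize(logFile),
        null);
  }
}
{code}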


> TestAHSWebServices is failing
> -
>
> Key: YARN-5679
> URL: https://issues.apache.org/jira/browse/YARN-5679
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5679.01.patch, YARN-5679.02.patch
>
>
> TestAHSWebServices.testContainerLogsForFinishedApps is failing.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/176/testReport/
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.createContainerLogInLocalDir(TestAHSWebServices.java:675)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:581)
> {noformat}
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:519)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577341#comment-15577341
 ] 

Hadoop QA commented on YARN-5145:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 56s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b17 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833488/YARN-5145-YARN-3368.05.patch
 |
| JIRA Issue | YARN-5145 |
| Optional Tests |  asflicense  |
| uname | Linux c5e0c664e5cb 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 164a3c2 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13402/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> YARN-5145-YARN-3368.05.patch, newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5145:
--
Attachment: YARN-5145-YARN-3368.05.patch

Thanks [~Sreenath]

Addressed the comments. However, I think we can compare only {{localhost/0.0.0.0}} for now; handling the 127.*.*.* family of addresses may not be very clean. Thoughts? /cc [~leftnoteasy]
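
As a side note, the 127.*.*.* family can be detected without string matching by resolving the host and asking the JDK. This is only an illustrative sketch, not the patch itself:
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LoopbackCheck {
  // True if the host resolves to the wildcard (0.0.0.0) or a loopback address.
  public static boolean isLocalOnly(String host) {
    try {
      InetAddress addr = InetAddress.getByName(host);
      return addr.isAnyLocalAddress() || addr.isLoopbackAddress();
    } catch (UnknownHostException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isLocalOnly("localhost"));  // true
    System.out.println(isLocalOnly("0.0.0.0"));    // true
    System.out.println(isLocalOnly("127.0.0.1"));  // true
  }
}
{code}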

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> YARN-5145-YARN-3368.05.patch, newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577129#comment-15577129
 ] 

Sunil G edited comment on YARN-2009 at 10/15/16 1:55 AM:
-

Thanks [~eepayne]. You are correct.
I also ran into a similar issue yesterday and found that the root cause is that we subtract {{tmpApp.getAMUsed()}} in all cases. Ideally we should deduct it only when one app's resources are fully needed to meet the preemption demand from other higher-priority apps.

I found a solution along the following lines:
{noformat}
  if (Resources.lessThan(rc, clusterResource,
Resources.subtract(tmpApp.getUsed(), preemtableFromApp),
tmpApp.getAMUsed())) {
Resources.subtractFrom(preemtableFromApp, tmpApp.getAMUsed());
  }
{noformat}

I think this can be placed in {{FifoIntraQueuePreemptionPlugin.validateOutSameAppPriorityFromDemand}}, so that we deduct AMUsed only when one app's resources are fully needed for preemption; otherwise we need not consider it. I am also preparing UTs to cover these cases.


was (Author: sunilg):
Thanks [~eepayne]
I also ran into a similar point yesterday and found root cause is because we 
subtract {{tmpApp.getAMUsed()}}. 

I found a solution to have something similar to 
{{noformat}}
  if (Resources.lessThan(rc, clusterResource,
Resources.subtract(tmpApp.getUsed(), preemtableFromApp),
tmpApp.getAMUsed())) {
Resources.subtractFrom(preemtableFromApp, tmpApp.getAMUsed());
  }
{{noformat}}

I think this can be placed in 
{{FifoIntraQueuePreemptionPlugin.validateOutSameAppPriorityFromDemand}}, so we 
can ensure that we will deduct AMUsed only when one app's resource is needed 
fully for preemption. Else we may not needed to consider the same. I am 
preparing for UTs also to cover this cases.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577129#comment-15577129
 ] 

Sunil G commented on YARN-2009:
---

Thanks [~eepayne]
I also ran into a similar issue yesterday and found that the root cause is that we subtract {{tmpApp.getAMUsed()}}.

I found a solution along the following lines:
{noformat}
  if (Resources.lessThan(rc, clusterResource,
Resources.subtract(tmpApp.getUsed(), preemtableFromApp),
tmpApp.getAMUsed())) {
Resources.subtractFrom(preemtableFromApp, tmpApp.getAMUsed());
  }
{noformat}

I think this can be placed in {{FifoIntraQueuePreemptionPlugin.validateOutSameAppPriorityFromDemand}}, so that we deduct AMUsed only when one app's resources are fully needed for preemption; otherwise we need not consider it. I am also preparing UTs to cover these cases.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-14 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577111#comment-15577111
 ] 

Rohith Sharma K S commented on YARN-5739:
-

Thanks Li Lu for raising this JIRA. This was a major concern for many folks (I missed it too in meetings) while I was briefing them on the REST end points available in ATSv2. They were expecting a REST end point along the lines of */apps/appid/entities*.

One option would be to maintain a separate mapping table, application -> list-of-all-entity-types. If we instead scan the entity table, the reader has to filter the results, since the scan covers all of the application's row keys. We may need to discuss supporting this API and its possible solutions further.
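
To illustrate the scanning option only, here is a rough sketch. The table name, row-key layout, and the {{parseEntityType}} helper are hypothetical and do not reflect the actual ATSv2 schema code:
{code}
import java.io.IOException;
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class EntityTypeScanSketch {
  // Collect the distinct entity types found under one application's row-key prefix.
  public static Set<String> listEntityTypes(Connection conn, byte[] appRowPrefix)
      throws IOException {
    Set<String> types = new TreeSet<>();
    Scan scan = new Scan();
    scan.setRowPrefixFilter(appRowPrefix);     // restrict to this application's rows
    scan.setFilter(new FirstKeyOnlyFilter());  // row keys alone are enough here
    try (Table table = conn.getTable(TableName.valueOf("timelineservice.entity"));
         ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        types.add(parseEntityType(r.getRow()));
      }
    }
    return types;
  }

  // Placeholder: real row keys are encoded/decoded by the ATSv2 schema classes.
  private static String parseEntityType(byte[] rowKey) {
    return Bytes.toString(rowKey);
  }
}
{code}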

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application; in fact, simply dumping out the 
> available timeline entities (with proper pagination, of course) would be pretty 
> helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is a way to list all available entity types within one 
> application. The challenge is that we're not storing this data per 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5740) Add a new field in Slider status output - lifetime (remaining)

2016-10-14 Thread Gour Saha (JIRA)
Gour Saha created YARN-5740:
---

 Summary: Add a new field in Slider status output - lifetime 
(remaining)
 Key: YARN-5740
 URL: https://issues.apache.org/jira/browse/YARN-5740
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha
 Fix For: yarn-native-services


With YARN-5735, the REST service now sets a lifetime on the application during 
submission (YARN-4205 exposed application lifetime support). Slider status now 
needs to expose this field so that the REST service can return it in its GET 
response. Note that the lifetime value the GET response intends to return is the 
remaining lifetime of the application.

There is an ongoing discussion in YARN-4206 about whether the lifetime value 
returned in the Application Report will be the remaining lifetime (at the time 
of the request). Until that is finalized, the lifetime value might carry 
different meanings, but having the lifetime field in the status output is a 
good start.
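
Illustrative only (the field names are assumptions, not the Slider status schema): the remaining lifetime a GET response would report is the configured lifetime minus the time elapsed since submission, floored at zero.
{code}
public final class LifetimeSketch {
  // Remaining lifetime in seconds, never negative.
  static long remainingLifetimeSecs(long submitTimeMillis, long lifetimeSecs) {
    long elapsedSecs = (System.currentTimeMillis() - submitTimeMillis) / 1000;
    return Math.max(0, lifetimeSecs - elapsedSecs);
  }
}
{code}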



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577056#comment-15577056
 ] 

Rohith Sharma K S commented on YARN-5699:
-

Yes, the current patch is based on trunk. Sure, I will upload a patch for the 
YARN-5355 branch. 

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch, 0003-YARN-5699.patch
>
>
> Currently, all the container information is published in two places: some of it 
> in the entity info (top level) and some in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field. 
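
A hedged sketch of what the quoted description proposes, using the ATSv2 entity API; the info key names here are illustrative assumptions, not final constants:
{code}
import org.apache.hadoop.yarn.api.records.timelineservice.ContainerEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;

public class ContainerEntitySketch {
  // Publish general container facts as top-level entity info; keep the
  // finished event lean instead of duplicating those fields in its info map.
  static ContainerEntity buildEntity(String containerId, int exitStatus,
      String state, long createdTime, long finishedTime) {
    ContainerEntity entity = new ContainerEntity();
    entity.setId(containerId);
    entity.setCreatedTime(createdTime);
    entity.addInfo("YARN_CONTAINER_EXIT_STATUS", exitStatus);
    entity.addInfo("YARN_CONTAINER_STATE", state);
    entity.addInfo("YARN_CONTAINER_FINISHED_TIME", finishedTime);

    TimelineEvent finished = new TimelineEvent();
    finished.setId("YARN_CONTAINER_FINISHED");
    finished.setTimestamp(finishedTime);
    entity.addEvent(finished);
    return entity;
  }
}
{code}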



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5690) Integrate native services modules into maven build

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577018#comment-15577018
 ] 

Hadoop QA commented on YARN-5690:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
35s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 50s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
8s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 5m 5s 
{color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
16s {color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 51s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 317 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 32s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 5 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 59s 
{color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 39s 
{color} | {color:red} root: The patch generated 1 new + 584 unchanged - 6 fixed 
= 585 total (was 590) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 4m 11s 
{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
15s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 8s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-assemblies hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| 

[jira] [Commented] (YARN-5719) Enforce a C standard for native container-executor

2016-10-14 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576958#comment-15576958
 ] 

Allen Wittenauer commented on YARN-5719:


I'll try to run this through a few different OSes when I get a chance.  Offhand 
it looks fine, but want to make sure FreeBSD and Illumos work.

> Enforce a C standard for native container-executor
> --
>
> Key: YARN-5719
> URL: https://issues.apache.org/jira/browse/YARN-5719
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Chris Douglas
> Attachments: YARN-5719.000.patch
>
>
> The {{container-executor}} build should declare the C standard it uses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5679) TestAHSWebServices is failing

2016-10-14 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576908#comment-15576908
 ] 

Miklos Szegedi commented on YARN-5679:
--

I think there is an issue there. setUMask will change the umask in the 
Configuration object, which may be picked up by any other thread in the process 
using the same configuration object before it is reverted.
One option is to create a private configuration object to change the umask; the other is to apply both umasks to a permission object and create the file with this permission object set.
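
To make the concern concrete, here is an illustrative sketch (not the test's actual code) of why toggling the umask on a shared Configuration is racy:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class SharedUmaskRaceSketch {
  // Any other thread creating files from the same Configuration between
  // setUMask() and the restore in the finally block can pick up the relaxed umask.
  static void writeWithRelaxedUmask(Configuration sharedConf, Runnable writeFile) {
    FsPermission saved = FsPermission.getUMask(sharedConf);
    FsPermission.setUMask(sharedConf, new FsPermission((short) 0000));
    try {
      writeFile.run();
    } finally {
      FsPermission.setUMask(sharedConf, saved);
    }
  }
}
{code}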

> TestAHSWebServices is failing
> -
>
> Key: YARN-5679
> URL: https://issues.apache.org/jira/browse/YARN-5679
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5679.01.patch
>
>
> TestAHSWebServices.testContainerLogsForFinishedApps is failing.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/176/testReport/
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.createContainerLogInLocalDir(TestAHSWebServices.java:675)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:581)
> {noformat}
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:519)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4266) Allow whitelisted users to disable user re-mapping/squashing when launching docker containers

2016-10-14 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576903#comment-15576903
 ] 

Sidharta Seethana commented on YARN-4266:
-

Usermod seems to be of limited use. From usermod's man page : 

{code}
-u, --uid UID
   The new numerical value of the user's ID.

   This value must be unique, unless the -o option is used. The value 
must be non-negative.

   The user's mailbox, and any files which the user owns and which are 
located in the user's home directory will have the file user ID changed 
automatically.

   The ownership of files outside of the user's home directory must be 
fixed manually.

   No checks will be performed with regard to the UID_MIN, UID_MAX, 
SYS_UID_MIN, or SYS_UID_MAX from /etc/login.defs.
{code}

If nothing outside the user's home directory is updated, isn't this likely to 
break many images that use custom/non-root users?



> Allow whitelisted users to disable user re-mapping/squashing when launching 
> docker containers
> -
>
> Key: YARN-4266
> URL: https://issues.apache.org/jira/browse/YARN-4266
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-4266-branch-2.8.001.patch, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v2.pdf, 
> YARN-4266_Allow_whitelisted_users_to_disable_user_re-mapping_v3.pdf
>
>
> Docker provides a mechanism (the --user switch) that enables us to specify 
> the user the container processes should run as. We use this mechanism today 
> when launching docker containers . In non-secure mode, we run the docker 
> container based on 
> `yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user` and in 
> secure mode, as the submitting user. However, this mechanism breaks down with 
> a large number of 'pre-created' images which don't necessarily have the users 
> available within the image. Examples of such images include shared images 
> that need to be used by multiple users. We need a way in which we can allow a 
> pre-defined set of users to run containers based on existing images, without 
> using the --user switch. There are some implications of disabling this user 
> squashing that we'll need to work through : log aggregation, artifact 
> deletion etc.,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576895#comment-15576895
 ] 

Sangjin Lee commented on YARN-5699:
---

The latest patch LGTM. I'll wait until tomorrow to give folks time to chime in 
before I commit.

[~rohithsharma], just to be clear, is the intent to commit this to trunk? If I 
take the latest patch 
(https://issues.apache.org/jira/secure/attachment/12833383/0003-YARN-5699.patch)
 it does not apply cleanly to the YARN-5355 branch. Could you please upload the 
corresponding YARN-5355 patch?

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch, 0003-YARN-5699.patch
>
>
> Currently, all the container information is published in two places: some of it 
> in the entity info (top level) and some in the event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime, and 
> finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5690) Integrate native services modules into maven build

2016-10-14 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5690:
-
Attachment: YARN-5690-yarn-native-services.003.patch

> Integrate native services modules into maven build
> --
>
> Key: YARN-5690
> URL: https://issues.apache.org/jira/browse/YARN-5690
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5690-yarn-native-services.001.patch, 
> YARN-5690-yarn-native-services.002.patch, 
> YARN-5690-yarn-native-services.003.patch
>
>
> The yarn dist assembly should include jars for the new modules as well as 
> their new dependencies. We may want to create new lib directories in the 
> tarball for the dependencies of the slider-core and services API modules, to 
> avoid adding these dependencies into the general YARN classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5391) FederationPolicy implementations (tieing together RouterFederationPolicy and AMRMProxyFederationPolicy)

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576802#comment-15576802
 ] 

Hadoop QA commented on YARN-5391:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
15s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 13 new + 0 unchanged - 0 fixed = 13 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  Call to Class.equals(String) in 
org.apache.hadoop.yarn.server.federation.policies.BasePolicyManager.internalPolicyGetter(FederationPolicyInitializationContext,
 ConfigurableFederationPolicy, String)  At 
BasePolicyManager.java:ConfigurableFederationPolicy, String)  At 
BasePolicyManager.java:[line 136] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833462/YARN-5391-YARN-2915.05.patch
 |
| JIRA Issue | YARN-5391 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 522e6c33f977 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / def8211 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13400/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
| findbugs | 

[jira] [Updated] (YARN-5654) Not be able to run SLS with FairScheduler

2016-10-14 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5654:
-
Attachment: (was: YARN-5654.1.patch)

> Not be able to run SLS with FairScheduler
> -
>
> Key: YARN-5654
> URL: https://issues.apache.org/jira/browse/YARN-5654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5654.1.patch
>
>
> With the config:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs
> And data:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data
> Capacity Scheduler runs fine, but Fair Scheduler cannot be successfully run. 
> It reports NPE from RMAppAttemptImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5654) Not be able to run SLS with FairScheduler

2016-10-14 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5654:
-
Attachment: YARN-5654.1.patch

> Not be able to run SLS with FairScheduler
> -
>
> Key: YARN-5654
> URL: https://issues.apache.org/jira/browse/YARN-5654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5654.1.patch
>
>
> With the config:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs
> And data:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data
> Capacity Scheduler runs fine, but Fair Scheduler cannot be successfully run. 
> It reports NPE from RMAppAttemptImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5654) Not be able to run SLS with FairScheduler

2016-10-14 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5654:
-
Attachment: (was: YARN-5654.1.patch)

> Not be able to run SLS with FairScheduler
> -
>
> Key: YARN-5654
> URL: https://issues.apache.org/jira/browse/YARN-5654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5654.1.patch
>
>
> With the config:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs
> And data:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data
> Capacity Scheduler runs fine, but Fair Scheduler cannot be successfully run. 
> It reports NPE from RMAppAttemptImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5654) Not be able to run SLS with FairScheduler

2016-10-14 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5654:
-
Attachment: YARN-5654.1.patch

> Not be able to run SLS with FairScheduler
> -
>
> Key: YARN-5654
> URL: https://issues.apache.org/jira/browse/YARN-5654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5654.1.patch
>
>
> With the config:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs
> And data:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data
> Capacity Scheduler runs fine, but Fair Scheduler cannot be successfully run. 
> It reports NPE from RMAppAttemptImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5654) Not be able to run SLS with FairScheduler

2016-10-14 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5654:
-
Attachment: YARN-5654.1.patch

> Not be able to run SLS with FairScheduler
> -
>
> Key: YARN-5654
> URL: https://issues.apache.org/jira/browse/YARN-5654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5654.1.patch
>
>
> With the config:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs
> And data:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data
> Capacity Scheduler runs fine, but Fair Scheduler cannot be successfully run. 
> It reports NPE from RMAppAttemptImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5654) Not be able to run SLS with FairScheduler

2016-10-14 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576788#comment-15576788
 ] 

Wangda Tan commented on YARN-5654:
--

Hi [~yufeigu],

I had just finished the FS patch for SLS before realizing this JIRA is already 
assigned to you; sorry for that.

I have uploaded the ver.1 patch. If you have the interest/bandwidth to move this 
forward, please feel free to keep it assigned to you, and I can help with the review.

> Not be able to run SLS with FairScheduler
> -
>
> Key: YARN-5654
> URL: https://issues.apache.org/jira/browse/YARN-5654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
>
> With the config:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs
> And data:
> https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data
> Capacity Scheduler runs fine, but Fair Scheduler cannot be successfully run. 
> It reports NPE from RMAppAttemptImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-10-14 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576780#comment-15576780
 ] 

Li Lu commented on YARN-5638:
-

Thanks [~sjlee0]! Let me rebase my patch for YARN-3359 and update there. 

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Fix For: YARN-5355
>
> Attachments: YARN-5638-YARN-5355.v4.patch, 
> YARN-5638-YARN-5355.v5.patch, YARN-5638-trunk.v1.patch, 
> YARN-5638-trunk.v2.patch, YARN-5638-trunk.v3.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-10-14 Thread Li Lu (JIRA)
Li Lu created YARN-5739:
---

 Summary: Provide timeline reader API to list available timeline 
entity types for one application
 Key: YARN-5739
 URL: https://issues.apache.org/jira/browse/YARN-5739
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelinereader
Reporter: Li Lu
Assignee: Li Lu


Right now we only show a part of the available timeline entity data in the new YARN 
UI. However, some data (especially library-specific data) cannot be queried 
through the web UI. It would be appealing for the UI to provide an 
"entity browser" for each YARN application; in fact, simply dumping out the 
available timeline entities (with proper pagination, of course) would be pretty 
helpful for UI users. 

On the timeline side, we're not far from this goal. Right now I believe the 
only thing missing is a way to list all available entity types within one 
application. The challenge is that we're not storing this data per application, 
but given that this kind of call is relatively rare (compared to writes and 
updates), we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5391) FederationPolicy implementations (tieing together RouterFederationPolicy and AMRMProxyFederationPolicy)

2016-10-14 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5391:
---
Attachment: YARN-5391-YARN-2915.05.patch

Initial (rough) version of the patch, rebased after the YARN-5325 commit. It 
likely needs some polish.

> FederationPolicy implementations (tieing together RouterFederationPolicy and 
> AMRMProxyFederationPolicy)
> ---
>
> Key: YARN-5391
> URL: https://issues.apache.org/jira/browse/YARN-5391
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5391-YARN-2915.04.patch, 
> YARN-5391-YARN-2915.05.patch, YARN-5391.01.patch, YARN-5391.02.patch, 
> YARN-5391.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-14 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576735#comment-15576735
 ] 

Eric Payne commented on YARN-2009:
--

Hi [~sunilg]. I would suggest doing the following:
{code:title=current 
FifoIntraQueuePreemptionPlugin#calculateToBePreemptedResourcePerApp}
  Resources.subtractFrom(preemtableFromApp, tmpApp.getAMUsed());
{code}

{code:title=suggested 
FifoIntraQueuePreemptionPlugin#calculateToBePreemptedResourcePerApp}
  if (Resources.lessThan(rc, clusterResource,
Resources.subtract(tmpApp.getUsed(), preemtableFromApp),
tmpApp.getAMUsed())) {
Resources.subtractFrom(preemtableFromApp, tmpApp.getAMUsed());
  }
{code}

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-10-14 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5638:
--
Hadoop Flags: Reviewed

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Fix For: YARN-5355
>
> Attachments: YARN-5638-YARN-5355.v4.patch, 
> YARN-5638-YARN-5355.v5.patch, YARN-5638-trunk.v1.patch, 
> YARN-5638-trunk.v2.patch, YARN-5638-trunk.v3.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor

2016-10-14 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576609#comment-15576609
 ] 

Chris Douglas commented on YARN-5704:
-

[~vvasudev] would you mind taking a look at YARN-5719 so we can enforce C99 (or 
whatever) for CE?

> Provide config knobs to control enabling/disabling new/work in progress 
> features in container-executor
> --
>
> Key: YARN-5704
> URL: https://issues.apache.org/jira/browse/YARN-5704
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5704-branch-2.8.001.patch, YARN-5704.001.patch
>
>
> Provide a mechanism to enable/disable Docker and TC (Traffic Control) 
> functionality at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-10-14 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576570#comment-15576570
 ] 

Li Lu commented on YARN-5638:
-

[~sjlee0] Let's target this to 5355 branch only? I don't think there is an 
urgent need to fix anything for our existing code in trunk... 

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-YARN-5355.v4.patch, 
> YARN-5638-YARN-5355.v5.patch, YARN-5638-trunk.v1.patch, 
> YARN-5638-trunk.v2.patch, YARN-5638-trunk.v3.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active and rebuild collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-10-14 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576553#comment-15576553
 ] 

Carlo Curino commented on YARN-5325:


Thanks [~subru] for reviewing and checkstyle fixes.

> Stateless ARMRMProxy policies implementation
> 
>
> Key: YARN-5325
> URL: https://issues.apache.org/jira/browse/YARN-5325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: YARN-2915
>
> Attachments: YARN-5325-YARN-2915.05.patch, 
> YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, 
> YARN-5325-YARN-2915.08.patch, YARN-5325-YARN-2915.09.patch, 
> YARN-5325-YARN-2915.10.patch, YARN-5325-YARN-2915.11.patch, 
> YARN-5325-YARN-2915.12.patch, YARN-5325-YARN-2915.13.patch, 
> YARN-5325-YARN-2915.14.patch, YARN-5325.01.patch, YARN-5325.02.patch, 
> YARN-5325.03.patch, YARN-5325.04.patch
>
>
> This JIRA tracks policies in the AMRMProxy that decide how to forward 
> ResourceRequests, without maintaining substantial state across decisions 
> (e.g., broadcast).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-10-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576555#comment-15576555
 ] 

Sangjin Lee commented on YARN-5638:
---

I'll merge this now to YARN-5355 and its branch-2 branch. [~gtCarrera9], should 
this be merged to trunk?

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-YARN-5355.v4.patch, 
> YARN-5638-YARN-5355.v5.patch, YARN-5638-trunk.v1.patch, 
> YARN-5638-trunk.v2.patch, YARN-5638-trunk.v3.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use  to order collectors 
> for each application in the RM. This timestamp can then be used when a 
> standby RM becomes active to rebuild the collector discovery data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5716) Add global scheduler interface definition and update CapacityScheduler to use it.

2016-10-14 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576530#comment-15576530
 ] 

Jian He commented on YARN-5716:
---

- Why volatile?
{code}
private volatile BlockingQueue>
backlogs = new LinkedBlockingQueue<>();
{code}
- 1 millisecond? Is this intentional?
{code}
// Don't run schedule if we have some pending backlogs already
if (cs.getAsyncSchedulingPendingBacklogs() > 100) {
  Thread.sleep(1);
{code}
- getReservedSchedulerKey vs. getAllocatedSchedulerKey: they look the same?
- allocateContainersToNode: remove the withNodeHeartbeat flag. Make the other 
caller call "allocateContainersToNode(PlacementSet ps" 
directly.
- "allocateContainersToNode(PlacementSet ps": maybe use a 
lookAtSingleNode boolean to fork the code path. Make it a separate method.

- The first if condition is probably not needed.
{code}
if (null != allocated || null != reserved || (null != released && !released
.isEmpty())) {
  List>
  allocationsList = null;
  if (allocated != null) {
allocationsList = new ArrayList<>();
allocationsList.add(allocated);
  }

  List>
  reservationsList = null;
  if (reserved != null) {
reservationsList = new ArrayList<>();
reservationsList.add(reserved);
  }

  return new ResourceCommitRequest<>(allocationsList, reservationsList,
  released);
}
{code}
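
To make this concrete, here is a minimal self-contained sketch of what dropping that outer null check could look like. The ResourceCommitRequest here is a placeholder (the real type parameters were stripped from the quoted snippet), and the sketch assumes callers can cope with a request that carries no allocations or reservations:

{code:title=Sketch only: dropping the outer null check (placeholder types)}
import java.util.ArrayList;
import java.util.List;

public class CommitRequestSketch {

  /** Placeholder for the real ResourceCommitRequest; generics omitted. */
  static final class ResourceCommitRequest {
    final List<Object> allocations;
    final List<Object> reservations;
    final List<Object> released;

    ResourceCommitRequest(List<Object> allocations, List<Object> reservations,
        List<Object> released) {
      this.allocations = allocations;
      this.reservations = reservations;
      this.released = released;
    }
  }

  static ResourceCommitRequest build(Object allocated, Object reserved,
      List<Object> released) {
    // Without the outer guard: if allocated, reserved and released are all
    // null/empty, both lists simply stay null and the request carries no work.
    List<Object> allocationsList = null;
    if (allocated != null) {
      allocationsList = new ArrayList<>();
      allocationsList.add(allocated);
    }

    List<Object> reservationsList = null;
    if (reserved != null) {
      reservationsList = new ArrayList<>();
      reservationsList.add(reserved);
    }

    return new ResourceCommitRequest(allocationsList, reservationsList, released);
  }
}
{code}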

> Add global scheduler interface definition and update CapacityScheduler to use 
> it.
> -
>
> Key: YARN-5716
> URL: https://issues.apache.org/jira/browse/YARN-5716
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5716.001.patch, YARN-5716.002.patch
>
>
> Target of this JIRA:
> - Definition of the interfaces / objects which will be used by global 
> scheduling; these will be shared by different schedulers.
> - Modify CapacityScheduler to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5715) introduce entity prefix for return and sort order

2016-10-14 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576474#comment-15576474
 ] 

Varun Saxena edited comment on YARN-5715 at 10/14/16 9:04 PM:
--

Changes related to TimelineReaderContext should be done in YARN-5585
Should a javadoc be added over the setIdPrefix method to explain that it needs 
to be sent with each entity, instead of having a normal code comment over the 
variable?
Documentation needs to be updated as well.



was (Author: varun_saxena):
Changes related to TimelineReaderContext should be done in YARN-5585
Should a javadoc be added over the setIdPrefix method to explain that it needs 
to be sent with each entity?
Documentation needs to be updated as well.


> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch, YARN-5715-YARN-5355.03.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-14 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576474#comment-15576474
 ] 

Varun Saxena commented on YARN-5715:


Changes related to TimelineReaderContext should be done in YARN-5585
Should a javadoc be added over the setIdPrefix method to explain that it needs 
to be sent with each entity?
Documentation needs to be updated as well.
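
For illustration, here is a sketch of the kind of javadoc being asked for. The class and field are placeholders, and the setter signature is an assumption based on the JIRA description (a long prefix, mandatory on every write, unique only within the application and entity type):

{code:title=Sketch of the requested javadoc (assumed setter shape, placeholder class)}
public class TimelineEntitySketch {
  private long idPrefix;

  /**
   * Sets the prefix that is prepended to the entity id in the entity table
   * row key. The prefix is mandatory and must be supplied by the client on
   * every write of this entity, including updates; otherwise the entity may
   * end up under a different row key. It needs to be unique only within the
   * scope of the application and the entity type.
   *
   * @param entityIdPrefix the prefix value for this entity
   */
  public void setIdPrefix(long entityIdPrefix) {
    this.idPrefix = entityIdPrefix;
  }
}
{code}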


> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch, YARN-5715-YARN-5355.03.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5704) Provide config knobs to control enabling/disabling new/work in progress features in container-executor

2016-10-14 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576471#comment-15576471
 ] 

Sidharta Seethana commented on YARN-5704:
-

Thanks, [~vvasudev]. 

> Provide config knobs to control enabling/disabling new/work in progress 
> features in container-executor
> --
>
> Key: YARN-5704
> URL: https://issues.apache.org/jira/browse/YARN-5704
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5704-branch-2.8.001.patch, YARN-5704.001.patch
>
>
> Provide a mechanism to enable/disable Docker and TC (Traffic Control) 
> functionality at the container-executor level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5717) Add tests for container-executor's is_feature_enabled function

2016-10-14 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576469#comment-15576469
 ] 

Sidharta Seethana commented on YARN-5717:
-

Thanks, [~chris.douglas]. 

> Add tests for container-executor's is_feature_enabled function
> --
>
> Key: YARN-5717
> URL: https://issues.apache.org/jira/browse/YARN-5717
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: yarn
>Affects Versions: 2.8.0, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5717.001.patch, YARN-5717.002.patch
>
>
> YARN-5704 added functionality to disable certain features in 
> container-executor. Most of the changes cannot be tested via 
> container-executor - however, is_feature_enabled could be tested if it were 
> made public. (It is currently static.) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5729) Bug fixes identified during testing

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576422#comment-15576422
 ] 

Hadoop QA commented on YARN-5729:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
35s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 31s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 14 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api:
 The patch generated 1 new + 165 unchanged - 2 fixed = 166 total (was 167) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 7s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 generated 0 new + 5 unchanged - 9 fixed = 5 total (was 14) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 18s 
{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833218/YARN-5729-yarn-native-services.002.patch
 |
| JIRA Issue | YARN-5729 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ef7cf0ffe548 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 1f40ba5 |
| Default Java | 1.8.0_101 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13399/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api.txt
 |
| findbugs | v3.0.0 |
| findbugs | 

[jira] [Comment Edited] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-14 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576368#comment-15576368
 ] 

Sreenath Somarajapuram edited comment on YARN-5145 at 10/14/16 8:16 PM:


- Minor comment: Ember Logger could be used instead of a direct console log.
- Looking at {code}if(address == "0.0.0.0") {{code}, just wondering if we could 
have a condition where the address comes in as localhost or 127.0.0.1.


was (Author: sreenath):
- Minor comment: Ember Logger could be used instead of direct console.
- Looking at {code}if(address == "0.0.0.0") {{code}, just wondering if we could 
have a condition where the address comes in as localhost or 127.0.0.1.

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> newUIInOldRMWebServer.png
>
>
> The existing YARN UI configuration is under the Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/. We should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-14 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576368#comment-15576368
 ] 

Sreenath Somarajapuram commented on YARN-5145:
--

- Minor comment: Ember Logger could be used instead of direct console.
- Looking at {code}if(address == "0.0.0.0") {{code}, just wondering if we could 
have a condition where the address comes in as localhost or 127.0.0.1.

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> newUIInOldRMWebServer.png
>
>
> The existing YARN UI configuration is under the Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/. We should move it to 
> $HADOOP_CONF_DIR like other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-14 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576357#comment-15576357
 ] 

Varun Saxena commented on YARN-5561:


One question though: where will we serve aggregated logs of historical 
apps from? We currently do it from the AHS.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to know about all the entities in an 
> application.
> These URLs are highly required for the web UI.
> The new REST URLs would be: 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576327#comment-15576327
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 32s 
{color} | {color:red} root: The patch generated 16 new + 313 unchanged - 1 
fixed = 329 total (was 314) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
53s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
2 new + 123 unchanged - 0 fixed = 125 total (was 123) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 8s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 30s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 111m 53s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
42s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 217m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833374/0002-YARN-5611.patch |
| JIRA Issue | YARN-5611 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux f58032d576c7 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 

[jira] [Comment Edited] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Jo Desmet (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576247#comment-15576247
 ] 

Jo Desmet edited comment on YARN-5271 at 10/14/16 7:23 PM:
---

Note that such tight coupling should not be required; we should not have that 
library dependency between Spark and YARN. The idea of Jersey is exactly to 
avoid this type of tight coupling between two application frameworks. I still 
think it is a huge drawback not to have the History Server up. I think it is 
1./ Spark's job to have enough decoupling in place, and 2./ YARN's job to evolve 
fast enough to be able to use the latest libraries and benefit from added 
features and improvements. That being said, this fix is most likely a good 
first step.


was (Author: jo_des...@yahoo.com):
Note that such tight coupling should not be required; we should not have that 
library dependency between Spark and YARN. The idea of Jersey is exactly to 
avoid this type of tight coupling between two application frameworks. I still 
think it is a huge drawback not to have the History Server up. I think it is 
1./ Spark's job to have enough decoupling in place, and 2./ YARN's job to evolve 
fast enough to be able to use the latest libraries and benefit from added 
features and improvements.

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
> Attachments: YARN-5271.01.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Jo Desmet (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576247#comment-15576247
 ] 

Jo Desmet commented on YARN-5271:
-

Note that such tight coupling should not be required; we should not have that 
library dependency between Spark and YARN. The idea of Jersey is exactly to 
avoid this type of tight coupling between two application frameworks. I still 
think it is a huge drawback not to have the History Server up. I think it is 
1./ Spark's job to have enough decoupling in place, and 2./ YARN's job to evolve 
fast enough to be able to use the latest libraries and benefit from added 
features and improvements.

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
> Attachments: YARN-5271.01.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-14 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576235#comment-15576235
 ] 

Eric Payne commented on YARN-2009:
--

[~sunilg], I have one concern about how {{TempAppPerPartition#toBePreempted}} 
is calculated. Please consider the use case of where one low priority app is 
taking up all resources and a second high priority app is trying to get started:

_NOTE: I am ignoring {{userLimitResource}} and {{idealAssignedForUser}} because 
in this use case there is only 1 user. I am ignoring {{app.selected}} as well._
{code:title=FifoIntraQueuePreemptionPlugin simplified pseudo code}
calculateIdealAssignedResourcePerApp:
  for each app {
app.idealAssigned = min((app.totalUsed + app.pending), queueTotalUnassigned)
  }

...

calculateToBePreemptedResourcePerApp:
  for each app {
app.toBePreempted = app.totalUsed - app.idealAssigned - app.AMUsed
  }
{code}

For this use case, let's consider that all containers, including the AMs, will 
be 512 MB. {{app1}} is running, consuming all resources in the queue. The RM 
creates {{app2}} and sets its pending value to be 512 MB (for the AM).

Intra-queue preemption does the following during each round:
||AppID||priority||totalUsed MB||AMUsed MB||pending MB||queueTotalUnassigned 
MB||
|2|2|0|0|512|12288|
{noformat:title=FifoIntraQueuePreemptionPlugin#calculateIdealAssignedResourcePerApp}
app2.idealAssigned = min((app2.totalUsed + app2.pending), queueTotalUnassigned)
   = min((0 + 512), 12288)
   = 512
queueTotalUnassigned = queueTotalUnassigned - app2.idealAssigned
 = 12288 - 512
 = 11776
{noformat}
||AppID||priority||totalUsed MB||AMUsed MB||pending MB||queueTotalUnassigned 
MB||
|1|1|12288|512|3584|11776|
{noformat:title=FifoIntraQueuePreemptionPlugin#calculateIdealAssignedResourcePerApp}
app1.idealAssigned = min((app1.totalUsed + app1.pending), queueTotalUnassigned)
   = min((12288 + 3584), 11776)
   = 11776
queueTotalUnassigned = queueTotalUnassigned - app1.idealAssigned
 = 11776 - 11776
 = 0
{noformat}
{noformat:title=FifoIntraQueuePreemptionPlugin#calculateToBePreemptedResourcePerApp}
app1.toBePreempted = app1.totalUsed - app1.idealAssigned - app1.AMUsed
   = 12288 - 11776 - 512
   = 0
{noformat}

In this case, {{app1.toBePreempted}} should be 512 instead of 0. Since it 
remains 0, no container ever gets preempted and {{app2}} can never get its AM 
container.

I'm not yet ready to suggest a solution, but I wanted to point this out while 
we work on it.
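
To make the gap easier to reproduce, here is a small self-contained restatement of the arithmetic above in plain Java. It mirrors the simplified pseudo code, not the actual FifoIntraQueuePreemptionPlugin implementation:

{code:title=Reproducing the calculation (illustration only)}
public class PreemptionMathSketch {
  public static void main(String[] args) {
    long queueTotalUnassigned = 12288;

    // app2: high priority, nothing running yet, 512 MB pending for its AM
    long app2Used = 0, app2Pending = 512;
    long app2IdealAssigned = Math.min(app2Used + app2Pending, queueTotalUnassigned); // 512
    queueTotalUnassigned -= app2IdealAssigned; // 11776

    // app1: low priority, holds the whole queue (12288 MB), AM uses 512 MB
    long app1Used = 12288, app1Pending = 3584, app1AMUsed = 512;
    long app1IdealAssigned = Math.min(app1Used + app1Pending, queueTotalUnassigned); // 11776
    queueTotalUnassigned -= app1IdealAssigned; // 0

    long app1ToBePreempted = app1Used - app1IdealAssigned - app1AMUsed; // 12288 - 11776 - 512 = 0
    System.out.println("app1.toBePreempted = " + app1ToBePreempted
        + " (expected 512 so that app2's AM can start)");
  }
}
{code}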

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5738) Allow services to release/kill specific containers

2016-10-14 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576150#comment-15576150
 ] 

Gour Saha edited comment on YARN-5738 at 10/14/16 6:43 PM:
---

I am thinking out loud to see if the following can be achieved at the framework 
level -

1. Inclusion (Container Ids): Provided an inclusion list of container ids of an 
application, kill them all
2. Exclusion (Container Ids): Provided an exclusion list of container ids of an 
application, kill all except the ones in the list
3. Inclusion (Hosts): Provided an inclusion list of hosts and an application, 
kill all containers of this application running in all these hosts
4. Exclusion (Hosts): Provided an exclusion list of hosts and an application, 
kill all containers of this application running in all hosts except on the ones 
in the list



was (Author: gsaha):
I am thinking out loud to see if the following can be achieved at the framework 
level -

1. Inclusion (Container Ids): Provided an inclusion list of container ids of an 
application, kill them all
2. Exclusion (Container Ids): Provided an exclusion list of container ids of an 
application, kill all except the ones in the list
3. Inclusion (Hosts): Provided an inclusion list of hosts and an application, 
kill all containers of this application running in all these hosts
4. Exclusion (Hosts): Provided an exclusion list of hosts and an application, 
kill all containers of this application running in all hosts except the ones in 
the list


> Allow services to release/kill specific containers
> --
>
> Key: YARN-5738
> URL: https://issues.apache.org/jira/browse/YARN-5738
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>
> There are occasions on which specific containers may not be required by a 
> service. Would be useful to have support to return these to YARN.
> Slider flex doesn't give this control.
> cc [~gsaha], [~vinodkv]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5738) Allow services to release/kill specific containers

2016-10-14 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576150#comment-15576150
 ] 

Gour Saha commented on YARN-5738:
-

I am thinking out loud to see if the following can be achieved at the framework 
level (a rough sketch follows the list) -

1. Inclusion (Container Ids): Provided an inclusion list of container ids of an 
application, kill them all
2. Exclusion (Container Ids): Provided an exclusion list of container ids of an 
application, kill all except the ones in the list
3. Inclusion (Hosts): Provided an inclusion list of hosts and an application, 
kill all containers of this application running in all these hosts
4. Exclusion (Hosts): Provided an exclusion list of hosts and an application, 
kill all containers of this application running in all hosts except the ones in 
the list
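
Purely as an illustration of what such a request could look like, here is a rough sketch; none of these types exist in YARN today, they are hypothetical:

{code:title=Hypothetical shape of a selective-release request (illustration only)}
import java.util.List;

/** Hypothetical request object; not an existing YARN API. */
public class ReleaseContainersRequestSketch {

  /** INCLUDE kills the listed targets; EXCLUDE kills everything except them. */
  public enum Mode { INCLUDE, EXCLUDE }

  private final Mode mode;
  private final List<String> containerIds; // select by container id
  private final List<String> hosts;        // or select by host

  public ReleaseContainersRequestSketch(Mode mode,
      List<String> containerIds, List<String> hosts) {
    this.mode = mode;
    this.containerIds = containerIds;
    this.hosts = hosts;
  }

  public Mode getMode() { return mode; }
  public List<String> getContainerIds() { return containerIds; }
  public List<String> getHosts() { return hosts; }
}
{code}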


> Allow services to release/kill specific containers
> --
>
> Key: YARN-5738
> URL: https://issues.apache.org/jira/browse/YARN-5738
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>
> There are occasions on which specific containers may not be required by a 
> service. Would be useful to have support to return these to YARN.
> Slider flex doesn't give this control.
> cc [~gsaha], [~vinodkv]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5701) Fix issues in yarn native services apps-of-apps

2016-10-14 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576040#comment-15576040
 ] 

Billie Rinaldi commented on YARN-5701:
--

Here are details on the bugs I am attempting to fix in this patch.

The changes in SliderClient are to use System.console() instead of a 
BufferedReader to prompt the user for passwords when the app requests passwords 
in the appConfig. When using the old slider python script that launched the 
SliderClient, System.console did not work, so we used BufferedReader and masked 
the passwords in the slider script. When running with the yarn script instead 
of slider, we need to use System.console() (as is done in CredentialShell) 
because the yarn script is not masking the passwords.

Another issue that came up when testing an assembly app that requests passwords 
is that the assembly is not inheriting the credentials request of the child / 
external apps. This is fixed in the InstanceBuilder.java changes.

In ProviderUtils.filterSiteOptions, the return map is being used as 
substitution tokens, so the keys should be in the format $\{key\}. This is 
fixing a problem where tokens are not being substituted properly.

In ProviderUtils.createConfigFile and ProviderUtils.localizeConfigFile, there 
is a concurrency problem in the creation of config files that occurs when 
multiple containers of the same type are being launched at once. This part of 
the patch fixes those.

The new methods in ProviderUtils, getHostNamesList and getIPsList, are used to 
populate substitution tokens COMPONENTNAME_HOSTNAME and COMPONENTNAME_IP. These 
are needed in addition to COMPONENTNAME_HOST token which provides the name of 
the host where a container is running. In the docker container case, the 
HOSTNAME substitution will be the docker hostname of the container (we may want 
to change this later to be the DNS hostname in the case when RegistryDNS is 
enabled). This patch also adds handling for the nested assembly case when 
role.prefix is populated.

DockerProviderService line 374 fixes a typo; it should have been using 
replaceAll instead of replace.

DockerProviderService.buildMonitorDetails and 
DockerProviderService.buildRoleHostDetails are needed to populate information 
on the AM UI page. These methods are copied from the original 
AgentProviderService.

The .keep file needs to be added to make the Slider AM web service work. It is 
hard to add this file. You have to run "git add -f 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core/src/main/resources/webapps/slideram/.keep"
 before doing the git commit.

Lastly, clusterName has been added as a parameter to a couple of methods where 
it was missing. CLUSTER_NAME was not being substituted properly in exports 
because the parameter was not being passed.
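
For reference, a minimal standalone sketch of the System.console() pattern described above; this is not the SliderClient code, and the behaviour when no console is attached is an assumption about what a caller might do:

{code:title=Prompting for a password with System.console() (sketch)}
import java.io.Console;

public class PasswordPromptSketch {
  public static char[] readPassword(String prompt) {
    Console console = System.console();
    if (console == null) {
      // No attached console (e.g. output is redirected); a real client would
      // fail or fall back to another credential source here.
      throw new IllegalStateException("No console available to prompt: " + prompt);
    }
    // readPassword disables echo, so the password is not shown on screen.
    return console.readPassword("%s: ", prompt);
  }

  public static void main(String[] args) {
    char[] secret = readPassword("Password for keystore");
    System.out.println("Read " + secret.length + " characters");
  }
}
{code}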

> Fix issues in yarn native services apps-of-apps
> ---
>
> Key: YARN-5701
> URL: https://issues.apache.org/jira/browse/YARN-5701
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Attachments: YARN-5701-yarn-native-services-001.patch, 
> YARN-5701-yarn-native-services.001.patch
>
>
> In testing out complex apps-of-apps with the docker provider, I found a few 
> issues primarily related to exports, as well as a couple of issues in config 
> generation and CredentialProvider handling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5733) Run PerNodeTimelineCollectorsAuxService in a system container

2016-10-14 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576019#comment-15576019
 ] 

Haibo Chen commented on YARN-5733:
--

YARN-5732 is mostly a duplicate of YARN-1593 with almost the same goal, and 
Junping said he is actively working on it, so I guess we will be tracking 
YARN-1593 instead.

> Run PerNodeTimelineCollectorsAuxService in a system container
> -
>
> Key: YARN-5733
> URL: https://issues.apache.org/jira/browse/YARN-5733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> This is mostly for tracking YARN-5732 (Run auxiliary services in system 
> containers) because, under the current implementation, TimelineCollectorManager 
> is implemented as an auxiliary service. We'd expect YARN-5732 to be transparent 
> to all auxiliary services; therefore, there is minimal work here other 
> than verification.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575994#comment-15575994
 ] 

Hadoop QA commented on YARN-5271:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 19s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 2s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833410/YARN-5271.01.patch |
| JIRA Issue | YARN-5271 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b96d6404961c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a9f663 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13398/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13398/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
> Attachments: YARN-5271.01.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the 

[jira] [Commented] (YARN-5736) YARN container executor config does not handle white space

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575901#comment-15575901
 ] 

Hadoop QA commented on YARN-5736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 23s {color} | 
{color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 51s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833407/YARN_5736.000.patch |
| JIRA Issue | YARN-5736 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b154d0cdeadf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a9f663 |
| Default Java | 1.8.0_101 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13397/artifact/patchprocess/diff-compile-cc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13397/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13397/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13397/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN_5736.000.patch
>
>
> The container-executor configuration reader does not handle white space or 
> malformed key/value pairs in the config file correctly or gracefully.
> As an example, take the following key/value line, which is part of the 

[jira] [Updated] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-5271:
--
Attachment: YARN-5271.01.patch

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
> Attachments: YARN-5271.01.patch, YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-5271:
--
Attachment: YARN-5271.branch-2.01.patch

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
> Attachments: YARN-5271.branch-2.01.patch
>
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5736) YARN container executor config does not handle white space

2016-10-14 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5736:
-
Attachment: YARN_5736.000.patch

I added a trim() function that is applied to both configuration keys and values. 
It removes all characters that satisfy isspace() from the beginning and the end 
of the strings. I also added the error name 'unknown' for the case where errno 
is not set to a valid error.

Tested manually.
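
The actual change is in container-executor's C code, but the trimming behaviour can be illustrated with a short standalone sketch (shown here in Java; Character.isWhitespace stands in for C's isspace()):

{code:title=Illustration of trimming key/value pairs (Java sketch of the C behaviour)}
public class ConfigTrimSketch {
  /** Trim leading/trailing whitespace, roughly matching C's isspace(). */
  static String trim(String s) {
    int start = 0, end = s.length();
    while (start < end && Character.isWhitespace(s.charAt(start))) { start++; }
    while (end > start && Character.isWhitespace(s.charAt(end - 1))) { end--; }
    return s.substring(start, end);
  }

  public static void main(String[] args) {
    String line = "yarn.nodemanager.linux-container-executor.group=yarn ";
    int eq = line.indexOf('=');
    String key = trim(line.substring(0, eq));
    String value = trim(line.substring(eq + 1));
    // Prints the value as "yarn" without the trailing space that broke the
    // group lookup described in the JIRA.
    System.out.println("[" + key + "] = [" + value + "]");
  }
}
{code}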

> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN_5736.000.patch
>
>
> The container-executor configuration reader does not handle white space or 
> malformed key/value pairs in the config file correctly or gracefully.
> As an example, take the following key/value line, which is part of the 
> configuration (note that << is used as a marker to show the extra trailing space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> This is a valid line, but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> It fails to find the yarn group because it actually tries to find the "yarn " 
> group, which fails. There is no trimming anywhere while processing the lines. 
> If a space were added before or after the = sign, a failure would also occur.
> A minor nit is that the failure is still logged as a Success.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575746#comment-15575746
 ] 

Rohith Sharma K S commented on YARN-5699:
-

Test failures are unrelated to the patch. YARN-5377 and YARN-5737 are the 
tracking JIRAs for the test failures.

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch, 0003-YARN-5699.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at the entity info (top-level) and some at the event info. 
> For containers, some of the event info should be published at the container 
> info level, for example: container exit status, container state, createdTime, 
> finished time. This is general container information required for the 
> container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575722#comment-15575722
 ] 

Hadoop QA commented on YARN-5699:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: 
The patch generated 0 new + 103 unchanged - 11 fixed = 103 total (was 114) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 59s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 45s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 36s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
|   | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833383/0003-YARN-5699.patch |
| JIRA Issue | YARN-5699 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 748573d8db0b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575664#comment-15575664
 ] 

Hadoop QA commented on YARN-5145:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 4m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 5m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b17 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833395/YARN-5145-YARN-3368.04.patch
 |
| JIRA Issue | YARN-5145 |
| Optional Tests |  asflicense  |
| uname | Linux dfd3f9d21dfa 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 164a3c2 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13395/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> newUIInOldRMWebServer.png
>
>
> The existing YARN UI configuration is under the Hadoop package's directory, 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/; we should move it to 
> $HADOOP_CONF_DIR like the other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-14 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5145:
--
Attachment: YARN-5145-YARN-3368.04.patch

Thanks [~leftnoteasy]. Updating patch after addressing the comments.

cc/[~Sreenath]

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> newUIInOldRMWebServer.png
>
>
> The existing YARN UI configuration is under the Hadoop package's directory, 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/; we should move it to 
> $HADOOP_CONF_DIR like the other configurations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-14 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575569#comment-15575569
 ] 

Rohith Sharma K S commented on YARN-5561:
-

During the weekly sync up, we discussed providing a new REST URL, i.e. 
*/ws/v2/applicationhistory*. One of the points discussed was: what is the 
advantage of having separate REST end points that return the *Report format? 
It would become another version of the existing data in a different format. ATSv2 
TimelineEntity is a super set of all of these, and a user might require information 
which is not in the *Report classes. So, if the required information is published at 
the first-class entity info level, as YARN-5699 is doing, then it is more than 
sufficient. 

The consensus is:
# We publish the important information of the yarn entities at the first-class entity 
info level so that users can make use of this information to query using 
infofilters.
# We will go ahead with adding the new REST end points (patch already attached, 
i.e. YARN-5561.03.patch), which are an enhancement of the existing getEntities. A 
minimal client sketch is shown below.
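
A minimal sketch of querying one of the proposed endpoints, assuming the timeline 
reader is reachable on localhost (the host, port and endpoint path are taken from 
the discussion above and are placeholders, not a tested configuration):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative HTTP GET against the proposed "list all apps" endpoint of the
// ATSv2 timeline reader. Host/port are placeholders for this sketch.
public class TimelineAppsQuery {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8188/ws/v2/timeline/apps");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);  // raw JSON list of TimelineEntity objects
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}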

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are very much required for the Web UI.
> New REST URL would be 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1557#comment-1557
 ] 

Hadoop QA commented on YARN-5552:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 108 unchanged - 8 fixed = 108 total (was 116) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 35s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 14s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833366/YARN-5552.005.patch |
| JIRA Issue | YARN-5552 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e64c4e288c1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbe663d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 

[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575536#comment-15575536
 ] 

Rohith Sharma K S commented on YARN-5699:
-

Updated the patch, fixing the review comment and the checkstyle issue. 

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch, 0003-YARN-5699.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at entity info (top-level) and some at event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime and 
> finished time. This is general information about a container that is required for 
> a container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575534#comment-15575534
 ] 

Rohith Sharma K S commented on YARN-5699:
-

Basically, TestApplicationHistoryManagerOnTimelineStore runs on ATSv1. It 
publishes the entities to the store and reads them back, verifying each field to 
validate that the published value is read. In ATSv2, no such framework 
exists as of now. I have raised a JIRA, YARN-5737, to validate each field that 
is published in the form of a report. 

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch, 0003-YARN-5699.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at entity info (top-level) and some at event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime and 
> finished time. This is general information about a container that is required for 
> a container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5699:

Attachment: 0003-YARN-5699.patch

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch, 0003-YARN-5699.patch
>
>
> Currently, all the container information is published in 2 places: some of 
> it at entity info (top-level) and some at event info. 
> For containers, some of the event info should be published at the container info 
> level, for example: container exit status, container state, createdTime and 
> finished time. This is general information about a container that is required for 
> a container report, so it is better to publish it in the top-level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-10-14 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: 0002-YARN-5611.patch

Attached the patch. A brief summary of the patch:
# ApplicationClientProtocol is extended with a new method, i.e. the 
updateApplicationTimeouts method.
# UpdateApplicationTimeoutsRequest takes an appId and a Map of timeout values to 
update. These timeout values are added to the 
existing timeouts. If the application was not being monitored, it starts being 
monitored from now on. 
# A new API updateApplicationTimeouts is added in the RMAppLifetimeMonitor class. 
This basically updates the timeout for the given ApplicationTimeoutTypes.
# A new field appIdToTimeoutTypeMapping is added in RMAppLifetimeMonitor. This 
tracks each appId-to-ApplicationTimeoutTypes mapping and can be used as a 
cache. A hypothetical usage sketch follows below.
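
A hypothetical usage sketch of the API described above; the record and protocol 
names follow the comment, while the map value type, units and constructors are 
assumptions about the in-progress patch rather than a final API:

{code}
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch only: minimal stand-ins for the records named in the
// comment above. The real patch defines these in the YARN API/protocol layer.
public class UpdateTimeoutExample {

  /** Stand-in for the ApplicationTimeoutType record (assumed to have LIFETIME). */
  enum ApplicationTimeoutType { LIFETIME }

  /** Minimal stand-in for UpdateApplicationTimeoutsRequest. */
  static class UpdateApplicationTimeoutsRequest {
    final String applicationId;
    final Map<ApplicationTimeoutType, Long> timeouts;
    UpdateApplicationTimeoutsRequest(String applicationId,
        Map<ApplicationTimeoutType, Long> timeouts) {
      this.applicationId = applicationId;
      this.timeouts = timeouts;
    }
  }

  public static void main(String[] args) {
    // Ask for another hour of lifetime (seconds here by assumption; the real
    // unit is whatever the patch defines).
    Map<ApplicationTimeoutType, Long> timeouts =
        new EnumMap<>(ApplicationTimeoutType.class);
    timeouts.put(ApplicationTimeoutType.LIFETIME, 3600L);
    UpdateApplicationTimeoutsRequest request = new UpdateApplicationTimeoutsRequest(
        "application_1476400000000_0001", timeouts);
    // In the real API this request would be sent via the new
    // ApplicationClientProtocol#updateApplicationTimeouts method; the RPC setup
    // is out of scope for this sketch.
    System.out.println("Would update timeouts for " + request.applicationId
        + " to " + request.timeouts);
  }
}
{code}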

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-10-14 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575292#comment-15575292
 ] 

Tao Jie commented on YARN-5552:
---

Updated the patch according to [~leftnoteasy]'s suggestion.
[~asuresh], [~kasha], [~leftnoteasy] would you mind reviewing the latest patch 
again?

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch
>
>
> Currently, yarn API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])
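
A generic illustration of the builder pattern being proposed; the {{ContainerAsk}} 
record and its fields are invented for the example and are not the actual 
YARN-5552 API:

{code}
// Illustrative only: "ContainerAsk" is a made-up record used to show the
// pattern; the real patch adds builders to existing YARN API records.
public final class ContainerAsk {
  private final int priority;
  private final long memoryMb;
  private final int vcores;

  private ContainerAsk(Builder b) {
    this.priority = b.priority;
    this.memoryMb = b.memoryMb;
    this.vcores = b.vcores;
  }

  public static Builder newBuilder() {
    return new Builder();
  }

  public static final class Builder {
    // Defaults mean existing call sites keep working when new fields appear.
    private int priority = 0;
    private long memoryMb = 1024;
    private int vcores = 1;

    public Builder priority(int priority) { this.priority = priority; return this; }
    public Builder memoryMb(long memoryMb) { this.memoryMb = memoryMb; return this; }
    public Builder vcores(int vcores) { this.vcores = vcores; return this; }

    public ContainerAsk build() { return new ContainerAsk(this); }
  }

  public static void main(String[] args) {
    // A new field can later be added to the Builder without touching this call site.
    ContainerAsk ask = ContainerAsk.newBuilder().memoryMb(2048).vcores(2).build();
    System.out.println(ask.memoryMb + " MB / " + ask.vcores + " vcores, priority "
        + ask.priority);
  }
}
{code}

Compared with a growing set of constructor or newInstance overloads, adding a field 
then only means adding one builder method with a sensible default.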



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5552) Add Builder methods for common yarn API records

2016-10-14 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5552:
--
Attachment: YARN-5552.005.patch

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch
>
>
> Currently, yarn API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575071#comment-15575071
 ] 

Hadoop QA commented on YARN-5552:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 116 unchanged - 8 fixed = 117 total (was 124) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 35s {color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 37s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 2s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.api.TestPBImplRecords |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
|   | hadoop.yarn.client.TestYarnApiClasses |
|   | hadoop.yarn.client.api.impl.TestAMRMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1286/YARN-5552.004.patch |
| JIRA Issue | YARN-5552 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (YARN-5552) Add Builder methods for common yarn API records

2016-10-14 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated YARN-5552:
--
Attachment: YARN-5552.004.patch

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch
>
>
> Currently, yarn API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574709#comment-15574709
 ] 

Weiwei Yang commented on YARN-5271:
---

I am assigning this one to myself now to investigate.

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5271) ATS client doesn't work with Jersey 2 on the classpath

2016-10-14 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang reassigned YARN-5271:
-

Assignee: Weiwei Yang

> ATS client doesn't work with Jersey 2 on the classpath
> --
>
> Key: YARN-5271
> URL: https://issues.apache.org/jira/browse/YARN-5271
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, timelineserver
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Weiwei Yang
>
> see SPARK-15343 : once Jersey 2 is on the CP, you can't instantiate a 
> timeline client, *even if the server is an ATS1.5 server and publishing is 
> via the FS*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1593) support out-of-proc AuxiliaryServices

2016-10-14 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574396#comment-15574396
 ] 

Junping Du commented on YARN-1593:
--

Thanks [~haibochen] for checking with me on this. Actually, [~vvasudev] and I are 
working on the design for this issue. We will put up a design doc for review in 
the next day or two. It would be great if you could help review and comment 
afterwards.

> support out-of-proc AuxiliaryServices
> -
>
> Key: YARN-1593
> URL: https://issues.apache.org/jira/browse/YARN-1593
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, rolling upgrade
>Reporter: Ming Ma
>Assignee: Junping Du
>
> AuxiliaryServices such as the ShuffleHandler currently run in the same process as 
> the NM. There are some benefits to hosting them in dedicated processes.
> 1. NM rolling restart. If we want to upgrade YARN, an NM restart will force the 
> ShuffleHandler to restart. If the ShuffleHandler runs as a separate process, 
> it can continue to run during the NM restart, and the NM can reconnect to 
> the running ShuffleHandler after the restart.
> 2. Resource management. It is possible that other types of AuxiliaryServices will 
> be implemented. AuxiliaryServices are considered YARN application specific 
> and could consume lots of resources. Running AuxiliaryServices in separate 
> processes allows easier resource management: the NM could potentially stop a 
> specific AuxiliaryService process from running if it consumes resources way 
> above its allocation.
> Here are some high level ideas:
> 1. NM provides a hosting process for each AuxiliaryService. The existing 
> AuxiliaryService API doesn't change.
> 2. The hosting process provides an RPC server for the AuxiliaryService proxy 
> object inside the NM to connect to.
> 3. When we rolling-restart the NM, the existing AuxiliaryService processes will 
> continue to run. The NM could reconnect to the running AuxiliaryService processes 
> upon restart.
> 4. Policy and resource management of AuxiliaryServices. So far we don't have 
> an immediate need for this. An AuxiliaryService could run inside a container, 
> its resource utilization could be taken into account by the RM, and the RM 
> could detect when a specific type of application overutilizes cluster resources.
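
A purely hypothetical sketch of idea 2 in the list above: a thin proxy inside the 
NM that forwards aux-service callbacks over some RPC channel to the hosting 
process (all of these types are invented for illustration; no such classes exist 
yet, and the eventual design doc may look quite different):

{code}
// Hypothetical illustration only: none of these types exist in YARN.
// The NM keeps a lightweight proxy whose callbacks are forwarded over RPC to
// a separately hosted aux-service process, which can outlive NM restarts.
public class OutOfProcAuxServiceSketch {

  /** Simplified subset of aux-service callbacks that would be forwarded. */
  interface AuxCallbacks {
    void initializeApplication(String applicationId);
    void stopApplication(String applicationId);
  }

  /** Stand-in for an RPC client connected to the hosting process. */
  interface RpcChannel {
    void send(String method, String argument);
  }

  /** Runs inside the NM; holds no state of its own, just forwards calls. */
  static class AuxServiceProxy implements AuxCallbacks {
    private final RpcChannel channel;
    AuxServiceProxy(RpcChannel channel) { this.channel = channel; }
    @Override public void initializeApplication(String applicationId) {
      channel.send("initializeApplication", applicationId);
    }
    @Override public void stopApplication(String applicationId) {
      channel.send("stopApplication", applicationId);
    }
  }

  public static void main(String[] args) {
    // A fake channel that just logs; a real one would talk to the hosting
    // process so the aux service keeps running across an NM rolling restart.
    AuxServiceProxy proxy = new AuxServiceProxy(
        (method, arg) -> System.out.println("RPC -> " + method + "(" + arg + ")"));
    proxy.initializeApplication("application_1476400000000_0002");
    proxy.stopApplication("application_1476400000000_0002");
  }
}
{code}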



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5728) TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout

2016-10-14 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5728:

Assignee: (was: Akira Ajisaka)

> TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization timeout
> 
>
> Key: YARN-5728
> URL: https://issues.apache.org/jira/browse/YARN-5728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
> Attachments: YARN-5728.01.patch
>
>
> TestMiniYARNClusterNodeUtilization.testUpdateNodeUtilization is failing due to 
> a timeout.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/192/testReport/junit/org.apache.hadoop.yarn.server/TestMiniYarnClusterNodeUtilization/testUpdateNodeUtilization/
> {noformat}
> java.lang.Exception: test timed out after 6 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
>   at com.sun.proxy.$Proxy85.nodeHeartbeat(Unknown Source)
>   at 
> org.apache.hadoop.yarn.server.TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization(TestMiniYarnClusterNodeUtilization.java:113)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5735) Make the service REST API use the app timeout feature YARN-4205

2016-10-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574371#comment-15574371
 ] 

Hadoop QA commented on YARN-5735:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-yarn-services-api in yarn-native-services failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core
 in yarn-native-services has 317 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api
 in yarn-native-services has 14 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-slider-core in yarn-native-services failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications: 
The patch generated 4 new + 443 unchanged - 1 fixed = 447 total (was 444) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 38s {color} 
| {color:red} hadoop-yarn-slider-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} The patch generated 10 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 7s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
slider.core.registry.docstore.TestPublishedConfigurationOutputter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833303/YARN-5735-yarn-native-services.002.patch
 |
| JIRA Issue | YARN-5735 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (YARN-5699) Retrospect yarn entity fields which are publishing in events info fields.

2016-10-14 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574347#comment-15574347
 ] 

Sangjin Lee commented on YARN-5699:
---

OK, it's more out of curiosity on my part, as what this test does is obviously 
different from what we are doing in this patch. I am fine with opening a 
different JIRA to look into this test.

> Retrospect yarn entity fields which are publishing in events info fields.
> -
>
> Key: YARN-5699
> URL: https://issues.apache.org/jira/browse/YARN-5699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5699.YARN-5355.patch, 0001-YARN-5699.patch, 
> 0002-YARN-5699.YARN-5355.patch, 0002-YARN-5699.patch
>
>
> Currently, all the container information are published at 2 places. Some of 
> them are at entity info(top-level) and some are  at event info. 
> For containers, some of the event info should be published at container info 
> level. For example : container exist status, container state, createdTime, 
> finished time. These are general information to container required for 
> container-report. So it is better to publish at top level info field. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org