[jira] [Updated] (YARN-4287) Capacity Scheduler: Rack Locality improvement

2015-11-10 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-4287:
-
Attachment: YARN-4287-minimal-v4.patch

Thanks [~leftnoteasy] for the comments. Made the following changes:
- Added comments about capping the off_switch delay to the number of nodes in 
the cluster
- Added a test case to verify we continue to allocate RACK_LOCAL if full_reset 
is false.
- Added a test case to verify we do reset schedulingOpportunities when 
full_reset is true (today's behavior)
- Added a test case to verify we cap the OFF_SWITCH delay to the number of 
nodes in the cluster

> Capacity Scheduler: Rack Locality improvement
> -
>
> Key: YARN-4287
> URL: https://issues.apache.org/jira/browse/YARN-4287
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Affects Versions: 2.7.1
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-4287-minimal-v2.patch, YARN-4287-minimal-v3.patch, 
> YARN-4287-minimal-v4.patch, YARN-4287-minimal.patch, YARN-4287-v2.patch, 
> YARN-4287-v3.patch, YARN-4287-v4.patch, YARN-4287.patch
>
>
> YARN-4189 does an excellent job describing the issues with the current delay 
> scheduling algorithms within the capacity scheduler. The design proposal also 
> seems like a good direction.
> This jira proposes a simple interim solution to the key issue we've been 
> experiencing on a regular basis:
>  - rackLocal assignments trickle out due to nodeLocalityDelay. This can have a 
> significant impact on things like CombineFileInputFormat, which targets very 
> specific nodes in its split calculations.
> I'm not sure when YARN-4189 will become reality, so I thought a simple interim 
> patch might make sense. The basic idea is simple: 
> 1) Separate delays for rackLocal and offSwitch (today there is only one)
> 2) When we're getting rackLocal assignments, subsequent rackLocal assignments 
> should not be delayed (see the sketch below)
> Patch will be uploaded shortly. No big deal if the consensus is to go 
> straight to YARN-4189. 
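
A minimal, self-contained sketch of the two ideas above. All names here are 
illustrative assumptions, not the actual CapacityScheduler code; the real 
change is in the attached patches:

{code:java}
// Sketch only (illustrative names, not actual CapacityScheduler code).
public class LocalityDelaySketch {

  /** Idea 2: once rack-local containers are flowing, don't delay the next ones. */
  static boolean canAssignRackLocal(long missedOpportunities, int nodeLocalityDelay,
      int clusterNodeCount, boolean gettingRackLocalAssignments) {
    if (gettingRackLocalAssignments) {
      return true;
    }
    // Cap the delay at the cluster size so small clusters are not starved.
    return missedOpportunities >= Math.min(nodeLocalityDelay, clusterNodeCount);
  }

  /** Idea 1: OFF_SWITCH gets its own, longer delay, also capped at cluster size. */
  static boolean canAssignOffSwitch(long missedOpportunities, int nodeLocalityDelay,
      int additionalRackDelay, int clusterNodeCount) {
    int offSwitchDelay = nodeLocalityDelay + additionalRackDelay;
    return missedOpportunities >= Math.min(offSwitchDelay, clusterNodeCount);
  }
}
{code}

The {{Math.min}} cap mirrors the change list in the comment above: on a small 
cluster, neither delay can exceed the number of nodes.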



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4234) New put APIs in TimelineClient for ats v1.5

2015-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999157#comment-14999157
 ] 

Hadoop QA commented on YARN-4234:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 17s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s 
{color} | {color:red} Patch generated 30 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn (total was 235, now 264). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 53s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-10 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats

2015-11-10 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998921#comment-14998921
 ] 

Sunil G commented on YARN-4308:
---

Hi [~djp]
Could you please help take a look at the above case? I can also handle this 
case from the user end: we could just reset the CPU resource usage to *n/a* or 
*0* before sending any resource usage to the client/UI. Still, for the first 
few heartbeats I feel we should return 0 rather than a negative value. In other 
cases, sending a negative value is fine, provided we do no further modification 
or calculation based on that negative value, for example the calculation of 
{{milliVcoresUsed}} in {{ContainersMonitorImpl}}.
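
A minimal sketch of the clamping suggested above, assuming the tracker's 
pre-warm-up sentinel is a negative float; the names and the shape of the 
dependent {{milliVcoresUsed}} calculation are illustrative, not the actual 
{{ContainersMonitorImpl}} code:

{code:java}
// Sketch only; names and the dependent calculation are illustrative.
public class CpuUsageSketch {

  /** The tracker returns a negative sentinel (e.g. -1.0) before it has
   *  enough samples; report 0 instead so consumers never see it. */
  static float sanitizeCpuUsagePercent(float trackerUsagePercent) {
    return trackerUsagePercent < 0 ? 0f : trackerUsagePercent;
  }

  /** Hypothetical shape of a dependent calculation (assumes
   *  nodeCpuPercentage > 0); with the clamp it can no longer go negative. */
  static int milliVcoresUsed(float cpuUsageTotalCoresPercentage,
      int nodeVCores, int nodeCpuPercentage) {
    return (int) (sanitizeCpuUsagePercent(cpuUsageTotalCoresPercentage)
        * 1000 * nodeVCores / nodeCpuPercentage);
  }
}
{code}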

> ContainersAggregated CPU resource utilization reports negative usage in first 
> few heartbeats
> 
>
> Key: YARN-4308
> URL: https://issues.apache.org/jira/browse/YARN-4308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4308.patch
>
>
> NodeManager reports the ContainersAggregated CPU resource utilization as a 
> negative value in the first few heartbeat cycles. I added a new debug print 
> and received the values below from heartbeats.
> {noformat}
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource
>  Utilization :  CpuTrackerUsagePercent : 198.94598
> {noformat}
> It is better to send 0 as CPU usage rather than a negative value in 
> heartbeats, even though this happens only in the first few heartbeats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4234) New put APIs in TimelineClient for ats v1.5

2015-11-10 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4234:

Attachment: YARN-4234.20151110.1.patch

Fix findbugs warning

> New put APIs in TimelineClient for ats v1.5
> ---
>
> Key: YARN-4234
> URL: https://issues.apache.org/jira/browse/YARN-4234
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4234.1.patch, YARN-4234.2.patch, 
> YARN-4234.20151109.patch, YARN-4234.20151110.1.patch, YARN-4234.3.patch
>
>
> In this ticket, we will add new put APIs in TimelineClient to let 
> clients/applications have the option to use ATS v1.5.
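
For context, a hedged sketch of how a client might call the put APIs once this 
lands. The group-id-aware overload is the shape this ticket proposes; treat 
its exact signature as an assumption until the patch is committed:

{code:java}
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch of client-side usage; the commented v1.5 call is the proposed shape.
public class AtsV15PutSketch {
  public static void main(String[] args) throws Exception {
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(new YarnConfiguration());
    client.start();
    try {
      TimelineEntity entity = new TimelineEntity();
      entity.setEntityType("SKETCH_ENTITY");
      entity.setEntityId("entity_1");
      client.putEntities(entity); // existing v1 path
      // Proposed v1.5 path (assumed signature): scope the write by app attempt
      // and entity-group id so it lands in the entity-group FS store:
      // client.putEntities(appAttemptId, groupId, entity);
    } finally {
      client.stop();
    }
  }
}
{code}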



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4241) Typo in yarn-default.xml

2015-11-10 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated YARN-4241:

Attachment: YARN-4241.003.patch

Thank you [~ozawa], resubmitting patch rebased on trunk code.

> Typo in yarn-default.xml
> 
>
> Key: YARN-4241
> URL: https://issues.apache.org/jira/browse/YARN-4241
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, yarn
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
>Priority: Trivial
>  Labels: newbie
> Attachments: YARN-4241.002.patch, YARN-4241.003.patch, 
> YARN-4241.patch, YARN-4241.patch.1
>
>
> Typo in the description section of yarn-default.xml, under the following properties:
> yarn.nodemanager.disk-health-checker.min-healthy-disks
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
> yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
> yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage
> The reference to yarn-nodemanager.local-dirs should be 
> yarn.nodemanager.local-dirs
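
For illustration, a sketch of one corrected entry (description wording 
abridged); the only substantive fix is the property name inside the 
description text:

{code:xml}
<!-- Sketch of one corrected entry (description wording abridged); the fix is
     the property name inside the description: yarn.nodemanager.local-dirs,
     not yarn-nodemanager.local-dirs. -->
<property>
  <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
  <value>0.25</value>
  <description>The minimum fraction of disks that must be healthy for the
    nodemanager to launch new containers. This applies to both
    yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs.</description>
</property>
{code}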



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1510) Make NMClient support change container resources

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999405#comment-14999405
 ] 

Hudson commented on YARN-1510:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8787 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8787/])
YARN-1510. Make NMClient support change container resources. (Meng Ding) 
(wangda: rev c99925d6dd0235f0d27536f0bebd129e435688fb)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java


> Make NMClient support change container resources
> 
>
> Key: YARN-1510
> URL: https://issues.apache.org/jira/browse/YARN-1510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Wangda Tan (No longer used)
>Assignee: MENG DING
> Attachments: YARN-1510-YARN-1197.1.patch, 
> YARN-1510-YARN-1197.2.patch, YARN-1510.3.patch, YARN-1510.4.patch, 
> YARN-1510.5.patch, YARN-1510.6.patch, YARN-1510.7.patch
>
>
> As described in YARN-1197 and YARN-1449, we need to add APIs in NMClient to support
> 1) sending requests to increase/decrease container resource limits
> 2) getting the succeeded/failed changed-container responses from the NM.
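
A hedged sketch of the AM-side flow this change enables; the method name 
follows this change's summary, but treat the exact signature as an assumption:

{code:java}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Sketch: 'container' must carry the RM-approved increase token from the
// resource-change flow described in YARN-1197 / YARN-1449.
public class ContainerResizeSketch {
  static void applyIncrease(YarnConfiguration conf, Container container)
      throws Exception {
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(conf);
    nmClient.start();
    try {
      // Ask the NM to enforce the new (larger) resource limit locally.
      nmClient.increaseContainerResource(container);
    } finally {
      nmClient.stop();
    }
  }
}
{code}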



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4287) Capacity Scheduler: Rack Locality improvement

2015-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999391#comment-14999391
 ] 

Hadoop QA commented on YARN-4287:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 4 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 (total was 198, now 202). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 39s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 56s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
37s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 145m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestClientRMService |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-10 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771599/YARN-4287-minimal-v4.patch
 |
| JIRA Issue | YARN-4287 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  

[jira] [Commented] (YARN-1509) Make AMRMClient support send increase container request and get increased/decreased containers

2015-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999410#comment-14999410
 ] 

Hadoop QA commented on YARN-1509:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-applications-distributedshell in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-yarn-project/hadoop-yarn (total was 124, now 120). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 3s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 35s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 2s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 37s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 129m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.8.0_60 Timed out junit tests | 
org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
| JDK v1.7.0_79 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.7.0_79 Timed out junit tests | 

[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999583#comment-14999583
 ] 

Xuan Gong commented on YARN-4183:
-

Sorry for the late response.

Here is my understanding:
* The current problem is that all jobs are forced to get an ATS DT even if 
they never intend to connect to ATS in the future.
* The value of yarn.timeline-service.enabled only means whether we have an ATS 
daemon or not. We should not use this configuration to decide whether a job 
needs to get the ATS DT. 
* I think part of the reason we marked the configuration 
"yarn.timeline-service.generic-application-history.enabled" as private instead 
of deleting it is compatibility.

[~jeagles] I agree with all of your comments. But I think the concerns from 
[~Naganarasimha], especially the compatibility part, make sense.

bq. If the main issue is the creation of delegation tokens i would rather 
prefer to have some option in the clients to determine whether to create 
ATS delegation tokens or not. Thoughts?

It might be better if we could give applications the option to choose 
whether they need an ATS DT or not. 

> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish events to the timeline store, 
> as it checks whether the timeline server and the system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, the timeline service flag has to be turned on, but that 
> forces every yarn application to get a delegation token.
> Instead of checking whether the timeline service is enabled, we should be 
> checking whether the application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999670#comment-14999670
 ] 

Vinod Kumar Vavilapalli commented on YARN-4183:
---

Getting the obvious out of the way: It's a mess.

h4. How things worked before this JIRA
 - RM uses {{generic-application-history.enabled}} to activate 
RMApplicationHistoryWriter (RM sending app events to the 
now-dead-but-kept-for-compat APPLICATION_HISTORY_STORE)
 - RM uses {{yarn.timeline-service.enabled}} + 
{{yarn.resourcemanager.system-metrics-publisher.enabled}} to write 
app/app-attempt/container events to Timeline Service
 - YarnClient uses {{generic-application-history.enabled}} to talk to the 
_history_ server irrespective of where the historic data gets stored
 - TimelineClient (embedded inside YarnClient) uses 
{{yarn.timeline-service.enabled}} to decide whether to get tokens and populate 
them during app submission.

h4. Quick general context
 - Nobody is expected to use RMApplicationHistoryWriter
 - {{yarn.timeline-service.generic-application-history.enabled}} is also 
supposed to be dead for all purposes, but today it is used beyond the RM's use 
of it to activate RMApplicationHistoryWriter
 - SystemMetricsPublisher only writes events to TimelineService (v1, v1.5)

Given the above, I cannot help but conclude that the existing configuration is 
not modeled correctly.

h4. The right thing to do
 - Make SystemMetricsPublisher only respect 
{{yarn.resourcemanager.system-metrics-publisher.enabled}}
 - Leave {{yarn.timeline-service.generic-application-history.enabled}} as a 
dead property only to activate RMApplicationHistoryWriter.
 - We can leave {{yarn.timeline-service.generic-application-history.enabled}} 
to also activate client -> RM for historical data or make RM always proxy these 
calls for the client
 - There should be an explicit {{yarn.timeline-service.version}} which tells 
YarnClient whether to get tokens or not: yes for a non-present version (the 
default), v1 and v2, but no for v1.5.
 - We should also use the same property in the new API calls proposed for V1.5 
YARN-4233 / V2 YARN-2928, lest users think they can call any API independent 
of what is supported on the server side. The version field has semantics on 
both the client *and* the server side at the same time: it picks a solution 
end-to-end.

h4. Immediate step
All of this needs more work, so unless I hear strongly otherwise I am going to 
revert this patch in the interest of 2.7.2's progress.
 
/cc [~hitesh] [~gtCarrera9] [~xgong] [~sjlee0]
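
A minimal sketch of the version-gating idea above. The property name follows 
the proposal; the value set, the default, and the method shape are assumptions 
for illustration:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch of the proposed version gate (names and values are assumptions).
public class TimelineVersionGateSketch {
  static final String TIMELINE_SERVICE_VERSION = "yarn.timeline-service.version";

  /** Should YarnClient fetch a timeline DT at app submission? */
  static boolean shouldFetchTimelineToken(Configuration conf) {
    // Per the proposal: yes for a non-present version (default), v1 and v2,
    // but no for v1.5, which writes through the filesystem.
    String version = conf.getTrimmed(TIMELINE_SERVICE_VERSION, "1.0");
    return !"1.5".equals(version);
  }
}
{code}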

> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish events to the timeline store, 
> as it checks whether the timeline server and the system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, the timeline service flag has to be turned on, but that 
> forces every yarn application to get a delegation token.
> Instead of checking whether the timeline service is enabled, we should be 
> checking whether the application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-4183:
---

I was asked to stall 2.7.2 for this JIRA. Reopening it while we discuss this 
more.

> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish events to the timeline store, 
> as it checks whether the timeline server and the system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, the timeline service flag has to be turned on, but that 
> forces every yarn application to get a delegation token.
> Instead of checking whether the timeline service is enabled, we should be 
> checking whether the application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999562#comment-14999562
 ] 

Naganarasimha G R commented on YARN-4183:
-

Oops, missed this comment [~sjlee0].

The documentation gives a fair idea, and it matches my understanding:
*yarn.timeline-service.enabled*: Indicates to clients whether the timeline 
service is enabled or not. If enabled, clients will put entities and events to 
the timeline server.

*yarn.timeline-service.generic-application-history.enabled*: Indicates to 
clients whether to query generic application data from the timeline 
history-service or not. If not enabled, application data is queried only from 
the Resource Manager. Defaults to false. (This is currently not in the 
documentation, but it is present in TimelineServer.md.)

*yarn.resourcemanager.system-metrics-publisher.enabled*: Controls whether the 
RM publishes yarn system metrics to the timeline server. (This requires 
yarn.timeline-service.enabled to be enabled, which requires a doc update.)

The AHS on the timeline store is started if 
"YarnConfiguration.APPLICATION_HISTORY_STORE" is not configured.


> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish events to the timeline store, 
> as it checks whether the timeline server and the system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, the timeline service flag has to be turned on, but that 
> forces every yarn application to get a delegation token.
> Instead of checking whether the timeline service is enabled, we should be 
> checking whether the application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4343) Need to support Application History Server on ATSV2

2015-11-10 Thread Naganarasimha G R (JIRA)
Naganarasimha G R created YARN-4343:
---

 Summary: Need to support Application History Server on ATSV2
 Key: YARN-4343
 URL: https://issues.apache.org/jira/browse/YARN-4343
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R


AHS is used by the CLI and the web proxy (REST): if application-related 
information is not found in the RM, they try to fetch it from the AHS and 
show it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999614#comment-14999614
 ] 

Naganarasimha G R commented on YARN-4183:
-

Hi [~xgong],
bq. The value for yarn.timeline-service.enabled only means whether we have ATS 
daemon or not. We should not use this configuration to decide whether the job 
needs to get the ATS DT.
I just went through all the references to the {{yarn.timeline-service.enabled}} 
configuration, and what I could understand is that it is not used to indicate 
that the ATS daemon is started; rather, it indicates whether the client wants 
to use the ATS daemon. This matches the description in the document: *"Indicate 
to clients whether timeline service is enabled or not. If enabled, clients will 
put entities and events to the timeline server."*
Also, if the *timelineserver* daemon is started, it starts the timeline store 
directly without checking the *"yarn.timeline-service.enabled"* configuration.
So this configuration effectively means: enable it if the client wants to put 
timeline entities, otherwise disable it.

> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish events to the timeline store, 
> as it checks whether the timeline server and the system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, the timeline service flag has to be turned on, but that 
> forces every yarn application to get a delegation token.
> Instead of checking whether the timeline service is enabled, we should be 
> checking whether the application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1510) Make NMClient support change container resources

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999629#comment-14999629
 ] 

Hudson commented on YARN-1510:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1388 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1388/])
YARN-1510. Make NMClient support change container resources. (Meng Ding) 
(wangda: rev c99925d6dd0235f0d27536f0bebd129e435688fb)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java


> Make NMClient support change container resources
> 
>
> Key: YARN-1510
> URL: https://issues.apache.org/jira/browse/YARN-1510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Wangda Tan (No longer used)
>Assignee: MENG DING
> Fix For: 2.8.0
>
> Attachments: YARN-1510-YARN-1197.1.patch, 
> YARN-1510-YARN-1197.2.patch, YARN-1510.3.patch, YARN-1510.4.patch, 
> YARN-1510.5.patch, YARN-1510.6.patch, YARN-1510.7.patch
>
>
> As described in YARN-1197 and YARN-1449, we need to add APIs in NMClient to support
> 1) sending requests to increase/decrease container resource limits
> 2) getting the succeeded/failed changed-container responses from the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999660#comment-14999660
 ] 

Sangjin Lee commented on YARN-4053:
---

To make progress with this ticket: if you're in line with what Vrushali said 
above, we can focus on implementing correct long support here. We don't have 
to worry about the other dimensions (whether to aggregate, or single-value vs. 
time series) in this ticket.

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch
>
>
> Currently the HBase implementation uses GenericObjectMapper to convert and 
> store values in the backend HBase storage. This converts everything into a 
> string representation (an ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how we are going to encode and decode metric values and 
> store them in HBase.
>  
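
A minimal sketch of the direction being discussed: store metric values as 
fixed-width numeric bytes via HBase's {{Bytes}} utility instead of the string 
form produced by GenericObjectMapper. This illustrates the encoding choice 
only, not the committed converter:

{code:java}
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of the encoding direction only: fixed-width numeric bytes, so
// values compare and aggregate natively in HBase.
public class MetricValueEncodingSketch {
  static byte[] encode(long metricValue) {
    return Bytes.toBytes(metricValue); // 8-byte big-endian long
  }

  static long decode(byte[] stored) {
    return Bytes.toLong(stored);
  }
}
{code}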



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1510) Make NMClient support change container resources

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999567#comment-14999567
 ] 

Hudson commented on YARN-1510:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #664 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/664/])
YARN-1510. Make NMClient support change container resources. (Meng Ding) 
(wangda: rev c99925d6dd0235f0d27536f0bebd129e435688fb)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java


> Make NMClient support change container resources
> 
>
> Key: YARN-1510
> URL: https://issues.apache.org/jira/browse/YARN-1510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Wangda Tan (No longer used)
>Assignee: MENG DING
> Fix For: 2.8.0
>
> Attachments: YARN-1510-YARN-1197.1.patch, 
> YARN-1510-YARN-1197.2.patch, YARN-1510.3.patch, YARN-1510.4.patch, 
> YARN-1510.5.patch, YARN-1510.6.patch, YARN-1510.7.patch
>
>
> As described in YARN-1197 and YARN-1449, we need to add APIs in NMClient to support
> 1) sending requests to increase/decrease container resource limits
> 2) getting the succeeded/failed changed-container responses from the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4184) Remove update reservation state api from state store as its not used by ReservationSystem

2015-11-10 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999634#comment-14999634
 ] 

Sean Po commented on YARN-4184:
---

The TestAMAuthorization test failure is not caused by my changes.

I am also fairly certain that the TestClientRMTokens failure is not caused by 
my changes.

The failure in TestFSRMStateStore had the following error message: "Timed out 
waiting for Mini HDFS Cluster to start". This does not seem related to my 
changes. Furthermore, I ran all tests extending RMStateStoreTestBase locally 
with the patch applied, and all of them pass. 

> Remove update reservation state api from state store as its not used by 
> ReservationSystem
> -
>
> Key: YARN-4184
> URL: https://issues.apache.org/jira/browse/YARN-4184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Anubhav Dhoot
>Assignee: Sean Po
> Attachments: YARN-4184.v1.patch
>
>
> ReservationSystem uses remove/add for updates, and thus the update API in 
> the state store is not needed
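
A minimal sketch of why the update API is redundant; the interface and method 
names are illustrative, not the actual RMStateStore code:

{code:java}
// Illustrative interface only: with just the store and remove primitives,
// an update is expressible as remove-then-add, which is exactly what the
// ReservationSystem does.
interface ReservationStateStoreSketch {
  void storeReservation(String planName, String reservationId, byte[] state);
  void removeReservation(String planName, String reservationId);

  default void updateReservation(String planName, String reservationId,
      byte[] newState) {
    removeReservation(planName, reservationId);
    storeReservation(planName, reservationId, newState);
  }
}
{code}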



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4330) MiniYARNCluster prints multiple Failed to instantiate default resource calculator warning messages

2015-11-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998326#comment-14998326
 ] 

Varun Saxena commented on YARN-4330:


TestContainerManagerSecurity is failing even without this patch.
The checkstyle issues are line-length related (lines longer than 80 characters).


For the findbugs warnings, there is YARN-4298

> MiniYARNCluster prints multiple  Failed to instantiate default resource 
> calculator warning messages
> ---
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
> Attachments: YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces, which 
> will only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking this as a blocker, because it's a fairly major 
> regression for anyone testing with the minicluster.
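
A minimal sketch of the kind of guard being asked for; class and method names 
are illustrative, not the actual ResourceCalculatorPlugin code:

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative guard only: when no calculator exists for the OS, return null
// and log quietly at DEBUG instead of printing a stack trace per NodeManager
// in the minicluster.
public class ResourceCalculatorGuardSketch {
  private static final Log LOG =
      LogFactory.getLog(ResourceCalculatorGuardSketch.class);

  /** Stand-in for the real OS-specific plugin lookup. */
  static Object createPlugin() {
    throw new UnsupportedOperationException("Could not determine OS");
  }

  static Object getPluginOrNull() {
    try {
      return createPlugin();
    } catch (UnsupportedOperationException e) {
      LOG.debug("No resource calculator for this OS; monitoring disabled", e);
      return null;
    }
  }
}
{code}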



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4342) TestContainerManagerSecurity failing on trunk

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4342:
---
Description: 
{noformat}
Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 277.949 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 140.735 sec  <<< ERROR!
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)

testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 136.317 sec  <<< ERROR!
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
{noformat}

Also refer to 
https://builds.apache.org/job/PreCommit-YARN-Build/9635/testReport/

> TestContainerManagerSecurity failing on trunk
> -
>
> Key: YARN-4342
> URL: https://issues.apache.org/jira/browse/YARN-4342
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 277.949 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 140.735 sec  <<< ERROR!
> java.lang.Exception: test timed out after 12 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
> at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 136.317 sec  <<< ERROR!
> java.lang.Exception: test timed out after 12 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
> at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
> at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
> {noformat}
> Also refer to 
> https://builds.apache.org/job/PreCommit-YARN-Build/9635/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4342) TestContainerManagerSecurity failing on trunk

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4342:
---
Description: 
{noformat}
Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 277.949 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 140.735 sec  <<< ERROR!
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)

testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 136.317 sec  <<< ERROR!
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
{noformat}


  was:
{noformat}
Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 277.949 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
testContainerManager[0](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 140.735 sec  <<< ERROR!
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)

testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 136.317 sec  <<< ERROR!
java.lang.Exception: test timed out after 12 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:158)
at com.sun.proxy.$Proxy88.startContainers(Unknown Source)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:556)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:477)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:249)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:157)
{noformat}

Also refer to 
https://builds.apache.org/job/PreCommit-YARN-Build/9635/testReport/


> TestContainerManagerSecurity failing on trunk
> -
>
> Key: YARN-4342
> URL: https://issues.apache.org/jira/browse/YARN-4342
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>
> {noformat}
> Running org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 277.949 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> 

[jira] [Created] (YARN-4342) TestContainerManagerSecurity failing on trunk

2015-11-10 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-4342:
--

 Summary: TestContainerManagerSecurity failing on trunk
 Key: YARN-4342
 URL: https://issues.apache.org/jira/browse/YARN-4342
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999780#comment-14999780
 ] 

Varun Saxena commented on YARN-4053:


Ok

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch
>
>
> Currently the HBase implementation uses GenericObjectMapper to convert and 
> store values in the backend HBase storage. This converts everything into a 
> string representation (an ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how we are going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999817#comment-14999817
 ] 

Li Lu commented on YARN-4183:
-

bq. Already Subtask YARN-3623 is raised for this, hope I can work on this?
Let's fix the config problem raised in YARN-3623 here, since it's no longer an 
ATS v2 problem. Please feel free to open a new JIRA for the API fix in 
YARN-2928. If you happen to have cycles, feel free to assign it to yourself. 
[~sjlee0] any suggestions here? Thanks! 

> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish the events to the timeline 
> store as it checks if the timeline server and system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, if the timeline service flag is turned on, it will force 
> every yarn application to get a delegation token.
> Instead of checking if timeline service is enabled, we should be checking if 
> application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4053:
---
Attachment: (was: YARN-4053-YARN-2928.03.patch)

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4053:
---
Attachment: YARN-4053-YARN-2928.03.patch

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch, YARN-4053-YARN-2928.03.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4234) New put APIs in TimelineClient for ats v1.5

2015-11-10 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999673#comment-14999673
 ] 

Vinod Kumar Vavilapalli commented on YARN-4234:
---

Please also look at 
https://issues.apache.org/jira/browse/YARN-4183?focusedCommentId=14999670=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14999670
 for more thoughts on the API.

> New put APIs in TimelineClient for ats v1.5
> ---
>
> Key: YARN-4234
> URL: https://issues.apache.org/jira/browse/YARN-4234
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4234.1.patch, YARN-4234.2.patch, 
> YARN-4234.20151109.patch, YARN-4234.20151110.1.patch, YARN-4234.3.patch
>
>
> In this ticket, we will add new put APIs in timelineClient to let 
> clients/applications have the option to use ATS v1.5



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999822#comment-14999822
 ] 

Vrushali C commented on YARN-4053:
--

bq. Vrushali, thanks for your comments. I would like to work on this. Let me 
take a stab on this one. Will have the bandwidth. I hope its fine. You can help 
me with the reviews.

Sounds good, let me go through the discussion points you have mentioned and get 
back on this.

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999841#comment-14999841
 ] 

Varun Saxena commented on YARN-4053:


Moreover, for the aggregate-or-not flag, the proposal is to not have it in a 
column qualifier, so there is nothing to do here for that. YARN-3816 will have 
to remove the code corresponding to it.

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch, YARN-4053-YARN-2928.03.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1510) Make NMClient support change container resources

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999747#comment-14999747
 ] 

Hudson commented on YARN-1510:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #653 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/653/])
YARN-1510. Make NMClient support change container resources. (Meng Ding via 
wangda) (wangda: rev c99925d6dd0235f0d27536f0bebd129e435688fb)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* hadoop-yarn-project/CHANGES.txt


> Make NMClient support change container resources
> 
>
> Key: YARN-1510
> URL: https://issues.apache.org/jira/browse/YARN-1510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Wangda Tan (No longer used)
>Assignee: MENG DING
> Fix For: 2.8.0
>
> Attachments: YARN-1510-YARN-1197.1.patch, 
> YARN-1510-YARN-1197.2.patch, YARN-1510.3.patch, YARN-1510.4.patch, 
> YARN-1510.5.patch, YARN-1510.6.patch, YARN-1510.7.patch
>
>
> As described in YARN-1197, YARN-1449, we need add API in NMClient to support
> 1) sending request of increase/decrease container resource limits
> 2) get succeeded/failed changed containers response from NM.
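
As a usage illustration only, a minimal sketch of how an AM might call the resize API added here; the container/token wiring is elided and assumed (the updated {{Container}} with the increased {{Resource}} would come from the RM via the AMRMClient):

{code:java}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ContainerResizeSketch {
  // Assumes 'container' already carries the increased Resource and an
  // updated container token obtained from the RM.
  public static void increase(Container container) throws Exception {
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(new YarnConfiguration());
    nmClient.start();
    try {
      // Ask the NM to enforce the new resource limits on the container.
      nmClient.increaseContainerResource(container);
    } finally {
      nmClient.stop();
    }
  }
}
{code}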



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4053:
---
Attachment: (was: YARN-4053-YARN-2928.03.patch)

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999837#comment-14999837
 ] 

Varun Saxena commented on YARN-4053:


Attached a new patch addressing the points above. 
Added a ValueConverter interface and a ValueConverterImpl enum which contains 
GENERIC and LONG implementations.
In FlowScanner, we will have to iterate over all the available column prefixes 
and columns to get hold of the right converter.

Haven't addressed the TIME_SERIES-related point as of now.
Can have it in the next patch once a consensus is reached on the 
implementation.

Functionally speaking, compared to the last patch, I am now storing min start 
and max end time as longs as well.
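
For reviewers, a rough sketch of the shape described above; the names follow the comment, but the signatures are my guess rather than the exact code in the patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.yarn.server.timeline.GenericObjectMapper;

// Converter abstraction: each column (prefix) picks the implementation
// that matches how its values should be encoded in HBase.
interface ValueConverter {
  byte[] encodeValue(Object value) throws IOException;
  Object decodeValue(byte[] bytes) throws IOException;
}

enum ValueConverterImpl implements ValueConverter {
  // Fallback: generic object encoding (today's behavior).
  GENERIC {
    @Override public byte[] encodeValue(Object value) throws IOException {
      return GenericObjectMapper.write(value);
    }
    @Override public Object decodeValue(byte[] bytes) throws IOException {
      return GenericObjectMapper.read(bytes);
    }
  },
  // Numeric metrics: fixed-width 8-byte longs, directly summable in scanners.
  LONG {
    @Override public byte[] encodeValue(Object value) throws IOException {
      if (!(value instanceof Number)) {
        throw new IOException("LONG converter expects a numeric value");
      }
      return Bytes.toBytes(((Number) value).longValue());
    }
    @Override public Object decodeValue(byte[] bytes) throws IOException {
      return Bytes.toLong(bytes);
    }
  }
}
{code}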

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch, YARN-4053-YARN-2928.03.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4241) Typo in yarn-default.xml

2015-11-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4241:

Target Version/s: 2.8.0, 2.6.3, 2.7.3
Priority: Major  (was: Trivial)
Hadoop Flags: Reviewed

I'd like to raise the priority to major because the configuration does not work 
if users copy-and-paste the wrong parameter. Checking this in.
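
To illustrate why the typo bites (a hypothetical snippet, not part of the patch): a value set under the misspelled key is just an unrelated property, so the NM keeps using the default for the real key.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TypoDemo {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();

    // User copy-pastes the misspelled name from the old description:
    conf.set("yarn-nodemanager.local-dirs", "/data/yarn/local");

    // The NM only ever reads the correctly spelled key, so the user's
    // setting is silently ignored and the default is used instead.
    System.out.println(conf.get(YarnConfiguration.NM_LOCAL_DIRS));
  }
}
{code}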

> Typo in yarn-default.xml
> 
>
> Key: YARN-4241
> URL: https://issues.apache.org/jira/browse/YARN-4241
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, yarn
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
>  Labels: newbie
> Attachments: YARN-4241.002.patch, YARN-4241.003.patch, 
> YARN-4241.patch, YARN-4241.patch.1
>
>
> Typo in description section of yarn-default.xml, under the properties:
> yarn.nodemanager.disk-health-checker.min-healthy-disks
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
> yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
> yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage
> The reference to yarn-nodemanager.local-dirs should be 
> yarn.nodemanager.local-dirs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4241) Fix typo of property name in yarn-default.xml

2015-11-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4241:

Affects Version/s: 2.6.0
  Component/s: (was: yarn)
  Summary: Fix typo of property name in yarn-default.xml  (was: 
Typo in yarn-default.xml)

> Fix typo of property name in yarn-default.xml
> -
>
> Key: YARN-4241
> URL: https://issues.apache.org/jira/browse/YARN-4241
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
>  Labels: newbie
> Attachments: YARN-4241.002.patch, YARN-4241.003.patch, 
> YARN-4241.patch, YARN-4241.patch.1
>
>
> Typo in description section of yarn-default.xml, under the properties:
> yarn.nodemanager.disk-health-checker.min-healthy-disks
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
> yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
> yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage
> The reference to yarn-nodemanager.local-dirs should be 
> yarn.nodemanager.local-dirs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4241) Fix typo of property name in yarn-default.xml

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1554#comment-1554
 ] 

Hudson commented on YARN-4241:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8790 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8790/])
YARN-4241. Fix typo of property name in yarn-default.xml. Contributed by 
Anthony Rojas. (aajisaka: rev 23d0db551cc63def9acbab2473e58fb1c52f85e0)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* hadoop-yarn-project/CHANGES.txt


> Fix typo of property name in yarn-default.xml
> -
>
> Key: YARN-4241
> URL: https://issues.apache.org/jira/browse/YARN-4241
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
>  Labels: newbie
> Fix For: 2.8.0, 2.6.3, 2.7.3
>
> Attachments: YARN-4241.002.patch, YARN-4241.003.patch, 
> YARN-4241.patch, YARN-4241.patch.1
>
>
> Typo in description section of yarn-default.xml, under the properties:
> yarn.nodemanager.disk-health-checker.min-healthy-disks
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
> yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
> yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage
> The reference to yarn-nodemanager.local-dirs should be 
> yarn.nodemanager.local-dirs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4241) Fix typo of property name in yarn-default.xml

2015-11-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4241:

Attachment: YARN-4241.branch-2.7.patch

Attaching the patch used for branch-2.7 and branch-2.6. 
{{yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage}}
 was introduced by YARN-4176 and the property does not exist in branch-2.7 or 
branch-2.6, so I rebased the patch.



> Fix typo of property name in yarn-default.xml
> -
>
> Key: YARN-4241
> URL: https://issues.apache.org/jira/browse/YARN-4241
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Anthony Rojas
>Assignee: Anthony Rojas
>  Labels: newbie
> Fix For: 2.8.0, 2.6.3, 2.7.3
>
> Attachments: YARN-4241.002.patch, YARN-4241.003.patch, 
> YARN-4241.branch-2.7.patch, YARN-4241.patch, YARN-4241.patch.1
>
>
> Typo in description section of yarn-default.xml, under the properties:
> yarn.nodemanager.disk-health-checker.min-healthy-disks
> yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
> yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb
> yarn.nodemanager.disk-health-checker.disk-utilization-watermark-low-per-disk-percentage
> The reference to yarn-nodemanager.local-dirs should be 
> yarn.nodemanager.local-dirs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999697#comment-14999697
 ] 

Li Lu commented on YARN-4183:
-

Thanks [~vinodkv]! I agree we should find some better ways to organize 
\*.enabled in ATS (we've got two different versions in our code base and will 
add two more). For end users, we need to provide mechanisms to distinguish at 
least 3 versions of ATS API calls: v1, v1.5, and v2 in the future. 

bq. There should be an explicit yarn.timeline-service.version which tells 
YarnClient to get tokens or not - yes for non-present version (default), v1, v2 
but no for v1.5.
bq.  The version field has semantics on both client and server side at the same 
time - it's picking a solution end-to-end.
IIUC, the newly proposed {{yarn.timeline-service.version}} supports a sanity 
check mechanism: each API should check if the currently running ATS's version 
is equal to or higher than its required version. For example, when an ATS v1.5 
API is called but {{yarn.timeline-service.version}} is set to v1, it should 
simply throw an exception. We can also decide whether we need to get tokens in 
the YARN client by checking this version number. 

We can distinguish API versions through their names: keep the V1 APIs 
unchanged, but append V15 and V2 suffixes to the new APIs to clarify their API 
version. Inside each V15 and V2 API we can perform the sanity check. 

Let's make the {{yarn.timeline-service.version}} change here. We can modify 
V1.5 APIs in YARN-4233/YARN-4234 and V2 APIs as a subtask of YARN-2928. I can 
open a new subtask in YARN-2928 to fix this for V2. 
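
To make the proposed sanity check concrete, a sketch under stated assumptions: the config key comes from this discussion (it is not an existing YarnConfiguration constant yet), and the float encoding and helper name are mine.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.exceptions.YarnException;

public class TimelineVersionCheck {
  // Proposed in this discussion; not yet defined in YarnConfiguration.
  static final String TIMELINE_SERVICE_VERSION = "yarn.timeline-service.version";

  // Each versioned API (e.g. a hypothetical putEntitiesV15) would call this
  // first and fail fast when the cluster is configured for an older ATS.
  static void ensureVersionAtLeast(Configuration conf, float required)
      throws YarnException {
    float configured = conf.getFloat(TIMELINE_SERVICE_VERSION, 1.0f);
    if (configured < required) {
      throw new YarnException("Timeline service v" + required
          + " API called, but the cluster is configured for v" + configured);
    }
  }
}
{code}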


> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish the events to the timeline 
> store as it checks if the timeline server and system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, if the timeline service flag is turned on, it will force 
> every yarn application to get a delegation token.
> Instead of checking if timeline service is enabled, we should be checking if 
> application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1510) Make NMClient support change container resources

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999710#comment-14999710
 ] 

Hudson commented on YARN-1510:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2593 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2593/])
YARN-1510. Make NMClient support change container resources. (Meng Ding via 
wangda) (wangda: rev c99925d6dd0235f0d27536f0bebd129e435688fb)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java


> Make NMClient support change container resources
> 
>
> Key: YARN-1510
> URL: https://issues.apache.org/jira/browse/YARN-1510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Wangda Tan (No longer used)
>Assignee: MENG DING
> Fix For: 2.8.0
>
> Attachments: YARN-1510-YARN-1197.1.patch, 
> YARN-1510-YARN-1197.2.patch, YARN-1510.3.patch, YARN-1510.4.patch, 
> YARN-1510.5.patch, YARN-1510.6.patch, YARN-1510.7.patch
>
>
> As described in YARN-1197, YARN-1449, we need add API in NMClient to support
> 1) sending request of increase/decrease container resource limits
> 2) get succeeded/failed changed containers response from NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999811#comment-14999811
 ] 

Naganarasimha G R commented on YARN-4183:
-

Thanks [~vinodkv] and [~gtCarrera] for the clarifications on the approach. 
A few more clarifications I would like to have:
* How do we support backward compatibility for existing users who are already 
using *yarn.timeline-service.enabled* to get the tokens?
* Do we need to support the use cases which were mentioned by Jonathan?
*# Support soft-limit and hard-limit settings for the clients: don't throw any 
exception if the client is not able to get a timeline delegation token, and the 
job should perhaps still be able to progress.
*# If the creation of the timeline client based on this new 
{{yarn.timeline-service.version}} fails, do we need to stop the RM, or just 
keep on trying and, once it is able to make contact, start pushing the system 
metrics events?

[~gtCarrera],
bq.  I can open a new subtask in YARN-2928 to fix this for V2. 
Subtask YARN-3623 is already raised for this; I hope I can work on it?


> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish the events to the timeline 
> store as it checks if the timeline server and system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, if the timeline service flag is turned on, it will force 
> every yarn application to get a delegation token.
> Instead of checking if timeline service is enabled, we should be checking if 
> application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4053:
---
Attachment: YARN-4053-YARN-2928.03.patch

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch, YARN-4053-YARN-2928.03.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4053) Change the way metric values are stored in HBase Storage

2015-11-10 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4053:
---
Attachment: YARN-4053-YARN-2928.03.patch

> Change the way metric values are stored in HBase Storage
> 
>
> Key: YARN-4053
> URL: https://issues.apache.org/jira/browse/YARN-4053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4053-YARN-2928.01.patch, 
> YARN-4053-YARN-2928.02.patch, YARN-4053-YARN-2928.03.patch
>
>
> Currently HBase implementation uses GenericObjectMapper to convert and store 
> values in backend HBase storage. This converts everything into a string 
> representation(ASCII/UTF-8 encoded byte array).
> While this is fine in most cases, it does not quite serve our use case for 
> metrics. 
> So we need to decide how are we going to encode and decode metric values and 
> store them in HBase.
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4183) Enabling generic application history forces every job to get a timeline service delegation token

2015-11-10 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999479#comment-14999479
 ] 

Naganarasimha G R commented on YARN-4183:
-

Hi [~jeagles],
We are not using ATS at the scale you are, so you will be the better judge, 
but just my 2 cents as an observer:
bq. The issue regarding this jira is that putting yarn.timeline-service.enabled 
in the client xml (breaks #2 above) forces every job (both MR (not using 
timeline service) and Tez (using timeline service)) to have a runtime 
dependency on the timeline service.
Can this problem be solved by having different configurations for the Tez and 
MR clients? Or I would rather prefer introducing one more config here which 
takes care of bypassing the failure to get delegation tokens. And, as you 
mentioned, applications can make use of the same parameter for the soft limit.
bq. 3.YARN services that interact with the timeline server (Generic History 
Server), may have runtime dependency of the timeline service that does not 
disrupt job submission
Maybe we can also handle this in a different jira if required: we can start 
even if the timeline client is not up, and once it is up, the System Metrics 
Publisher can start accepting the timeline events. Thoughts?
The purpose of "yarn.timeline-service.generic-application-history.enabled" is 
different as per the documentation, so instead we can either remove the check 
for "TIMELINE_SERVICE_ENABLED" here and check for 
"yarn.timeline-service.generic-application-history.enabled" (a sketch follows 
below), or create a new configuration in the client to avoid creation of 
tokens when not required. Thoughts?
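
A minimal sketch of that alternative check, assuming the existing YarnConfiguration flags; the gating method itself is illustrative, not the actual SystemMetricsPublisher code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PublisherGateSketch {
  static boolean shouldPublishSystemMetrics(Configuration conf) {
    // Today the publisher is effectively gated on the timeline service flag,
    // e.g. conf.getBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, false).
    // The suggestion above is to gate on the generic history flag instead:
    return conf.getBoolean(
        YarnConfiguration.APPLICATION_HISTORY_ENABLED, false);
  }
}
{code}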

> Enabling generic application history forces every job to get a timeline 
> service delegation token
> 
>
> Key: YARN-4183
> URL: https://issues.apache.org/jira/browse/YARN-4183
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Fix For: 3.0.0, 2.8.0, 2.7.2
>
> Attachments: YARN-4183.1.patch
>
>
> When enabling just the Generic History Server and not the timeline server, 
> the system metrics publisher will not publish the events to the timeline 
> store as it checks if the timeline server and system metrics publisher are 
> enabled before creating a timeline client.
> To make it work, if the timeline service flag is turned on, it will force 
> every yarn application to get a delegation token.
> Instead of checking if timeline service is enabled, we should be checking if 
> application history server is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2047) RM should honor NM heartbeat expiry after RM restart

2015-11-10 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14998572#comment-14998572
 ] 

Jun Gong commented on YARN-2047:


I am not sure why the AM cannot be trusted, but the information about running 
containers could be regarded as a reference, and it is only used for the 
specific purpose described above.

> RM should honor NM heartbeat expiry after RM restart
> 
>
> Key: YARN-2047
> URL: https://issues.apache.org/jira/browse/YARN-2047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Bikas Saha
>
> After the RM restarts, it forgets about existing NM's (and their potentially 
> decommissioned status too). After restart, the RM cannot maintain the 
> contract to the AM's that a lost NM's containers will be marked finished 
> within the expiry time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1510) Make NMClient support change container resources

2015-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14999860#comment-14999860
 ] 

Hudson commented on YARN-1510:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2532 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2532/])
YARN-1510. Make NMClient support change container resources. (Meng Ding via 
wangda) (wangda: rev c99925d6dd0235f0d27536f0bebd129e435688fb)
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* hadoop-yarn-project/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java


> Make NMClient support change container resources
> 
>
> Key: YARN-1510
> URL: https://issues.apache.org/jira/browse/YARN-1510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Wangda Tan (No longer used)
>Assignee: MENG DING
> Fix For: 2.8.0
>
> Attachments: YARN-1510-YARN-1197.1.patch, 
> YARN-1510-YARN-1197.2.patch, YARN-1510.3.patch, YARN-1510.4.patch, 
> YARN-1510.5.patch, YARN-1510.6.patch, YARN-1510.7.patch
>
>
> As described in YARN-1197, YARN-1449, we need add API in NMClient to support
> 1) sending request of increase/decrease container resource limits
> 2) get succeeded/failed changed containers response from NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)