[jira] [Updated] (YARN-4328) Findbugs warning in resourcemanager in branch-2.7 and branch-2.6

2016-02-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated YARN-4328:

Attachment: YARN-4328.branch-2.7.01.patch

Rebased.

> Findbugs warning in resourcemanager in branch-2.7 and branch-2.6
> 
>
> Key: YARN-4328
> URL: https://issues.apache.org/jira/browse/YARN-4328
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1, 2.6.2
>Reporter: Varun Saxena
>Assignee: Akira AJISAKA
>Priority: Minor
> Attachments: YARN-4328.branch-2.6.00.patch, 
> YARN-4328.branch-2.7.00.patch, YARN-4328.branch-2.7.00.patch, 
> YARN-4328.branch-2.7.01.patch
>
>
> This issue exists in both branch-2.7 and branch-2.6
> {noformat}
>  classname='org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKSyncOperationCallback'>
>  category='PERFORMANCE' message='Should 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKSyncOperationCallback
>  be a _static_ inner class?' lineNumber='118'/>
> {noformat}
> Below issue exists only in branch-2.6
> {noformat}
>  classname='org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt'>
>  category='MT_CORRECTNESS' message='Inconsistent synchronization of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.queue;
>  locked 57% of time' lineNumber='261'/>
> {noformat}
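For context on the first warning above: FindBugs flags an inner class that never
uses its enclosing instance, because a non-static inner class still carries a
hidden reference to that instance. A generic illustration of the usual fix (not
the actual ZKRMStateStore code) is:

{code}
// Illustrative example only -- not the actual ZKRMStateStore code.
public class Outer {

  // A non-static inner class keeps an implicit reference to Outer.this even
  // when it never uses it; declaring it static removes that reference, which
  // is what the PERFORMANCE warning above suggests.
  static class Callback implements Runnable {
    @Override
    public void run() {
      // callback logic that does not need the enclosing instance
    }
  }
}
{code}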





[jira] [Commented] (YARN-4712) CPU Usage Metric is not captured properly in YARN-2928

2016-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156478#comment-15156478
 ] 

Sunil G commented on YARN-4712:
---

Hi [~Naganarasimha Garla],
For point 1, YARN-4308 was also trying to indicate a possible negative CPU usage.
Is it similar?

> CPU Usage Metric is not captured properly in YARN-2928
> --
>
> Key: YARN-4712
> URL: https://issues.apache.org/jira/browse/YARN-4712
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>
> There are two issues with CPU usage collection:
> * Many times the CPU usage obtained from {{pTree.getCpuUsagePercent()}} is 
> ResourceCalculatorProcessTree.UNAVAILABLE (i.e. -1), but ContainersMonitor 
> still does the calculation {{cpuUsageTotalCoresPercentage = 
> cpuUsagePercentPerCore / resourceCalculatorPlugin.getNumProcessors()}}, 
> because of which the UNAVAILABLE check in 
> {{NMTimelinePublisher.reportContainerResourceUsage}} is never hit. Proper 
> checks need to be added.
> * {{EntityColumnPrefix.METRIC}} always uses LongConverter, but 
> ContainersMonitor publishes decimal values for the CPU usage.





[jira] [Commented] (YARN-4465) SchedulerUtils#validateRequest for Label check should happen only when nodelabel enabled

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156466#comment-15156466
 ] 

Hadoop QA commented on YARN-4465:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 0 new + 20 unchanged - 1 fixed = 20 total (was 21) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 24s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 59s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788728/0006-YARN-4465.patch 

[jira] [Updated] (YARN-4465) SchedulerUtils#validateRequest for Label check should happen only when nodelabel enabled

2016-02-21 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4465:
---
Attachment: 0006-YARN-4465.patch

Uploading patch to trigger Jenkins again

> SchedulerUtils#validateRequest for Label check should happen only when 
> nodelabel enabled
> 
>
> Key: YARN-4465
> URL: https://issues.apache.org/jira/browse/YARN-4465
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4465.patch, 0002-YARN-4465.patch, 
> 0003-YARN-4465.patch, 0004-YARN-4465.patch, 0006-YARN-4465.patch
>
>
> Disable labels on the RM side: yarn.nodelabel.enable=false
> The capacity scheduler label configuration for the queue is as below:
> default label for queue b1 = 3, accessible labels = 1,3
> Submit an application to queue A.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
>  Invalid resource request, queue=b1 doesn't have permission to access all 
> labels in resource request. labelExpression of resource request=3. Queue 
> labels=1,3
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:304)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:216)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:401)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:283)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:602)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:247)
> {noformat}
> # Ignore the default label expression when node labels are disabled *or*
> # In NormalizeResourceRequest we can set the label expression to  
> when node labels are not enabled *or*
> # Improve the message
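A minimal sketch of what options 1 and 2 above could look like (hypothetical
method shape and flag, not the actual SchedulerUtils code): short-circuit the
label handling when node labels are disabled.

{code}
// Hypothetical sketch only -- names and signature are illustrative.
public static void normalizeAndValidateRequest(ResourceRequest resReq,
    QueueInfo queueInfo, boolean nodeLabelsEnabled)
    throws InvalidResourceRequestException {
  if (!nodeLabelsEnabled) {
    // With node labels disabled, drop any default/explicit label expression
    // instead of failing the queue-label access check.
    resReq.setNodeLabelExpression(null);
    return;
  }
  // ... existing validation of the label expression against the queue's
  // accessible node labels ...
}
{code}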





[jira] [Updated] (YARN-4465) SchedulerUtils#validateRequest for Label check should happen only when nodelabel enabled

2016-02-21 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4465:
---
Attachment: (was: 0006-YARN-4465.patch)

> SchedulerUtils#validateRequest for Label check should happen only when 
> nodelabel enabled
> 
>
> Key: YARN-4465
> URL: https://issues.apache.org/jira/browse/YARN-4465
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4465.patch, 0002-YARN-4465.patch, 
> 0003-YARN-4465.patch, 0004-YARN-4465.patch, 0006-YARN-4465.patch
>
>
> Disable labels on the RM side: yarn.nodelabel.enable=false
> The capacity scheduler label configuration for the queue is as below:
> default label for queue b1 = 3, accessible labels = 1,3
> Submit an application to queue A.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
>  Invalid resource request, queue=b1 doesn't have permission to access all 
> labels in resource request. labelExpression of resource request=3. Queue 
> labels=1,3
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:304)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:234)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:216)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:401)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:283)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:602)
> at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:247)
> {noformat}
> # Ignore the default label expression when node labels are disabled *or*
> # In NormalizeResourceRequest we can set the label expression to  
> when node labels are not enabled *or*
> # Improve the message





[jira] [Created] (YARN-4713) Warning by unchecked conversion in TestTimelineWebServices

2016-02-21 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created YARN-4713:


 Summary: Warning by unchecked conversion in 
TestTimelineWebServices 
 Key: YARN-4713
 URL: https://issues.apache.org/jira/browse/YARN-4713
 Project: Hadoop YARN
  Issue Type: Test
  Components: test
Reporter: Tsuyoshi Ozawa


[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java:[123,38]
 [unchecked] unchecked conversion

{code}
  Enumeration names = mock(Enumeration.class);
{code}
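For reference, the warning arises because Mockito's {{mock(Enumeration.class)}}
returns a raw {{Enumeration}} that is then assigned to a parameterized variable.
One common way to address it, shown here only as an illustration and not
necessarily the fix chosen for this JIRA, is to confine and suppress the
unchecked conversion on that single declaration:

{code}
// Illustrative only -- not necessarily the fix applied for YARN-4713.
@SuppressWarnings("unchecked")
Enumeration<String> names = mock(Enumeration.class);
when(names.hasMoreElements()).thenReturn(false);
{code}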





[jira] [Commented] (YARN-4708) Missing default mapper type in TimelineServer performance test tool usage

2016-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156276#comment-15156276
 ] 

Hudson commented on YARN-4708:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9336 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9336/])
YARN-4708. Missing default mapper type in TimelineServer performance (ozawa: 
rev b68901d7dde9cb48545fcf0b94f2ac266b909a5d)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/TimelineServicePerformance.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md


> Missing default mapper type in TimelineServer performance test tool usage
> -
>
> Key: YARN-4708
> URL: https://issues.apache.org/jira/browse/YARN-4708
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-4708.01.patch
>
>
> The TimelineServer performance test tool uses SimpleEntityWriter as the 
> default mapper. This can be stated explicitly in the usage text of the tool.





[jira] [Commented] (YARN-4708) Missing default mapper type in TimelineServer performance test tool usage

2016-02-21 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156267#comment-15156267
 ] 

Tsuyoshi Ozawa commented on YARN-4708:
--

+1, checking this in.

> Missing default mapper type in TimelineServer performance test tool usage
> -
>
> Key: YARN-4708
> URL: https://issues.apache.org/jira/browse/YARN-4708
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-4708.01.patch
>
>
> The TimelineServer performance test tool uses SimpleEntityWriter as the 
> default mapper. This can be stated explicitly in the usage text of the tool.





[jira] [Commented] (YARN-4708) Missing default mapper type in TimelineServer performance test tool usage

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156237#comment-15156237
 ] 

Hadoop QA commented on YARN-4708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-yarn-site in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 5s {color} 
| {color:red} hadoop-mapreduce-client-jobclient in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hadoop-yarn-site in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 37s 

[jira] [Commented] (YARN-4484) Available Resource calculation for a queue is not correct when used with labels

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156235#comment-15156235
 ] 

Hadoop QA commented on YARN-4484:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 0 new + 11 unchanged - 2 fixed = 11 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 11s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 163m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-4700) ATS storage has one extra record each time the RM got restarted

2016-02-21 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156208#comment-15156208
 ] 

Li Lu commented on YARN-4700:
-

[~Naganarasimha] sure, please go ahead and take it. Thanks! 

> ATS storage has one extra record each time the RM got restarted
> ---
>
> Key: YARN-4700
> URL: https://issues.apache.org/jira/browse/YARN-4700
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Li Lu
>Assignee: Naganarasimha G R
>
> When testing the new web UI for ATS v2, I noticed that we create one extra 
> record for each finished application (still held in the RM state store) each 
> time the RM is restarted. It's quite possible that we add the cluster start 
> timestamp into the default cluster id, so on each restart we create a new 
> record for the same application (the cluster id is part of the row key). We 
> need to fix this behavior, probably by having a better default cluster id.





[jira] [Commented] (YARN-4648) Move preemption related tests from TestFairScheduler to TestFairSchedulerPreemption

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156206#comment-15156206
 ] 

Hadoop QA commented on YARN-4648:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 42s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 37s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 162m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1270/YARN-4648.03.patch |
| JIRA Issue | YARN-4648 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| 

[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156200#comment-15156200
 ] 

Hadoop QA commented on YARN-4710:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 47s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 153m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 

[jira] [Created] (YARN-4712) CPU Usage Metric is not captured properly in YARN-2928

2016-02-21 Thread Naganarasimha G R (JIRA)
Naganarasimha G R created YARN-4712:
---

 Summary: CPU Usage Metric is not captured properly in YARN-2928
 Key: YARN-4712
 URL: https://issues.apache.org/jira/browse/YARN-4712
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R


There are two issues with CPU usage collection:
* Many times the CPU usage obtained from {{pTree.getCpuUsagePercent()}} is 
ResourceCalculatorProcessTree.UNAVAILABLE (i.e. -1), but ContainersMonitor still 
does the calculation {{cpuUsageTotalCoresPercentage = cpuUsagePercentPerCore 
/ resourceCalculatorPlugin.getNumProcessors()}}, because of which the UNAVAILABLE 
check in {{NMTimelinePublisher.reportContainerResourceUsage}} is never hit. 
Proper checks need to be added.
* {{EntityColumnPrefix.METRIC}} always uses LongConverter, but ContainersMonitor 
publishes decimal values for the CPU usage.
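A minimal sketch of the kind of guard suggested by the first point (illustrative
names only, not the actual ContainersMonitor/NMTimelinePublisher code): check for
the UNAVAILABLE sentinel before deriving and publishing the per-core percentage.

{code}
// Illustrative sketch only -- not the actual YARN code.
float cpuUsagePercentPerCore = pTree.getCpuUsagePercent();
if (cpuUsagePercentPerCore != ResourceCalculatorProcessTree.UNAVAILABLE) {
  float cpuUsageTotalCoresPercentage =
      cpuUsagePercentPerCore / resourceCalculatorPlugin.getNumProcessors();
  // Publish only when a real sample is available, so the UNAVAILABLE (-1)
  // sentinel never reaches the timeline as a bogus metric value.
  reportCpuMetric(containerId, cpuUsageTotalCoresPercentage); // hypothetical helper
}
{code}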





[jira] [Commented] (YARN-4709) Exception when option to fetch all log files is specified while using yarn logs -am command and incorrect JSON produced for containerLogFiles

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156153#comment-15156153
 ] 

Hadoop QA commented on YARN-4709:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 1s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 26s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 55s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK 

[jira] [Updated] (YARN-4484) Available Resource calculation for a queue is not correct when used with labels

2016-02-21 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4484:
--
Attachment: 0002-YARN-4484.patch

Thank you [~leftnoteasy] for the comments. Uploading a new patch addressing the 
same.
[~bibinchundatt], could you please help to explain the case: is this the case 
where the label-queue mapping is not available but labels are added in the 
cluster (and a few nodes are configured with this label)?
In this case, the  nodes will show correct metrics. Other nodes will be 
there, but the queue metrics won't show the same. We have a couple of JIRAs 
(YARN-4634) which handle some more validation for queue-label mapping cases. 
With those, we can get the above-mentioned behavior. Please help to correct me 
if I am wrong.

> Available Resource calculation for a queue is not correct when used with 
> labels
> ---
>
> Key: YARN-4484
> URL: https://issues.apache.org/jira/browse/YARN-4484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4484.patch, 0002-YARN-4484.patch
>
>
> To calculate the available resource for a queue, we have to get the total 
> resource allocated across all labels in the queue and compare it to its usage. 
> Also address the comments given in 
> [YARN-4304-comments|https://issues.apache.org/jira/browse/YARN-4304?focusedCommentId=15064874=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15064874
>  ] by [~leftnoteasy] for the same.
> ClusterMetrics related issues will also get handled once we fix this.
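Loosely, the calculation the description points at can be sketched as below
(pseudo-code with illustrative names, not the actual CapacityScheduler
implementation): aggregate guaranteed-minus-used resources over every label the
queue can access, rather than only the default partition.

{code}
// Pseudo-code sketch only -- names are illustrative.
Resource available = Resources.createResource(0, 0);
for (String label : queueAccessibleNodeLabels) {
  Resource totalForLabel = labelManager.getResourceByLabel(label, clusterResource);
  // Guaranteed share of this queue for the given label (hypothetical helper).
  Resource guaranteedForLabel =
      Resources.multiply(totalForLabel, capacityByLabel(label));
  Resource usedForLabel = queueUsage.getUsed(label);
  Resources.addTo(available, Resources.subtract(guaranteedForLabel, usedForLabel));
}
{code}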





[jira] [Commented] (YARN-4700) ATS storage has one extra record each time the RM got restarted

2016-02-21 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156126#comment-15156126
 ] 

Naganarasimha G R commented on YARN-4700:
-

Hi [~gtCarrera9], I thought of taking this issue up; please reassign if you 
have already started working on this!
I think we should consider making the default clusterID a static constant like 
"mycluster" / "yarncluster" ... thoughts?
cc [~sjlee0]
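For illustration only (hypothetical names, not the actual configuration code),
the difference between the two defaults being discussed:

{code}
// Illustrative only -- hypothetical names.
// A timestamp-based default yields a different cluster id (and row-key prefix)
// after every RM restart, producing the extra records described below:
String timestampedClusterId = "yarn-cluster_" + rmStartTimeMillis;

// A static default, as suggested above, keeps row keys stable across restarts:
String staticClusterId = "yarncluster";
{code}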

> ATS storage has one extra record each time the RM got restarted
> ---
>
> Key: YARN-4700
> URL: https://issues.apache.org/jira/browse/YARN-4700
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Li Lu
>Assignee: Naganarasimha G R
>
> When testing the new web UI for ATS v2, I noticed that we create one extra 
> record for each finished application (still held in the RM state store) each 
> time the RM is restarted. It's quite possible that we add the cluster start 
> timestamp into the default cluster id, so on each restart we create a new 
> record for the same application (the cluster id is part of the row key). We 
> need to fix this behavior, probably by having a better default cluster id.





[jira] [Assigned] (YARN-4700) ATS storage has one extra record each time the RM got restarted

2016-02-21 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-4700:
---

Assignee: Naganarasimha G R

> ATS storage has one extra record each time the RM got restarted
> ---
>
> Key: YARN-4700
> URL: https://issues.apache.org/jira/browse/YARN-4700
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Li Lu
>Assignee: Naganarasimha G R
>
> When testing the new web UI for ATS v2, I noticed that we create one extra 
> record for each finished application (still held in the RM state store) each 
> time the RM is restarted. It's quite possible that we add the cluster start 
> timestamp into the default cluster id, so on each restart we create a new 
> record for the same application (the cluster id is part of the row key). We 
> need to fix this behavior, probably by having a better default cluster id.





[jira] [Created] (YARN-4711) NPE in NMTimelinePublisher

2016-02-21 Thread Naganarasimha G R (JIRA)
Naganarasimha G R created YARN-4711:
---

 Summary: NPE in NMTimelinePublisher
 Key: YARN-4711
 URL: https://issues.apache.org/jira/browse/YARN-4711
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
Priority: Critical


While testing the latest YARN-2928 branch, I came across an NPE which shuts down 
the NM:
{code}
2016-02-21 23:19:54,078 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
Error in dispatcher thread
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.nodemanager.timelineservice.NMTimelinePublisher$ContainerEventHandler.handle(NMTimelinePublisher.java:306)
at 
org.apache.hadoop.yarn.server.nodemanager.timelineservice.NMTimelinePublisher$ContainerEventHandler.handle(NMTimelinePublisher.java:296)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
at java.lang.Thread.run(Thread.java:745)
{code}
Seems to be a race condition ...
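One possible shape of a guard (a hedged sketch with hypothetical names; the
actual fix depends on what is null at NMTimelinePublisher.java:306): tolerate
events that refer to a container that has already been removed, instead of
letting the NPE escape and kill the dispatcher thread.

{code}
// Illustrative sketch only -- names are hypothetical, not the actual
// NMTimelinePublisher code.
@Override
public void handle(ContainerEvent event) {
  Container container = nmContext.getContainers().get(event.getContainerID());
  if (container == null) {
    LOG.warn("Container " + event.getContainerID()
        + " no longer exists; dropping timeline event " + event.getType());
    return;
  }
  // ... existing publishing logic ...
}
{code}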





[jira] [Commented] (YARN-4706) UI Hosting Configuration in TimelineServer doc is broken

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156117#comment-15156117
 ] 

Hadoop QA commented on YARN-4706:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 56s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788861/YARN-4706.01.patch |
| JIRA Issue | YARN-4706 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux bf3289b2be13 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d5abd29 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10593/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> UI Hosting Configuration in TimelineServer doc is broken
> 
>
> Key: YARN-4706
> URL: https://issues.apache.org/jira/browse/YARN-4706
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Critical
>  Labels: newbie
> Attachments: YARN-4706.01.patch
>
>
> The table of UI hosting configuration is broken.
> https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/TimelineServer.html#UI_Hosting_Configuration





[jira] [Commented] (YARN-4707) Remove the extra char (>) from the SecureContainer.md

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156110#comment-15156110
 ] 

Hadoop QA commented on YARN-4707:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 16s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1275/YARN-4707.patch |
| JIRA Issue | YARN-4707 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux af671091f548 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d5abd29 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10595/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove the extra char (>) from the SecureContainer.md
> -
>
> Key: YARN-4707
> URL: https://issues.apache.org/jira/browse/YARN-4707
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: YARN-4707.patch
>
>
> Section: Linux Secure Container Executor
> It uses an external program called the *{color:red}container-executor>{color}* 
> to launch the container.
> I think we can remove the ">" here.
>  
> Reference:
> https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/SecureContainer.html





[jira] [Commented] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15156096#comment-15156096
 ] 

Hadoop QA commented on YARN-4517:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 54s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 37s 
{color} | {color:red} Patch generated 96 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 2m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788903/YARN-4517-YARN-3368.01.patch
 |
| JIRA Issue | YARN-4517 |
| Optional Tests |  asflicense  |
| uname | Linux e920ae3f2186 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 37455e7 |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/10591/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10591/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4709) Exception when option to fetch all log files is specified while using yarn logs -am command and incorrect JSON produced for containerLogFiles

2016-02-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155997#comment-15155997
 ] 

Varun Saxena commented on YARN-4709:


cc [~vvasudev].
Tagging [~leftnoteasy] too, as this JIRA has implications for YARN-4517 (which is 
related to the new Web UI implementation). 

> Exception when option to fetch all log files is specified while using yarn 
> logs -am command and incorrect JSON produced for containerLogFiles
> -
>
> Key: YARN-4709
> URL: https://issues.apache.org/jira/browse/YARN-4709
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4709.01.patch
>
>
> Following exception is thrown when we run below command.
> {panel}
> root@varun-Inspiron-5558:/opt1/hadoop3/bin# ./yarn logs -applicationId 
> application_1455999168135_0002 -am ALL -logFiles ALL
> Container: container_e31_1455999168135_0002_01_01
> ===
> {color:red}LogType:syslogstderrstdout
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> java.lang.Exception: Cannot find this log on the local disk.
> End of LogType:syslogstderrstdout{color}
> LogType:syslog
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> 2016-02-21 01:44:49,565 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1455999168135_0002_01
> 2016-02-21 01:44:49,914 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
> /
> {panel}
> This is because we annotate containerLogFiles list with XmlElementWrapper 
> which generates XML output as under. And when we read this XML at client 
> side, reading the value associated with containerLogFiles also leads to one 
> value being syslogstderrstdout because both parent and child tags are same. 
> This leads to the exception. 
> {noformat}
> <containerLogFiles>
>   <containerLogFiles>syslog</containerLogFiles>
>   <containerLogFiles>stderr</containerLogFiles>
>   <containerLogFiles>stdout</containerLogFiles>
> </containerLogFiles>
> {noformat}
> Moreover, as we use XMLElementWrapper, the JSON generated is as under. This 
> JSON cannot be properly parsed by JSON parser(as a list). This is because 
> child containerLogsFiles entries are treated as a key-value pair(map) and 
> hence only last entry i.e. stdout is picked up. This was found while working 
> on YARN-4517. This makes output unusable. 
> This will be an issue for 2 REST endpoints i.e. {{/ws/v1/node/containers}} 
> and {{/ws/v1/node/containers/\{\{containerId\}\}}}
> {noformat}
>   "containerLogFiles":[
> {
>   "containerLogFiles":"syslog",
>   "containerLogFiles":"stderr",
>   "containerLogFiles":"stdout"
> }
>   ]
> {noformat}
> Ideally the JSON output should be as under.
> {noformat}
> "containerLogFiles":["syslog","stderr","stdout"]
> {noformat}
> We can indicate in the JAXB context to ignore the outer wrapper while 
> marshalling to JSON. But this can only be done at class level. If we do so 
> for ContainerInfo, it would break backward compatibility.
> Hence, to fix it we can remove XmlElementWrapper annotation for 
> containerLogFiles list.
> Another solution would be to wrap the list inside another class.
> But going with former as of now as we do not specify XmlElementWrapper for 
> lists at most of the places in our code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4709) Exception when option to fetch all log files is specified while using yarn logs -am command and incorrect JSON produced for containerLogFiles

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155994#comment-15155994
 ] 

Hadoop QA commented on YARN-4709:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 58s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 55s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 46s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 17s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem 

[jira] [Updated] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4517:
---
Attachment: (was: YARN-4517-YARN-3368.01.patch)

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4517:
---
Attachment: YARN-4517-YARN-3368.01.patch

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155987#comment-15155987
 ] 

Varun Saxena commented on YARN-4517:


[~leftnoteasy], [~jianhe], kindly review.
I have attached the screenshots too.

*The patch does the following.*
# Implements the RM nodes page.
# Implements all NM pages, including the node info page, apps page, single app page 
listing containers for the app, containers page, single container page and 
container logs page. These give exactly the same information as the current web UI.
# Added natural sort for sorting application IDs (see the sketch after this list).
# Added a global error handler to display 404 or some other error page (a basic text 
page, which can be improved later if required). Custom error pages like 404 can be 
added later. We can also make other decisions based on the error code (say, 
retry).
# Made the cluster overview page the home page, i.e. we will no longer see an 
empty page with the top-level menu bar on accessing {{http://localhost:4200}}. 
Should cluster overview, instead of queues, be the leftmost tab?
# Added donut graphs on the Node Information page to display node resource usage. 
Other graphs we can discuss and add later.
# Also added handling for the case where the server returns no apps or containers. 
For this, I am basically creating a dummy response from the serializer. This might 
not be the best way to handle it in Ember, but I could not come up with anything 
better so far.
# Made the tabs in the top-level menu active based on the tab being accessed.
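
For clarity, a small standalone Java illustration of what natural sort of 
application IDs means here (the UI itself does this in Ember/JavaScript; the class 
and helper names below are made up): compare the numeric parts of the ID 
numerically instead of comparing the whole string lexicographically.
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical illustration only; the real UI sorts in Ember/JavaScript.
public class AppIdNaturalSortSketch {

  // Compare YARN application IDs by their numeric parts (cluster timestamp,
  // then sequence number) instead of plain lexicographic string order.
  static final Comparator<String> APP_ID_ORDER = (a, b) -> {
    int byCluster = Long.compare(clusterTimestamp(a), clusterTimestamp(b));
    return byCluster != 0 ? byCluster
        : Long.compare(sequenceNumber(a), sequenceNumber(b));
  };

  static long clusterTimestamp(String appId) {
    return Long.parseLong(appId.split("_")[1]);   // e.g. 1449458968698
  }

  static long sequenceNumber(String appId) {
    return Long.parseLong(appId.split("_")[2]);   // e.g. "0011" -> 11
  }

  public static void main(String[] args) {
    List<String> ids = new ArrayList<>(Arrays.asList(
        "application_1449458968698_10000",
        "application_1449458968698_9999",
        "application_1449458968698_0002"));
    ids.sort(APP_ID_ORDER);   // 0002, 9999, 10000; lexicographic order would put 10000 before 9999
    System.out.println(ids);
  }
}
{code}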

*Open issues/points:*
# The JSON coming from the NM for the containers and single-container endpoints is 
incorrect. Because of this, only one log file is shown in the logs link of the 
containers page in the UI. This is because jQuery's JSON parser only picks up the 
last log file value. I have raised YARN-4709 to track this. After that goes in, the 
code here will have to be changed to display links to all 3 log files.
# The heading (which shows the NM IP and port, i.e. the NM ID) on top of the 
left-hand side menu overflows out of the panel for certain browser dimensions. If I 
insert a space between host and port, it works fine, but the space doesn't look 
good. Will have to explore a bit on how to handle it.
# We can probably add some graphs on the app page to capture the container 
lifecycle by returning timestamps of events like localization, launching, etc. 
This can be done later.
# Haven't added any tests so far. Will have to explore how to add them. Will 
probably do that after the first round of review.

The ASF license warnings are due to missing Apache license headers. I have added 
them to the new files I added. Should I add them to the other files too?

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155986#comment-15155986
 ] 

Hadoop QA commented on YARN-4517:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 49s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 32s 
{color} | {color:red} Patch generated 96 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12788900/YARN-4517-YARN-3368.01.patch
 |
| JIRA Issue | YARN-4517 |
| Optional Tests |  asflicense  |
| uname | Linux ba2e99b9610e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 37455e7 |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/10588/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10588/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4709) Exception when option to fetch all log files is specified while using yarn logs -am command and incorrect JSON produced for containerLogFiles

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4709:
---
Summary: Exception when option to fetch all log files is specified while 
using yarn logs -am command and incorrect JSON produced for containerLogFiles  
(was: Exception when option to fetch all log files is specified using yarn logs 
-am command and incorrect JSON produced for containerLogFiles)

> Exception when option to fetch all log files is specified while using yarn 
> logs -am command and incorrect JSON produced for containerLogFiles
> -
>
> Key: YARN-4709
> URL: https://issues.apache.org/jira/browse/YARN-4709
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4709.01.patch
>
>
> Following exception is thrown when we run below command.
> {panel}
> root@varun-Inspiron-5558:/opt1/hadoop3/bin# ./yarn logs -applicationId 
> application_1455999168135_0002 -am ALL -logFiles ALL
> Container: container_e31_1455999168135_0002_01_01
> ===
> {color:red}LogType:syslogstderrstdout
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> java.lang.Exception: Cannot find this log on the local disk.
> End of LogType:syslogstderrstdout{color}
> LogType:syslog
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> 2016-02-21 01:44:49,565 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1455999168135_0002_01
> 2016-02-21 01:44:49,914 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
> /
> {panel}
> This is because we annotate containerLogFiles list with XmlElementWrapper 
> which generates XML output as under. And when we read this XML at client 
> side, reading the value associated with containerLogFiles also leads to one 
> value being syslogstderrstdout because both parent and child tags are same. 
> This leads to the exception. 
> {noformat}
> <containerLogFiles>
>   <containerLogFiles>syslog</containerLogFiles>
>   <containerLogFiles>stderr</containerLogFiles>
>   <containerLogFiles>stdout</containerLogFiles>
> </containerLogFiles>
> {noformat}
> Moreover, as we use XMLElementWrapper, the JSON generated is as under. This 
> JSON cannot be properly parsed by JSON parser(as a list). This is because 
> child containerLogsFiles entries are treated as a key-value pair(map) and 
> hence only last entry i.e. stdout is picked up. This was found while working 
> on YARN-4517. This makes output unusable. 
> This will be an issue for 2 REST endpoints i.e. {{/ws/v1/node/containers}} 
> and {{/ws/v1/node/containers/\{\{containerId\}\}}}
> {noformat}
>   "containerLogFiles":[
> {
>   "containerLogFiles":"syslog",
>   "containerLogFiles":"stderr",
>   "containerLogFiles":"stdout"
> }
>   ]
> {noformat}
> Ideally the JSON output should be as under.
> {noformat}
> "containerLogFiles":["syslog","stderr","stdout"]
> {noformat}
> We can indicate in the JAXB context to ignore the outer wrapper while 
> marshalling to JSON. But this can only be done at class level. If we do so 
> for ContainerInfo, it would break backward compatibility.
> Hence, to fix it we can remove XmlElementWrapper annotation for 
> containerLogFiles list.
> Another solution would be to wrap the list inside another class.
> But going with former as of now as we do not specify XmlElementWrapper for 
> lists at most of the places in our code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4517:
---
Component/s: yarn

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4517:
---
Attachment: YARN-4517-YARN-3368.01.patch

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Varun Saxena
>  Labels: webui
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip, 
> YARN-4517-YARN-3368.01.patch
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4517) [YARN-3368] Add nodes page

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4517:
---
Attachment: (21-Feb-2016)yarn-ui-screenshots.zip

> [YARN-3368] Add nodes page
> --
>
> Key: YARN-4517
> URL: https://issues.apache.org/jira/browse/YARN-4517
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: (21-Feb-2016)yarn-ui-screenshots.zip
>
>
> We need nodes page added to next generation web UI, similar to existing 
> RM/nodes page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155970#comment-15155970
 ] 

Lin Yiqun commented on YARN-4710:
-

In my opinion, this reserved record only needs to be printed when a container is 
successfully assigned and the other concrete info is printed.

> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> --
>
> Key: YARN-4710
> URL: https://issues.apache.org/jira/browse/YARN-4710
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>Priority: Minor
> Attachments: YARN-4710.001.patch, yarn-debug.log
>
>
> I found lots of unimportant log records for container assignment when I was 
> preparing to debug a container assignment problem. There are too many 
> records like this in yarn-resourcemanager.log, and it is difficult for me to 
> directly find the important info.
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,971 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,976 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,981 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,986 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,991 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,996 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,001 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,007 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,012 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,017 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,022 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,027 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,032 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,038 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,050 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,057 DEBUG
> {code}
> The reason for so many records is that this info is always printed first during 
> container assignment, whether the assignment succeeds or fails.
> You can see the complete YARN log in the attached log file and see how many 
> records there are.
> In addition, so much of this logging will slow down the container assignment 
> process. Maybe we should change this log level to another level, like 
> {{trace}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated YARN-4710:

Attachment: YARN-4710.001.patch

Attached an initial patch. The patch changes this log level from {{DEBUG}} to 
{{TRACE}}. Kindly review, thanks.
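
For reference, a rough standalone sketch of the kind of change being proposed (this 
is not the committed patch; the method name and surrounding code are assumptions): 
guard the per-node message with a TRACE-level check so scheduler DEBUG runs are no 
longer flooded.
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Rough sketch only, not the actual patch; names and message are assumptions.
public class AssignContainerLoggingSketch {
  private static final Log LOG =
      LogFactory.getLog(AssignContainerLoggingSketch.class);

  // Stand-in for the statement in FSAppAttempt#assignContainer: move the
  // per-node "Node offered to app" message behind a TRACE-level guard so it
  // no longer floods logs when DEBUG is enabled for the scheduler.
  void logNodeOffer(String appName, boolean reserved) {
    if (LOG.isTraceEnabled()) {
      LOG.trace("Node offered to app: " + appName + " reserved: " + reserved);
    }
  }
}
{code}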

> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> --
>
> Key: YARN-4710
> URL: https://issues.apache.org/jira/browse/YARN-4710
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>Priority: Minor
> Attachments: YARN-4710.001.patch, yarn-debug.log
>
>
> I found lots of unimportant log records for container assignment when I was 
> preparing to debug a container assignment problem. There are too many 
> records like this in yarn-resourcemanager.log, and it is difficult for me to 
> directly find the important info.
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,971 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,976 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,981 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,986 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,991 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,996 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,001 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,007 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,012 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,017 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,022 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,027 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,032 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,038 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,050 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,057 DEBUG
> {code}
> The reason for so many records is that this info is always printed first during 
> container assignment, whether the assignment succeeds or fails.
> You can see the complete YARN log in the attached log file and see how many 
> records there are.
> In addition, so much of this logging will slow down the container assignment 
> process. Maybe we should change this log level to another level, like 
> {{trace}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4709) Exception when option to fetch all log files is specified using yarn logs -am command and incorrect JSON produced for containerLogFiles

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4709:
---
Summary: Exception when option to fetch all log files is specified using 
yarn logs -am command and incorrect JSON produced for containerLogFiles  (was: 
Exception when option to fetch all log files is specified using yarn logs -am 
command and unusable JSON produced for containerLogFiles)

> Exception when option to fetch all log files is specified using yarn logs -am 
> command and incorrect JSON produced for containerLogFiles
> ---
>
> Key: YARN-4709
> URL: https://issues.apache.org/jira/browse/YARN-4709
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4709.01.patch
>
>
> Following exception is thrown when we run below command.
> {panel}
> root@varun-Inspiron-5558:/opt1/hadoop3/bin# ./yarn logs -applicationId 
> application_1455999168135_0002 -am ALL -logFiles ALL
> Container: container_e31_1455999168135_0002_01_01
> ===
> {color:red}LogType:syslogstderrstdout
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> java.lang.Exception: Cannot find this log on the local disk.
> End of LogType:syslogstderrstdout{color}
> LogType:syslog
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> 2016-02-21 01:44:49,565 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1455999168135_0002_01
> 2016-02-21 01:44:49,914 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
> /
> {panel}
> This is because we annotate containerLogFiles list with XmlElementWrapper 
> which generates XML output as under. And when we read this XML at client 
> side, reading the value associated with containerLogFiles also leads to one 
> value being syslogstderrstdout because both parent and child tags are same. 
> This leads to the exception. 
> {noformat}
> <containerLogFiles>
>   <containerLogFiles>syslog</containerLogFiles>
>   <containerLogFiles>stderr</containerLogFiles>
>   <containerLogFiles>stdout</containerLogFiles>
> </containerLogFiles>
> {noformat}
> Moreover, as we use XMLElementWrapper, the JSON generated is as under. This 
> JSON cannot be properly parsed by JSON parser(as a list). This is because 
> child containerLogsFiles entries are treated as a key-value pair(map) and 
> hence only last entry i.e. stdout is picked up. This was found while working 
> on YARN-4517. This makes output unusable. 
> This will be an issue for 2 REST endpoints i.e. {{/ws/v1/node/containers}} 
> and {{/ws/v1/node/containers/\{\{containerId\}\}}}
> {noformat}
>   "containerLogFiles":[
> {
>   "containerLogFiles":"syslog",
>   "containerLogFiles":"stderr",
>   "containerLogFiles":"stdout"
> }
>   ]
> {noformat}
> Ideally the JSON output should be as under.
> {noformat}
> "containerLogFiles":["syslog","stderr","stdout"]
> {noformat}
> We can indicate in the JAXB context to ignore the outer wrapper while 
> marshalling to JSON. But this can only be done at class level. If we do so 
> for ContainerInfo, it would break backward compatibility.
> Hence, to fix it we can remove XmlElementWrapper annotation for 
> containerLogFiles list.
> Another solution would be to wrap the list inside another class.
> But going with former as of now as we do not specify XmlElementWrapper for 
> lists at most of the places in our code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4709) Exception when option to fetch all log files is specified using yarn logs -am command and unusable JSON produced for containerLogFiles

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4709:
---
Summary: Exception when option to fetch all log files is specified using 
yarn logs -am command and unusable JSON produced for containerLogFiles  (was: 
Exception while fetching all log files using yarn logs -am command and unusable 
JSON produced for containerLogFiles)

> Exception when option to fetch all log files is specified using yarn logs -am 
> command and unusable JSON produced for containerLogFiles
> --
>
> Key: YARN-4709
> URL: https://issues.apache.org/jira/browse/YARN-4709
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4709.01.patch
>
>
> Following exception is thrown when we run below command.
> {panel}
> root@varun-Inspiron-5558:/opt1/hadoop3/bin# ./yarn logs -applicationId 
> application_1455999168135_0002 -am ALL -logFiles ALL
> Container: container_e31_1455999168135_0002_01_01
> ===
> {color:red}LogType:syslogstderrstdout
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> java.lang.Exception: Cannot find this log on the local disk.
> End of LogType:syslogstderrstdout{color}
> LogType:syslog
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> 2016-02-21 01:44:49,565 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1455999168135_0002_01
> 2016-02-21 01:44:49,914 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
> /
> {panel}
> This is because we annotate containerLogFiles list with XmlElementWrapper 
> which generates XML output as under. And when we read this XML at client 
> side, reading the value associated with containerLogFiles also leads to one 
> value being syslogstderrstdout because both parent and child tags are same. 
> This leads to the exception. 
> {noformat}
> <containerLogFiles>
>   <containerLogFiles>syslog</containerLogFiles>
>   <containerLogFiles>stderr</containerLogFiles>
>   <containerLogFiles>stdout</containerLogFiles>
> </containerLogFiles>
> {noformat}
> Moreover, as we use XMLElementWrapper, the JSON generated is as under. This 
> JSON cannot be properly parsed by JSON parser(as a list). This is because 
> child containerLogsFiles entries are treated as a key-value pair(map) and 
> hence only last entry i.e. stdout is picked up. This was found while working 
> on YARN-4517. This makes output unusable. 
> This will be an issue for 2 REST endpoints i.e. {{/ws/v1/node/containers}} 
> and {{/ws/v1/node/containers/\{\{containerId\}\}}}
> {noformat}
>   "containerLogFiles":[
> {
>   "containerLogFiles":"syslog",
>   "containerLogFiles":"stderr",
>   "containerLogFiles":"stdout"
> }
>   ]
> {noformat}
> Ideally the JSON output should be as under.
> {noformat}
> "containerLogFiles":["syslog","stderr","stdout"]
> {noformat}
> We can indicate in the JAXB context to ignore the outer wrapper while 
> marshalling to JSON. But this can only be done at class level. If we do so 
> for ContainerInfo, it would break backward compatibility.
> Hence, to fix it we can remove XmlElementWrapper annotation for 
> containerLogFiles list.
> Another solution would be to wrap the list inside another class.
> But going with former as of now as we do not specify XmlElementWrapper for 
> lists at most of the places in our code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated YARN-4710:

Attachment: yarn-debug.log

> Reduce logging application reserved debug info in FSAppAttempt#assignContainer
> --
>
> Key: YARN-4710
> URL: https://issues.apache.org/jira/browse/YARN-4710
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>Priority: Minor
> Attachments: yarn-debug.log
>
>
> I found lots of unimportant log records for container assignment when I was 
> preparing to debug a container assignment problem. There are too many 
> records like this in yarn-resourcemanager.log, and it is difficult for me to 
> directly find the important info.
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,971 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,976 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,981 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,986 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,991 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:52,996 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,001 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,007 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,012 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,017 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,022 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,027 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,032 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,038 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,050 DEBUG 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: 
> Node offered to app: application_1449458968698_0011 reserved: false
> 2016-02-21 16:31:53,057 DEBUG
> {code}
> The reason for so many records is that this info is always printed first during 
> container assignment, whether the assignment succeeds or fails.
> You can see the complete YARN log in the attached log file and see how many 
> records there are.
> In addition, so much of this logging will slow down the container assignment 
> process. Maybe we should change this log level to another level, like 
> {{trace}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4710) Reduce logging application reserved debug info in FSAppAttempt#assignContainer

2016-02-21 Thread Lin Yiqun (JIRA)
Lin Yiqun created YARN-4710:
---

 Summary: Reduce logging application reserved debug info in 
FSAppAttempt#assignContainer
 Key: YARN-4710
 URL: https://issues.apache.org/jira/browse/YARN-4710
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Affects Versions: 2.7.1
Reporter: Lin Yiqun
Assignee: Lin Yiqun
Priority: Minor


I found lots of unimportant log records for container assignment when I was 
preparing to debug a container assignment problem. There are too many records like 
this in yarn-resourcemanager.log, and it is difficult for me to directly find the 
important info.
{code}
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,971 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,976 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,981 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,986 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,991 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:52,996 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,001 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,007 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,012 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,017 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,022 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,027 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,032 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,038 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,050 DEBUG 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Node 
offered to app: application_1449458968698_0011 reserved: false
2016-02-21 16:31:53,057 DEBUG
{code}
The reason for so many records is that this info is always printed first during 
container assignment, whether the assignment succeeds or fails.
You can see the complete YARN log in the attached log file and see how many records 
there are.
In addition, so much of this logging will slow down the container assignment 
process. Maybe we should change this log level to another level, like 
{{trace}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4709) Exception while fetching all log files using yarn logs -am command and unusable JSON produced for containerLogFiles

2016-02-21 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4709:
---
Attachment: YARN-4709.01.patch

> Exception while fetching all log files using yarn logs -am command and 
> unusable JSON produced for containerLogFiles
> ---
>
> Key: YARN-4709
> URL: https://issues.apache.org/jira/browse/YARN-4709
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-4709.01.patch
>
>
> Following exception is thrown when we run below command.
> {panel}
> root@varun-Inspiron-5558:/opt1/hadoop3/bin# ./yarn logs -applicationId 
> application_1455999168135_0002 -am ALL -logFiles ALL
> Container: container_e31_1455999168135_0002_01_01
> ===
> {color:red}LogType:syslogstderrstdout
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> java.lang.Exception: Cannot find this log on the local disk.
> End of LogType:syslogstderrstdout{color}
> LogType:syslog
> Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
> Log Contents:
> 2016-02-21 01:44:49,565 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1455999168135_0002_01
> 2016-02-21 01:44:49,914 INFO \[main\] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
> /
> {panel}
> This is because we annotate containerLogFiles list with XmlElementWrapper 
> which generates XML output as under. And when we read this XML at client 
> side, reading the value associated with containerLogFiles also leads to one 
> value being syslogstderrstdout because both parent and child tags are same. 
> This leads to the exception. 
> {noformat}
> <containerLogFiles>
>   <containerLogFiles>syslog</containerLogFiles>
>   <containerLogFiles>stderr</containerLogFiles>
>   <containerLogFiles>stdout</containerLogFiles>
> </containerLogFiles>
> {noformat}
> Moreover, as we use XMLElementWrapper, the JSON generated is as under. This 
> JSON cannot be properly parsed by JSON parser(as a list). This is because 
> child containerLogsFiles entries are treated as a key-value pair(map) and 
> hence only last entry i.e. stdout is picked up. This was found while working 
> on YARN-4517. This makes output unusable. 
> This will be an issue for 2 REST endpoints i.e. {{/ws/v1/node/containers}} 
> and {{/ws/v1/node/containers/\{\{containerId\}\}}}
> {noformat}
>   "containerLogFiles":[
> {
>   "containerLogFiles":"syslog",
>   "containerLogFiles":"stderr",
>   "containerLogFiles":"stdout"
> }
>   ]
> {noformat}
> Ideally the JSON output should be as under.
> {noformat}
> "containerLogFiles":["syslog","stderr","stdout"]
> {noformat}
> We can indicate in the JAXB context to ignore the outer wrapper while 
> marshalling to JSON. But this can only be done at class level. If we do so 
> for ContainerInfo, it would break backward compatibility.
> Hence, to fix it we can remove XmlElementWrapper annotation for 
> containerLogFiles list.
> Another solution would be to wrap the list inside another class.
> But going with former as of now as we do not specify XmlElementWrapper for 
> lists at most of the places in our code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4709) Exception while fetching all log files using yarn logs -am command and unusable JSON produced for containerLogFiles

2016-02-21 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-4709:
--

 Summary: Exception while fetching all log files using yarn logs 
-am command and unusable JSON produced for containerLogFiles
 Key: YARN-4709
 URL: https://issues.apache.org/jira/browse/YARN-4709
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Assignee: Varun Saxena


Following exception is thrown when we run below command.
{panel}
root@varun-Inspiron-5558:/opt1/hadoop3/bin# ./yarn logs -applicationId 
application_1455999168135_0002 -am ALL -logFiles ALL


Container: container_e31_1455999168135_0002_01_01
===
{color:red}LogType:syslogstderrstdout
Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
Log Contents:
java.lang.Exception: Cannot find this log on the local disk.
End of LogType:syslogstderrstdout{color}
LogType:syslog
Log Upload Time:Sun Feb 21 01:44:55 +0530 2016
Log Contents:
2016-02-21 01:44:49,565 INFO \[main\] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
application appattempt_1455999168135_0002_01
2016-02-21 01:44:49,914 INFO \[main\] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: 
/
{panel}

This is because we annotate containerLogFiles list with XmlElementWrapper which 
generates XML output as under. And when we read this XML at client side, 
reading the value associated with containerLogFiles also leads to one value 
being syslogstderrstdout because both parent and child tags are same. This 
leads to the exception. 
{noformat}
<containerLogFiles>
  <containerLogFiles>syslog</containerLogFiles>
  <containerLogFiles>stderr</containerLogFiles>
  <containerLogFiles>stdout</containerLogFiles>
</containerLogFiles>
{noformat}

Moreover, as we use XMLElementWrapper, the JSON generated is as under. This 
JSON cannot be properly parsed by JSON parser(as a list). This is because child 
containerLogsFiles entries are treated as a key-value pair(map) and hence only 
last entry i.e. stdout is picked up. This was found while working on YARN-4517. 
This makes output unusable. 
This will be an issue for 2 REST endpoints i.e. {{/ws/v1/node/containers}} and 
{{/ws/v1/node/containers/\{\{containerId\}\}}}
{noformat}
  "containerLogFiles":[
{
  "containerLogFiles":"syslog",
  "containerLogFiles":"stderr",
  "containerLogFiles":"stdout"
}
  ]
{noformat}

Ideally the JSON output should be as under.
{noformat}
"containerLogFiles":["syslog","stderr","stdout"]
{noformat}

We can indicate in the JAXB context to ignore the outer wrapper while 
marshalling to JSON. But this can only be done at class level. If we do so for 
ContainerInfo, it would break backward compatibility.
Hence, to fix it we can remove XmlElementWrapper annotation for 
containerLogFiles list.
Another solution would be to wrap the list inside another class.

But going with former as of now as we do not specify XmlElementWrapper for 
lists at most of the places in our code.
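
For illustration only, a minimal JAXB sketch of the idea (this is not the actual 
ContainerInfo class; the bean name and field initializer below are made up): with 
both {{@XmlElementWrapper}} and {{@XmlElement}} named containerLogFiles, the wrapper 
tag repeats the element tag, while dropping the wrapper leaves one flat element per 
log file, which should also marshal to a plain JSON array as described above.
{code}
import java.util.Arrays;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical stand-in for the NM's ContainerInfo REST bean, not the real class.
@XmlRootElement(name = "container")
@XmlAccessorType(XmlAccessType.FIELD)
public class ContainerInfoSketch {

  // Before the fix the list also carried a wrapper with the same name:
  //   @XmlElementWrapper(name = "containerLogFiles")
  //   @XmlElement(name = "containerLogFiles")
  // which nests <containerLogFiles> inside <containerLogFiles> and breaks the
  // JSON mapping described above. Dropping the wrapper leaves one flat
  // <containerLogFiles> element per log file.
  @XmlElement(name = "containerLogFiles")
  protected List<String> containerLogFiles =
      Arrays.asList("syslog", "stderr", "stdout");
}
{code}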



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4708) Missing default mapper type in TimelineServer performance test tool usage

2016-02-21 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15155948#comment-15155948
 ] 

Kai Sasaki commented on YARN-4708:
--

I also updated the YARN documentation and fixed some typos around the 
timelineserver performance tool.

> Missing default mapper type in TimelineServer performance test tool usage
> -
>
> Key: YARN-4708
> URL: https://issues.apache.org/jira/browse/YARN-4708
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-4708.01.patch
>
>
> The TimelineServer performance test tool uses SimpleEntityWriter as the default 
> mapper. This can be indicated explicitly in the usage message of the tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4708) Missing default mapper type in TimelineServer performance test tool usage

2016-02-21 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-4708:
-
Attachment: YARN-4708.01.patch

> Missing default mapper type in TimelineServer performance test tool usage
> -
>
> Key: YARN-4708
> URL: https://issues.apache.org/jira/browse/YARN-4708
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: timelineserver
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: YARN-4708.01.patch
>
>
> The TimelineServer performance test tool uses SimpleEntityWriter as the default 
> mapper. This can be indicated explicitly in the usage message of the tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4708) Missing default mapper type in TimelineServer performance test tool usage

2016-02-21 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-4708:


 Summary: Missing default mapper type in TimelineServer performance 
test tool usage
 Key: YARN-4708
 URL: https://issues.apache.org/jira/browse/YARN-4708
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: timelineserver
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor


The TimelineServer performance test tool uses SimpleEntityWriter as the default 
mapper. This can be indicated explicitly in the usage message of the tool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)