[jira] [Updated] (YARN-5097) NPE in Separator.joinEncoded()

2016-05-19 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5097:
-
Attachment: YARN-5097-YARN-2928.02.patch

Uploading patch v2 with unit tests added

> NPE in Separator.joinEncoded()
> --
>
> Key: YARN-5097
> URL: https://issues.apache.org/jira/browse/YARN-5097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5097-YARN-2928.01.patch, 
> YARN-5097-YARN-2928.02.patch
>
>
> Both in the RM log and the NM log, I see the following exception thrown. 
> First for RM,
> {noformat}
> 2016-05-16 14:19:29,930 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
> In the NM log, I see a similar exception:
> {noformat}
> 2016-05-16 14:54:23,116 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Commented] (YARN-5097) NPE in Separator.joinEncoded()

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292736#comment-15292736
 ] 

Hadoop QA commented on YARN-5097:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
51s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805127/YARN-5097-YARN-2928.01.patch
 |
| JIRA Issue | YARN-5097 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c8600f4f9cb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-19 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292696#comment-15292696
 ] 

Joep Rottinghuis commented on YARN-5109:


I have an almost-working prototype that combines the readResults and 
readResultsHavingCompoundColumnQualifiers methods. It also fixes a case where 
we do not properly decode spaces in a column name.
Aside from that, it removes the (false) assumption that if a prefix is null 
then the column qualifiers cannot be compound. Whether column qualifiers are 
compound is independent of whether the prefix is null.

It also removes the cross-dependency between Separator and TimelineStorageUtils.
I will upload the patch (in whatever state it is in) when I log off for tonight.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.
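As a self-contained illustration of the collision (this is not the actual storage code), the 8-byte big-endian encoding of the inverted timestamp quoted above really does contain the separator byte 0x3D ('='), so splitting the unencoded column name on "=" cuts the timestamp in half:

{code}
import java.nio.ByteBuffer;

public class TimestampSeparatorDemo {
  public static void main(String[] args) {
    // The inverted timestamp taken from the column name quoted above:
    // \x7F\xFF\xFE\xABDY=\x99 == 0x7FFFFEAB44593D99.
    long invertedTs = 0x7FFFFEAB44593D99L;
    byte[] raw = ByteBuffer.allocate(8).putLong(invertedTs).array();

    for (int i = 0; i < raw.length; i++) {
      if (raw[i] == (byte) '=') {
        // Offset 6 holds 0x3D, the same byte used as the separator, so a
        // parser that splits the column name on '=' misreads the timestamp.
        System.out.println("separator byte 0x3D found at offset " + i);
      }
    }
  }
}
{code}

A fix along the lines this issue describes would encode or escape the timestamp bytes before joining them into the column name, presumably the same way the string components are already run through the separator encoding.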






[jira] [Commented] (YARN-5024) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers random failure

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292678#comment-15292678
 ] 

Hadoop QA commented on YARN-5024:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 38s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805123/0006-YARN-5024.patch |
| JIRA Issue | YARN-5024 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 065fd81d9736 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21d2b90 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11583/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11583/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11583/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11583/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |



[jira] [Updated] (YARN-5097) NPE in Separator.joinEncoded()

2016-05-19 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5097:
-
Attachment: YARN-5097-YARN-2928.01.patch


Uploading patch v1. 

> NPE in Separator.joinEncoded()
> --
>
> Key: YARN-5097
> URL: https://issues.apache.org/jira/browse/YARN-5097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5097-YARN-2928.01.patch
>
>
> Both in the RM log and the NM log, I see the following exception thrown. 
> First for RM,
> {noformat}
> 2016-05-16 14:19:29,930 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
> In the NM log, I see a similar exception:
> {noformat}
> 2016-05-16 14:54:23,116 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Created] (YARN-5116) Failed to execute "yarn application"

2016-05-19 Thread Jun Gong (JIRA)
Jun Gong created YARN-5116:
--

 Summary: Failed to execute "yarn application"
 Key: YARN-5116
 URL: https://issues.apache.org/jira/browse/YARN-5116
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jun Gong
Assignee: Jun Gong


This is with the trunk code:
{code}
$ bin/yarn application -list
16/05/20 11:35:45 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.commons.cli.UnrecognizedOptionException: 
Unrecognized option: -list
at org.apache.commons.cli.Parser.processOption(Parser.java:363)
at org.apache.commons.cli.Parser.parse(Parser.java:199)
at org.apache.commons.cli.Parser.parse(Parser.java:85)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:172)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:90)
{code}

It is caused by the subcommand 'application' being stripped from the command args. 
The following command works:
{code}
$ bin/yarn application application -list
16/05/20 11:39:35 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Total number of applications (application-types: [] and states: [SUBMITTED, 
ACCEPTED, RUNNING]):0
Application-Id  Application-NameApplication-Type
  User   Queue   State Final-State  
   ProgressTracking-URL
{code}
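The stack trace and the workaround above suggest that ApplicationCLI registers its options only after it has matched the subcommand name in the first argument, so once the shell script strips "application", "-list" is no longer a registered option. A simplified commons-cli reproduction of that behavior (this is not the actual ApplicationCLI code) is:

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class SubcommandParseDemo {
  public static void main(String[] ignored) throws ParseException {
    // What the CLI now receives: the "application" token is already gone.
    String[] args = {"-list"};

    Options opts = new Options();
    // Register "-list" only when the subcommand token is present, mirroring
    // the observed behavior; with the token stripped, nothing is registered.
    if (args.length > 0 && "application".equals(args[0])) {
      opts.addOption("list", false, "List applications");
    }

    // Throws org.apache.commons.cli.UnrecognizedOptionException:
    // "Unrecognized option: -list", matching the failure above.
    CommandLine cli = new GnuParser().parse(opts, args);
    System.out.println("parsed -list? " + cli.hasOption("list"));
  }
}
{code}

Either restoring the subcommand token in the arguments handed to ApplicationCLI, or making the CLI tolerate its absence, would avoid the exception.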






[jira] [Commented] (YARN-5024) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers random failure

2016-05-19 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292632#comment-15292632
 ] 

Bibin A Chundatt commented on YARN-5024:


Uploaded a new patch that uses the default sleep.
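The patch itself is not reproduced here. As a general note, assertions on metrics such as memory-seconds that race with RM bookkeeping are usually made robust by polling for the value rather than relying on a fixed sleep; the framework-agnostic sketch below only illustrates that pattern (the helper name is made up):

{code}
import java.util.concurrent.TimeoutException;
import java.util.function.LongSupplier;

final class WaitUtil {
  /** Poll until the supplied metric becomes positive instead of using a fixed Thread.sleep(). */
  static long waitForPositive(LongSupplier metric, long timeoutMs, long intervalMs)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      long value = metric.getAsLong();
      if (value > 0) {
        return value;            // metric has been published; safe to assert on it
      }
      Thread.sleep(intervalMs);  // back off briefly and retry
    }
    throw new TimeoutException("metric never became positive within " + timeoutMs + " ms");
  }
}
{code}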

> TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers 
> random failure
> ---
>
> Key: YARN-5024
> URL: https://issues.apache.org/jira/browse/YARN-5024
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-5024.patch, 0002-YARN-5024.patch, 
> 0003-YARN-5024.patch, 0004-YARN-5024.patch, 0005-YARN-5024.patch, 
> 0006-YARN-5024.patch
>
>
> Random Testcase failure for 
> {{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}}
> {noformat}
> java.lang.AssertionError: Unexcpected MemorySeconds value 
> expected:<-1497214794931> but was:<1913>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:252)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (YARN-5068) Expose scheduler queue to application master

2016-05-19 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292630#comment-15292630
 ] 

Rohith Sharma K S commented on YARN-5068:
-

Backported to 2.8. Thanks!

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068-branch-2.1.patch, 
> YARN-5068.1.patch, YARN-5068.2.patch
>
>
> The AM needs to know the queue name in which it was launched.
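The patch contents are not shown in this thread. For context only, the queue an application was submitted to is already part of the RM's application report, so a client can look it up today via YarnClient; the sketch below illustrates that existing field and is not what this patch adds:

{code}
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class QueueLookup {
  public static void main(String[] args) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new YarnConfiguration());
    client.start();
    try {
      // Every ApplicationReport already carries the queue the app was submitted to.
      for (ApplicationReport report : client.getApplications()) {
        System.out.println(report.getApplicationId() + " -> queue " + report.getQueue());
      }
    } finally {
      client.stop();
    }
  }
}
{code}

What this JIRA adds, presumably, is a way for the AM to learn its queue without an extra round trip to the RM; the details are in the attached patches.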






[jira] [Updated] (YARN-5024) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers random failure

2016-05-19 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5024:
---
Attachment: 0006-YARN-5024.patch

> TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers 
> random failure
> ---
>
> Key: YARN-5024
> URL: https://issues.apache.org/jira/browse/YARN-5024
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-5024.patch, 0002-YARN-5024.patch, 
> 0003-YARN-5024.patch, 0004-YARN-5024.patch, 0005-YARN-5024.patch, 
> 0006-YARN-5024.patch
>
>
> Random Testcase failure for 
> {{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}}
> {noformat}
> java.lang.AssertionError: Unexcpected MemorySeconds value 
> expected:<-1497214794931> but was:<1913>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:252)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Updated] (YARN-5068) Expose scheduler queue to application master

2016-05-19 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5068:

Fix Version/s: (was: 2.9.0)
   2.8.0

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068-branch-2.1.patch, 
> YARN-5068.1.patch, YARN-5068.2.patch
>
>
> The AM needs to know the queue name in which it was launched.






[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292603#comment-15292603
 ] 

Hadoop QA commented on YARN-5112:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 2 new + 36 unchanged - 5 fixed = 38 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 41s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805111/YARN-5112.6.patch |
| JIRA Issue | YARN-5112 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6dc9b231bf27 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21d2b90 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11582/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11582/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11582/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Excessive log warnings on NM recovery
> -
>
>

[jira] [Updated] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5112:
--
Attachment: YARN-5112.6.patch

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.5.patch, YARN-5112.6.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in the NM store, the NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}
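The attached patches are not reproduced here. One common way to cut this kind of noise, sketched below purely for illustration (the class, method, and flag names are made up), is to emit the warning only the first time the condition is seen rather than once per recovered application:

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class RemoteLogDirCheck {
  private static final Log LOG = LogFactory.getLog(RemoteLogDirCheck.class);
  // Remember whether the permission warning has already been emitted once.
  private static final AtomicBoolean PERMISSION_WARNING_LOGGED = new AtomicBoolean(false);

  static void warnOnceOnBadPermissions(String dir, String expected, String found) {
    if (PERMISSION_WARNING_LOGGED.compareAndSet(false, true)) {
      LOG.warn("Remote Root Log Dir [" + dir + "] already exists, but with incorrect "
          + "permissions. Expected: [" + expected + "], Found: [" + found + "].");
    }
  }
}
{code}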






[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292561#comment-15292561
 ] 

Hadoop QA commented on YARN-5077:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 36s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 2s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804740/YARN-5077.003.patch |
| JIRA Issue | YARN-5077 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4ede70affbdb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 42c22f7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11581/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11581/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11581/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11581/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |



[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292523#comment-15292523
 ] 

Junping Du commented on YARN-5112:
--

bq. it's not straightforward to test the log message occurrence; given it's just 
a log message, I think no unit test is fine.
Sure. Then fixing only the checkstyle issue is fine with me.

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.5.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in the NM store, the NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}






[jira] [Commented] (YARN-4866) FairScheduler: AMs can consume all vcores leading to a livelock when using FAIR policy

2016-05-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292510#comment-15292510
 ] 

Yufei Gu commented on YARN-4866:


All test failures are unrelated.

> FairScheduler: AMs can consume all vcores leading to a livelock when using 
> FAIR policy
> --
>
> Key: YARN-4866
> URL: https://issues.apache.org/jira/browse/YARN-4866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-4866.001.patch, YARN-4866.002.patch, 
> YARN-4866.003.patch, YARN-4866.004.patch, YARN-4866.005.patch, 
> YARN-4866.006.patch
>
>
> The maxAMShare check uses the queue's policy to enforce its limit. When using the FAIR 
> policy, this considers only memory. If there are fewer vcores on the cluster, 
> the AMs can end up taking all the vcores, leading to a livelock.
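A minimal sketch of the direction the description points to (this is not the actual FairScheduler code; the helper and its inputs are hypothetical): when deciding whether another AM fits under maxAMShare, compare both the memory and the vcore dimensions instead of memory alone.

{code}
import org.apache.hadoop.yarn.api.records.Resource;

final class AmShareCheck {
  /**
   * Illustrative only: admit a new AM only if both memory AND vcores stay
   * within the queue's maxAMShare, instead of checking memory alone.
   */
  static boolean fitsAmShare(Resource currentAmUsage, Resource amDemand,
      Resource maxAMShare) {
    long memoryAfter = (long) currentAmUsage.getMemory() + amDemand.getMemory();
    int vcoresAfter = currentAmUsage.getVirtualCores() + amDemand.getVirtualCores();
    return memoryAfter <= maxAMShare.getMemory()
        && vcoresAfter <= maxAMShare.getVirtualCores();
  }
}
{code}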






[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292507#comment-15292507
 ] 

Hadoop QA commented on YARN-5018:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
59s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805099/YARN-5018-YARN-2928.002.patch
 |
| JIRA Issue | YARN-5018 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dfdbf71e134b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool 

[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292501#comment-15292501
 ] 

Hadoop QA commented on YARN-5112:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-5112 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805101/YARN-5112.5.patch |
| JIRA Issue | YARN-5112 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11579/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.5.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in the NM store, the NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}






[jira] [Updated] (YARN-5098) Yarn Application log Aggregation fails because NM cannot get correct HDFS delegation token

2016-05-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5098:
--
Attachment: YARN-5098.1.patch

> Yarn Application log Aggregation fails because NM cannot get correct HDFS 
> delegation token
> ---
>
> Key: YARN-5098
> URL: https://issues.apache.org/jira/browse/YARN-5098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-5098.1.patch
>
>
> Environment : HA cluster
> YARN application logs for a long-running application could not be gathered 
> because the NodeManager failed to talk to HDFS with the error below.
> {code}
> 2016-05-16 18:18:28,533 INFO  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just 
> finished : application_1463170334122_0002
> 2016-05-16 18:18:28,545 WARN  ipc.Client (Client.java:run(705)) - Exception 
> encountered while connecting to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 171 for hrt_qa) can't be found in cache
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:583)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:398)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:752)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:748)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:747)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:398)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1597)
> at org.apache.hadoop.ipc.Client.call(Client.java:1439)
> at org.apache.hadoop.ipc.Client.call(Client.java:1386)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:240)
> at com.sun.proxy.$Proxy83.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
> at com.sun.proxy.$Proxy84.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:1018)
> at org.apache.hadoop.fs.Hdfs.getServerDefaults(Hdfs.java:156)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:550)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:687)
> {code}






[jira] [Commented] (YARN-5097) NPE in Separator.joinEncoded()

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292498#comment-15292498
 ] 

Sangjin Lee commented on YARN-5097:
---

I suspect this is indeed related to YARN-5018. The entire context is not 
filled in until a little later, which causes the NPE on the first run of 
the app-level aggregation. We should test this again after YARN-5018 is fixed.
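Following that diagnosis, a defensive fix could have the app-level aggregation skip its write while the collector context is still incomplete. The sketch below only illustrates such a guard, assuming the application row key is built from the cluster, user, flow, and app ids; it is not taken from the patches attached elsewhere in this thread.

{code}
final class RowKeyGuard {
  /**
   * Illustrative only: if any row-key component is still null,
   * Separator.joinEncoded() throws the NPE shown above, so the aggregation
   * round should simply be skipped until the context is fully populated.
   */
  static boolean contextReadyForWrite(String clusterId, String userId,
      String flowName, String appId) {
    return clusterId != null && userId != null && flowName != null && appId != null;
  }
}
{code}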

> NPE in Separator.joinEncoded()
> --
>
> Key: YARN-5097
> URL: https://issues.apache.org/jira/browse/YARN-5097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> Both in the RM log and the NM log, I see the following exception thrown. 
> First for RM,
> {noformat}
> 2016-05-16 14:19:29,930 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
> In the NM log, I see a similar exception:
> {noformat}
> 2016-05-16 14:54:23,116 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292496#comment-15292496
 ] 

Hadoop QA commented on YARN-5113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 38s {color} | 
{color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 38s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 12 new + 
453 unchanged - 55 fixed = 465 total (was 508) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 17s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 118 new + 1169 unchanged - 104 fixed = 1287 total (was 1273) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 17s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 24 new + 585 unchanged - 7 fixed = 609 total (was 592) {color} 

[jira] [Updated] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5112:
--
Attachment: YARN-5112.5.patch

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.5.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4866) FairScheduler: AMs can consume all vcores leading to a livelock when using FAIR policy

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292479#comment-15292479
 ] 

Hadoop QA commented on YARN-4866:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 0 new + 212 unchanged - 1 fixed = 212 total (was 213) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 31s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805079/YARN-4866.006.patch |
| JIRA Issue | YARN-4866 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c5418cd73084 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 42c22f7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11574/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11574/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11574/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-19 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292478#comment-15292478
 ] 

Li Lu commented on YARN-5111:
-

Thanks for the info [~sjlee0]! Probably some wire-up issues in the collector 
code. [~Naganarasimha] please feel free to let me know if you need any help 
here. Thanks! 

> YARN container system metrics are not aggregated to application
> ---
>
> Key: YARN-5111
> URL: https://issues.apache.org/jira/browse/YARN-5111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> It appears that the container system metrics (CPU and memory) are not being 
> aggregated onto the application.
> I definitely see container system metrics when I query for YARN_CONTAINER. 
> However, there is no corresponding metrics on the parent application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-19 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5018:

Attachment: YARN-5018-YARN-2928.002.patch

New patch applies to the latest YARN-2928 branch. 

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch, 
> YARN-5018-YARN-2928.002.patch
>
>
> In app level collector, we launch the aggregation logic immediately after the 
> collector got started. However, at this time, important context data has yet 
> to be published to the container. Also, if the aggregation result is empty, 
> we do not need to publish them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5113:
--
Description: This JIRA focuses on the refactoring of classes related to 
Distributed Scheduling  (was: This JIRA focuses on the refactoring of classes 
related to YARN-2885.)

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-19 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Description: This JIRA focuses on the refactoring of classes related to 
YARN-2885.  (was: This JIRA focuses on the refactoring of classes related to 
YARN-2877.)

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch
>
>
> This JIRA focuses on the refactoring of classes related to YARN-2885.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292448#comment-15292448
 ] 

Hadoop QA commented on YARN-5112:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 3 new + 37 unchanged - 5 fixed = 40 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 32s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805083/YARN-5112.4.patch |
| JIRA Issue | YARN-5112 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6afb856bc630 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 42c22f7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11575/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11575/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11575/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Excessive log warnings on NM recovery
> -
>
>

[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292450#comment-15292450
 ] 

Arun Suresh commented on YARN-5113:
---

[~kkaranasos], the changes to {{MiniYarnCluster}} can be reverted.

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch
>
>
> This JIRA focuses on the refactoring of classes related to YARN-2877.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5113:
--
Summary: Refactoring and other clean-up for distributed scheduling  (was: 
Refactoring and other clean-up for distributed scheduling interceptor)

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch
>
>
> This JIRA focuses on the refactoring of classes related to YARN-2877.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292446#comment-15292446
 ] 

Sangjin Lee commented on YARN-5018:
---

I don't think it is a band-aid. I think it is more than acceptable to delay the 
execution and expect the initialization to be complete by then.

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch
>
>
> In app level collector, we launch the aggregation logic immediately after the 
> collector got started. However, at this time, important context data has yet 
> to be published to the container. Also, if the aggregation result is empty, 
> we do not need to publish them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292442#comment-15292442
 ] 

Sangjin Lee commented on YARN-5111:
---

I saw this with a sleep job that ran for 5 minutes. I also checked that there 
were a large number of values for CPU and memory at the container level. So 
that is probably not the cause.

> YARN container system metrics are not aggregated to application
> ---
>
> Key: YARN-5111
> URL: https://issues.apache.org/jira/browse/YARN-5111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> It appears that the container system metrics (CPU and memory) are not being 
> aggregated onto the application.
> I definitely see container system metrics when I query for YARN_CONTAINER. 
> However, there is no corresponding metrics on the parent application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5068) Expose scheduler queue to application master

2016-05-19 Thread Harish Jaiprakash (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292439#comment-15292439
 ] 

Harish Jaiprakash commented on YARN-5068:
-

Thanks Rohith. Is it possible to get this patch into the 2.8.0 release?

> Expose scheduler queue to application master
> 
>
> Key: YARN-5068
> URL: https://issues.apache.org/jira/browse/YARN-5068
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
> Fix For: 2.9.0
>
> Attachments: MAPREDUCE-6692.patch, YARN-5068-branch-2.1.patch, 
> YARN-5068.1.patch, YARN-5068.2.patch
>
>
> The AM needs to know the queue name in which it was launched.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292438#comment-15292438
 ] 

Hadoop QA commented on YARN-5018:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 4m 6s {color} 
| {color:red} Docker failed to build yetus/hadoop:cf2ee45. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805088/YARN-5018-YARN-2928.001.patch
 |
| JIRA Issue | YARN-5018 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11576/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch
>
>
> In app level collector, we launch the aggregation logic immediately after the 
> collector got started. However, at this time, important context data has yet 
> to be published to the container. Also, if the aggregation result is empty, 
> we do not need to publish them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4006) YARN ATS Alternate Kerberos HTTP Authentication Changes

2016-05-19 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292429#comment-15292429
 ] 

Larry McCay commented on YARN-4006:
---

Hi [~aw] - I am familiar with AltKerberosAuthenticationHandler, if that is what 
you mean.
This patch looks to be perfectly sane for enabling custom authentication 
handlers - AltKerberos obviously interrogates the User-Agent and muxes between 
Kerberos and the custom handler.

There isn't anything specific to AltKerberos about this patch, but such 
extensions should work fine.

What is the question on this patch - it seems rather simple?
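
For context, a minimal sketch of what such a custom AltKerberos-style handler 
can look like: browser requests are authenticated from a trusted header while 
non-browser clients fall through to Kerberos/SPNEGO in the parent class. The 
header name and trust model are assumptions, and the method signature and 
token constructor are recalled from the hadoop-auth API, so they may differ 
slightly; this is not part of the patch under review.
{code}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.security.authentication.client.AuthenticationException;
import org.apache.hadoop.security.authentication.server.AltKerberosAuthenticationHandler;
import org.apache.hadoop.security.authentication.server.AuthenticationToken;

public class HeaderAltKerberosHandler extends AltKerberosAuthenticationHandler {

  @Override
  public AuthenticationToken alternateAuthenticate(HttpServletRequest request,
      HttpServletResponse response) throws IOException, AuthenticationException {
    // Hypothetical header set by a trusted proxy in front of the timeline server.
    String user = request.getHeader("X-Trusted-Proxy-User");
    if (user == null || user.isEmpty()) {
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
          "Missing X-Trusted-Proxy-User header");
      return null;  // no token issued; the filter rejects the request
    }
    // Issue a token for the proxied user; getType() is the handler's own type.
    return new AuthenticationToken(user, user, getType());
  }
}
{code}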

> YARN ATS Alternate Kerberos HTTP Authentication Changes
> ---
>
> Key: YARN-4006
> URL: https://issues.apache.org/jira/browse/YARN-4006
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, timelineserver
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.5.1, 2.6.1, 2.8.0, 2.7.1, 2.7.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Blocker
> Attachments: YARN-4006-branch-trunk.patch, 
> YARN-4006-branch2.6.0.patch, sample-ats-alt-auth.patch
>
>
> When attempting to use The Hadoop Alternate Authentication Classes. They do 
> not exactly work with what was built with YARN-1935.
> I went ahead and made the following changes to support using a Custom 
> AltKerberos DelegationToken custom class.
> Changes to: TimelineAuthenticationFilterInitializer.class
> {code}
>String authType = filterConfig.get(AuthenticationFilter.AUTH_TYPE);
> LOG.info("AuthType Configured: "+authType);
> if (authType.equals(PseudoAuthenticationHandler.TYPE)) {
>   filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>   PseudoDelegationTokenAuthenticationHandler.class.getName());
> LOG.info("AuthType: PseudoDelegationTokenAuthenticationHandler");
> } else if (authType.equals(KerberosAuthenticationHandler.TYPE) || 
> (UserGroupInformation.isSecurityEnabled() && 
> conf.get("hadoop.security.authentication").equals(KerberosAuthenticationHandler.TYPE)))
>  {
>   if (!(authType.equals(KerberosAuthenticationHandler.TYPE))) {
> filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>   authType);
> LOG.info("AuthType: "+authType);
>   } else {
> filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>   KerberosDelegationTokenAuthenticationHandler.class.getName());
> LOG.info("AuthType: KerberosDelegationTokenAuthenticationHandler");
>   } 
>   // Resolve _HOST into bind address
>   String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
>   String principal =
>   filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
>   if (principal != null) {
> try {
>   principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
> } catch (IOException ex) {
>   throw new RuntimeException(
>   "Could not resolve Kerberos principal name: " + ex.toString(), 
> ex);
> }
> filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL,
> principal);
>   }
> }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling interceptor

2016-05-19 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Attachment: YARN-5113.002.patch

Attaching new version of the patch.

> Refactoring and other clean-up for distributed scheduling interceptor
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch
>
>
> This JIRA focuses on the refactoring of classes related to YARN-2877.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-19 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5018:

Attachment: YARN-5018-YARN-2928.001.patch

OK, let me do a first-aid style fix for this problem. The problem occurs because 
we tried to launch the first aggregation call immediately after the collector 
got started. However, some context information is not set until postPut, which 
causes the aggregation method to see null values in the context.

This fix is a "first-aid" style fix because I simply introduced a wait (15 
secs) between the start of a collector and the first aggregation call. A more 
comprehensive fix would introduce a callback from the collector manager to the 
collectors so that we can tell the collectors when all context data is ready. 
Let me know if you have a strong preference for that one. Thanks!
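
A minimal, self-contained sketch of the delayed-start idea: the 15-second value 
mirrors the wait mentioned above, and the class and field names are 
illustrative rather than taken from the attached patch.
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedAggregationStart {
  // Mirrors the 15-second wait described above; purely illustrative.
  private static final long EXEC_INTERVAL_SECS = 15;

  public static void main(String[] args) throws InterruptedException {
    ScheduledExecutorService appAggregationExecutor =
        Executors.newSingleThreadScheduledExecutor();
    Runnable aggregator = () -> System.out.println("running app-level aggregation");

    // With an initial delay of 0 the first run races with the context setup
    // done in postPut; delaying the first run by one interval sidesteps that.
    appAggregationExecutor.scheduleAtFixedRate(aggregator,
        EXEC_INTERVAL_SECS,   // initial delay before the first run
        EXEC_INTERVAL_SECS,   // period between runs
        TimeUnit.SECONDS);

    Thread.sleep(16_000);     // let one run happen, then shut down
    appAggregationExecutor.shutdownNow();
  }
}
{code}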

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch
>
>
> In app level collector, we launch the aggregation logic immediately after the 
> collector got started. However, at this time, important context data has yet 
> to be published to the container. Also, if the aggregation result is empty, 
> we do not need to publish them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling interceptor

2016-05-19 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Summary: Refactoring and other clean-up for distributed scheduling 
interceptor  (was: Refactoring and other clean-up for YARN-2885)

> Refactoring and other clean-up for distributed scheduling interceptor
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch
>
>
> This JIRA focuses on the refactoring of classes related to YARN-2877.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for YARN-2885

2016-05-19 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Summary: Refactoring and other clean-up for YARN-2885  (was: Refactoring 
and other clean-up for YARN-2877)

> Refactoring and other clean-up for YARN-2885
> 
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch
>
>
> This JIRA focuses on the refactoring of classes related to YARN-2877.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5050) Code cleanup for TestDistributedShell

2016-05-19 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292421#comment-15292421
 ] 

Li Lu commented on YARN-5050:
-

Sorry, I meant a +1 from Jenkins... I'm not trying to +1 my own patch...

> Code cleanup for TestDistributedShell
> -
>
> Key: YARN-5050
> URL: https://issues.apache.org/jira/browse/YARN-5050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5050-YARN-2928.001.patch, 
> YARN-5050-YARN-2928.002.patch
>
>
> We introduced some small errors after yesterday's rebase. Also, some timeout 
> settings for timeline v2 tests are deprecated since we introduced global time 
> out settings in YARN-4545. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-19 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292416#comment-15292416
 ] 

Li Lu commented on YARN-5111:
-

Here I think I forgot one thing, which could be the cause: I forgot to send the 
final aggregation data when we shut down the collector. So, for jobs running 
less than a few seconds, it is quite possible that the first aggregation call 
fails due to the NPE discussed in YARN-5018, and the second call never happens. 
We should enforce an update at the end of the AppLevelTimelineCollector's life 
cycle to get all required data.
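
A stand-alone sketch of forcing one final aggregation pass when the collector 
shuts down, so short-lived apps still get aggregated metrics between periodic 
runs; the names are illustrative and this is not the actual change.
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CollectorShutdownFlushDemo {
  private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();
  private final Runnable aggregator =
      () -> System.out.println("aggregating container metrics into the app");

  void start() {
    // Periodic app-level aggregation, first run delayed by one interval.
    executor.scheduleAtFixedRate(aggregator, 15, 15, TimeUnit.SECONDS);
  }

  void stop() throws InterruptedException {
    executor.shutdown();                        // stop the periodic runs
    executor.awaitTermination(5, TimeUnit.SECONDS);
    aggregator.run();                           // final, synchronous flush
  }

  public static void main(String[] args) throws InterruptedException {
    CollectorShutdownFlushDemo c = new CollectorShutdownFlushDemo();
    c.start();
    Thread.sleep(1000);                         // the app finishes quickly
    c.stop();                                   // metrics still published once
  }
}
{code}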

> YARN container system metrics are not aggregated to application
> ---
>
> Key: YARN-5111
> URL: https://issues.apache.org/jira/browse/YARN-5111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> It appears that the container system metrics (CPU and memory) are not being 
> aggregated onto the application.
> I definitely see container system metrics when I query for YARN_CONTAINER. 
> However, there is no corresponding metrics on the parent application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292414#comment-15292414
 ] 

Hadoop QA commented on YARN-1942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 34 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 9s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 2s 
{color} | {color:red} root: patch generated 110 new + 2955 unchanged - 33 fixed 
= 3065 total (was 2988) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
25s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 30s {color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 20s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 50s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 12s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 10s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 16s {color} 
| {color:red} hadoop-yarn-server-timeline-pluginstorage in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 15s {color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s {color} 
| {color:red} hadoop-mapreduce-client-core in the patch 

[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292408#comment-15292408
 ] 

Jian He commented on YARN-5112:
---

Fixing the checkstyle. It's not straightforward to test the log message 
occurrence; given it's just a log message, I think no unit test is fine.
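
One possible way to avoid repeating the warning for every recovered app is to 
emit it at most once per NM process; this is only an illustrative sketch of 
that idea, not what the attached patch does.
{code}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Logger;

public class OnceOnlyWarning {
  private static final Logger LOG = Logger.getLogger(OnceOnlyWarning.class.getName());
  private static final AtomicBoolean WARNED = new AtomicBoolean(false);

  static void verifyRemoteLogDir(boolean permissionsCorrect) {
    // compareAndSet ensures the warning is emitted at most once per process,
    // no matter how many recovered apps trigger the same check.
    if (!permissionsCorrect && WARNED.compareAndSet(false, true)) {
      LOG.warning("Remote Root Log Dir [/app-logs] already exists, but with "
          + "incorrect permissions. Expected: [rwxrwxrwt].");
    }
  }

  public static void main(String[] args) {
    for (int i = 0; i < 1000; i++) {   // NM recovering many apps
      verifyRemoteLogDir(false);       // the warning is logged only once
    }
  }
}
{code}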

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5050) Code cleanup for TestDistributedShell

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292411#comment-15292411
 ] 

Sangjin Lee commented on YARN-5050:
---

+1. I'll commit it shortly.

> Code cleanup for TestDistributedShell
> -
>
> Key: YARN-5050
> URL: https://issues.apache.org/jira/browse/YARN-5050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5050-YARN-2928.001.patch, 
> YARN-5050-YARN-2928.002.patch
>
>
> We introduced some small errors after yesterday's rebase. Also, some timeout 
> settings for timeline v2 tests are deprecated since we introduced global time 
> out settings in YARN-4545. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5050) Code cleanup for TestDistributedShell

2016-05-19 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292410#comment-15292410
 ] 

Li Lu commented on YARN-5050:
-

+1, finally! 

> Code cleanup for TestDistributedShell
> -
>
> Key: YARN-5050
> URL: https://issues.apache.org/jira/browse/YARN-5050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5050-YARN-2928.001.patch, 
> YARN-5050-YARN-2928.002.patch
>
>
> We introduced some small errors after yesterday's rebase. Also, some timeout 
> settings for timeline v2 tests are deprecated since we introduced global time 
> out settings in YARN-4545. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5050) Code cleanup for TestDistributedShell

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292407#comment-15292407
 ] 

Hadoop QA commented on YARN-5050:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
33s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 50s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 8s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805075/YARN-5050-YARN-2928.002.patch
 |
| JIRA Issue | YARN-5050 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ed4a9b37e42d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | 

[jira] [Commented] (YARN-5024) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers random failure

2016-05-19 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292405#comment-15292405
 ] 

Yufei Gu commented on YARN-5024:


Thanks [~bibinchundatt] for working on this.  Looks good to me. 

A nit: 
{code}
rm.waitForState(am1.getApplicationAttemptId(), RMAppAttemptState.RUNNING,
2 * 1000);
{code}
Is there a special reason to specify the timeout? If not, the default one 
should be good here.

> TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers 
> random failure
> ---
>
> Key: YARN-5024
> URL: https://issues.apache.org/jira/browse/YARN-5024
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-5024.patch, 0002-YARN-5024.patch, 
> 0003-YARN-5024.patch, 0004-YARN-5024.patch, 0005-YARN-5024.patch
>
>
> Random Testcase failure for 
> {{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}}
> {noformat}
> java.lang.AssertionError: Unexcpected MemorySeconds value 
> expected:<-1497214794931> but was:<1913>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:252)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Updated] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5112:
--
Attachment: YARN-5112.4.patch

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}






[jira] [Commented] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292399#comment-15292399
 ] 

Hadoop QA commented on YARN-4844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 65 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 13s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 21s 
{color} | {color:red} root generated 3 new + 698 unchanged - 0 fixed = 701 
total (was 698) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 35s 
{color} | {color:red} root: patch generated 218 new + 5517 unchanged - 176 
fixed = 5735 total (was 5693) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
26s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 38s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 37s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 4s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 15s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 2s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s {color} 
| {color:red} hadoop-mapreduce-client-common in the patch failed. {color} |
| 

[jira] [Commented] (YARN-5050) Code cleanup for TestDistributedShell

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292379#comment-15292379
 ] 

Sangjin Lee commented on YARN-5050:
---

Thanks for the explanation. I'll wait for the Jenkins run.

> Code cleanup for TestDistributedShell
> -
>
> Key: YARN-5050
> URL: https://issues.apache.org/jira/browse/YARN-5050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5050-YARN-2928.001.patch, 
> YARN-5050-YARN-2928.002.patch
>
>
> We introduced some small errors after yesterday's rebase. Also, some timeout 
> settings for timeline v2 tests are deprecated since we introduced global time 
> out settings in YARN-4545. 






[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292378#comment-15292378
 ] 

Junping Du commented on YARN-5112:
--

Thanks Jian for the patch. The latest patch LGTM. However, can we add a unit test 
in TestLogAggregationService to verify that when the root dir permission issue 
happens, only one warn message is logged?
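
A rough sketch of the kind of check being asked for, using a plain log4j appender (the setup and helper calls here are assumptions, not the existing TestLogAggregationService code):
{code}
// imports assumed: org.apache.log4j.*, java.io.StringWriter,
// org.apache.commons.lang.StringUtils, static org.junit.Assert.assertEquals
StringWriter logOutput = new StringWriter();
WriterAppender appender = new WriterAppender(new SimpleLayout(), logOutput);
Logger.getLogger(LogAggregationService.class).addAppender(appender);
try {
  // initialize several applications while the remote root log dir has the
  // wrong permissions (setup omitted)
} finally {
  Logger.getLogger(LogAggregationService.class).removeAppender(appender);
}
int warnings =
    StringUtils.countMatches(logOutput.toString(), "incorrect permissions");
assertEquals("expected a single permission warning", 1, warnings);
{code}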

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}






[jira] [Updated] (YARN-4866) FairScheduler: AMs can consume all vcores leading to a livelock when using FAIR policy

2016-05-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4866:
---
Attachment: YARN-4866.006.patch

Patch 006 fixes the style issues and the bug, and adds test code for the case 
where AM vcores exceed the maxResources of the queue.
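
A minimal sketch of the underlying idea (not the actual patch; the variable names and the Resources helper usage are assumptions): when deciding whether an AM fits under maxAMShare, both memory and vcores should be checked, even under the FAIR policy.
{code}
// compute the AM resource cap from the queue's fair share and maxAMShare
Resource maxAMResource = Resources.multiply(queueFairShare, queueMaxAMShare);
boolean fitsInMaxShare =
    amResourceUsage.getMemory() + amRequest.getMemory()
        <= maxAMResource.getMemory()
    && amResourceUsage.getVirtualCores() + amRequest.getVirtualCores()
        <= maxAMResource.getVirtualCores();
if (!fitsInMaxShare) {
  // delay starting this AM rather than letting AMs consume every vcore
}
{code}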

> FairScheduler: AMs can consume all vcores leading to a livelock when using 
> FAIR policy
> --
>
> Key: YARN-4866
> URL: https://issues.apache.org/jira/browse/YARN-4866
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
> Attachments: YARN-4866.001.patch, YARN-4866.002.patch, 
> YARN-4866.003.patch, YARN-4866.004.patch, YARN-4866.005.patch, 
> YARN-4866.006.patch
>
>
> The maxAMShare uses the queue's policy for enforcing limits. When using FAIR 
> policy, this considers only memory. If there are fewer vcores on the cluster, 
> the AMs can end up taking all the vcores leading to a livelock. 






[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292349#comment-15292349
 ] 

Karthik Kambatla commented on YARN-4464:


bq. The patch is changing the config for max number of completed apps RM keeps 
in memory, NOT in store. Am I missing something ?
Thanks for catching that, [~jianhe]. My bad, completely missed that. 

bq. why do you think it's better to not let users see the historical apps ? I 
feel it is sometimes convenient to see the finished apps after restart.
I agree that it is convenient to see finished apps. The current default of 
10,000 is too large, and we have seen issues with recovery due to the heavy load 
on ZK. If we pick a smaller value that is not zero, it is true that users will 
continue to see some finished apps. After an upgrade, it is also quite possible 
that users won't realize the number of jobs stored on restart has been lowered; 
if they rely on the RM storing these, they might be in for a surprise later on. 
Picking zero exposes this change in behavior in any test cluster as well, and 
users can then pick a number appropriate for them. Long story short, I agree that 
a number like 1000 might have been a good default to begin with. Now that we are 
lowering it, zero makes the change more visible and avoids delayed surprises. 

I am not wedded to this and am open to being persuaded toward a different default 
value. 
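
Whatever default is chosen, operators can pin the value explicitly in their configuration so the behavior does not depend on the default at all; for example (the value 0 here is only an illustration):
{code}
# keep no completed applications in the RM state store, avoiding the heavy
# recovery load described above; pick a value that fits your own needs
yarn.resourcemanager.state-store.max-completed-applications=0
{code}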

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch
>
>
> my cluster has 120 nodes.
> I configured RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, 
> so that property took the default value of 10,000.
> I restarted the RM after changing another configuration.
> I expected the RM to restart immediately, but the recovery process was very 
> slow; I waited about 20 minutes before realizing I was missing 
> {{yarn.resourcemanager.state-store.max-completed-applications}}.
> Its default value is huge.
> We need to change it to a lower value or add a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Updated] (YARN-5050) Code cleanup for TestDistributedShell

2016-05-19 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5050:

Attachment: YARN-5050-YARN-2928.002.patch

Uploading the 002 patch to fix the intermittent UT failure. I changed the async 
dispatcher draining wait time to a smaller value so that we can perform teardown 
faster. 

[~sjlee0], about your concerns: I removed the two lines about 
SYSTEM_METRICS_PUBLISHER_ENABLED because they have already been enabled in trunk. 
Also, in YARN-4545 we introduced a global timeout setting for all UTs in 
TestDistributedShell, so we no longer need to set the timeout for each one of 
them. 
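
For reference, the global-timeout approach looks roughly like this (a sketch; the rule field name, the 90-second value, and the test name are illustrative, not the actual TestDistributedShell code):
{code}
// imports assumed: org.junit.Rule, org.junit.Test, org.junit.rules.Timeout
@Rule
public Timeout globalTimeout = new Timeout(90000); // 90 seconds, in milliseconds

@Test  // no per-test timeout attribute needed any more
public void testSomethingInDistributedShell() throws Exception {
  // ...
}
{code}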

> Code cleanup for TestDistributedShell
> -
>
> Key: YARN-5050
> URL: https://issues.apache.org/jira/browse/YARN-5050
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5050-YARN-2928.001.patch, 
> YARN-5050-YARN-2928.002.patch
>
>
> We introduced some small errors after yesterday's rebase. Also, some timeout 
> settings for timeline v2 tests are deprecated since we introduced global time 
> out settings in YARN-4545. 






[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292288#comment-15292288
 ] 

Hadoop QA commented on YARN-5112:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 6 new + 36 unchanged - 5 fixed = 42 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 20s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805066/YARN-5112.3.patch |
| JIRA Issue | YARN-5112 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e7eb7553c1b5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 22fcd81 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11572/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11572/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11572/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Excessive log warnings on NM recovery
> -
>
> 

[jira] [Commented] (YARN-5096) timelinereader has a lot of logging that's not useful

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292278#comment-15292278
 ] 

Sangjin Lee commented on YARN-5096:
---

Committed it:

{noformat}
commit 8d0b4267e4516e7c5d39d23121f17f508234a1b2
Author: Sangjin Lee 
Date:   Thu May 19 15:40:15 2016 -0700

YARN-5096 addendum. Turned another logging statement to debug. Contributed 
by Sangjin Lee.
{noformat}

> timelinereader has a lot of logging that's not useful
> -
>
> Key: YARN-5096
> URL: https://issues.apache.org/jira/browse/YARN-5096
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: YARN-5096-YARN-2928.01.patch
>
>
> After running about a dozen or so requests, the timelinereader log is filled 
> with the following logging entries:
> {noformat}
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> {noformat}
> There were some ~ 3,000 such logging entries. It's too excessive.






[jira] [Commented] (YARN-4766) NM should not aggregate logs older than the retention policy

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292277#comment-15292277
 ] 

Hadoop QA commented on YARN-4766:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 26 new + 
321 unchanged - 4 fixed = 347 total (was 325) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 40s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 5 new + 592 unchanged - 0 fixed = 597 total (was 592) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 16s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805063/yarn4766.005.patch |
| JIRA Issue | YARN-4766 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 0e205f9dd54f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 22fcd81 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5096) timelinereader has a lot of logging that's not useful

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292275#comment-15292275
 ] 

Sangjin Lee commented on YARN-5096:
---

Ah I missed that one. Thanks for letting me know. I'll make that DEBUG as well. 
Since it is a trivial change, I'll add an addendum commit.
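
The change amounts to demoting the INFO line quoted in the description to DEBUG, along the lines of (a sketch, not the exact diff):
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("null prefix was specified; returning all columns");
}
{code}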

> timelinereader has a lot of logging that's not useful
> -
>
> Key: YARN-5096
> URL: https://issues.apache.org/jira/browse/YARN-5096
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: YARN-5096-YARN-2928.01.patch
>
>
> After running about a dozen or so requests, the timelinereader log is filled 
> with the following logging entries:
> {noformat}
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> {noformat}
> There were some ~ 3,000 such logging entries. It's too excessive.






[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292270#comment-15292270
 ] 

Sangjin Lee commented on YARN-5109:
---

Another related issue I see is that currently the event id and the info key are 
not encoded when they are joined. I think this is an opportunity to address that 
as well. Although it is rather unlikely that the event id or the info key would 
contain "=", better safe than sorry...

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.
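
To make the failure mode concrete, here is a small stand-alone illustration (an editorial sketch, not project code) of how one of the eight raw bytes of an inverted timestamp can happen to equal the '=' separator byte:
{code}
// import assumed: org.apache.hadoop.hbase.util.Bytes
long eventTs = 1463437718643L;                            // any millisecond timestamp
byte[] stored = Bytes.toBytes(Long.MAX_VALUE - eventTs);  // inverted long, as written
for (byte b : stored) {
  if (b == (byte) '=') {
    System.out.println("raw byte equals '='; an unescaped split would cut here");
  }
}
{code}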






[jira] [Commented] (YARN-4878) Expose scheduling policy and max running apps over JMX for Yarn queues

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292268#comment-15292268
 ] 

Hadoop QA commented on YARN-4878:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 52 unchanged - 0 fixed = 53 total (was 52) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 25s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805051/YARN-4878.004.patch |
| JIRA Issue | YARN-4878 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 05d5f14751de 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 22fcd81 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11570/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11570/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11570/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5007) MiniYarnCluster contains deprecated constructor which is called by the other constructors

2016-05-19 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292245#comment-15292245
 ] 

John Zhuge commented on YARN-5007:
--

I like your idea of including MiniMRYarnCluster and MiniYarnCluster in this JIRA 
and MiniDFSCluster in another JIRA in the HDFS project.

Another con of lumping everything into one patch, especially across components, 
is that it is much tougher to pass unit tests. Remember how we struggled to 
overcome this obstacle in YARN-4994? :(

> MiniYarnCluster contains deprecated constructor which is called by the other 
> constructors
> -
>
> Key: YARN-5007
> URL: https://issues.apache.org/jira/browse/YARN-5007
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: timelineserver
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0
>
>
> MiniYarnCluster has a deprecated constructor which is called by the other 
> constructors and it causes javac warnings during the build.






[jira] [Updated] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5112:
--
Attachment: YARN-5112.3.patch

Thanks for the review; sounds good to me.
Uploaded a new patch to address the comments.

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}






[jira] [Commented] (YARN-5093) created time shows 0 in most REST output

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292233#comment-15292233
 ] 

Hadoop QA commented on YARN-5093:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
34s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} YARN-2928 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s 
{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 52s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 1s 
{color} | {color:green} 

[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292229#comment-15292229
 ] 

Sangjin Lee commented on YARN-5109:
---

Joep, Vrushali, and I discussed this some more offline, and we realized that 
there is a reasonable pattern we can extract from the individual 
{{*RowKey.getRowKey()}} and {{*RowKey.parseRowKey()}} methods and extend to cover 
the compound column names as well.

We could have an interface that encapsulates the concern of defining the key 
structure and how to encode/decode it. We would still need the new {{split()}} 
method to dictate the low-level parsing.

The following is a proof-of-concept pseudo code I put together in a hurry:

{code}
public interface CompoundKeyConverter<T> {
  byte[] encodeKey(T t);
  T decodeKey(byte[] b);
  int[] sizesOfTokens(); // it's debatable whether this should be part of the interface
}

public class ApplicationRowKeyConverter
    implements CompoundKeyConverter<ApplicationRowKey> {
  public static final int[] SIZES =
  {0, 0, 0, Long.BYTES, Long.BYTES + Integer.BYTES};
  public int[] sizesOfTokens() {
return SIZES;
  }

  // almost identical to ApplicationRowKey.getRowKey()
  public byte[] encodeKey(ApplicationRowKey key) {
byte[] first =
Bytes.toBytes(Separator.QUALIFIERS.joinEncoded(key.getClusterId(),
key.getUserId(), key.getFlowName()));
// Note that flowRunId is a long, so we can't encode them all at the same
// time.
byte[] second =
Bytes.toBytes(TimelineStorageUtils.invertLong(key.getFlowRunId()));
byte[] third = TimelineStorageUtils.encodeAppId(key.getAppId());
return Separator.QUALIFIERS.join(first, second, third);
  }

  // almost identical to ApplicationRowKey.parseRowKey()
  public ApplicationRowKey decodeKey(byte[] data) {
// use the new version of the split method that takes the sizes
byte[][] rowKeyComponents =
Separator.QUALIFIERS.split(data, sizesOfTokens());

if (rowKeyComponents.length < 5) {
  throw new IllegalArgumentException("the row key is not valid for " +
  "an application");
}

String clusterId =
Separator.QUALIFIERS.decode(Bytes.toString(rowKeyComponents[0]));
String userId =
Separator.QUALIFIERS.decode(Bytes.toString(rowKeyComponents[1]));
String flowName =
Separator.QUALIFIERS.decode(Bytes.toString(rowKeyComponents[2]));
long flowRunId =
TimelineStorageUtils.invertLong(Bytes.toLong(rowKeyComponents[3]));
String appId = TimelineStorageUtils.decodeAppId(rowKeyComponents[4]);
return new ApplicationRowKey(clusterId, userId, flowName, flowRunId, appId);
  }
}

public class EventColumnName {
  private final String id;
  private final long timestamp;
  private final String infoKey;

  public EventColumnName(String id, long timestamp, String infoKey) {
this.id = id;
this.timestamp = timestamp;
this.infoKey = infoKey;
  }

  public String getId() {
return id;
  }

  public long getTimestamp() {
return timestamp;
  }

  public String getInfoKey() {
return infoKey;
  }
}

public class EventColumnNameConverter
    implements CompoundKeyConverter<EventColumnName> {
  private static final int[] SIZES = {0, Long.BYTES, 0};
  public int[] sizesOfTokens() {
return SIZES;
  }

  // from HBaseTimelineWriterImpl.storeEvents()
  public byte[] encodeKey(EventColumnName q) {
return ColumnHelper.getCompoundColumnQualifierBytes(q.getId(),
Bytes.toBytes(TimelineStorageUtils.invertLong(q.getTimestamp())),
Bytes.toBytes(q.getInfoKey()));
  }

  public EventColumnName decodeKey(byte[] data) {
// use the new split() method that takes the sizes
byte[][] components = Separator.VALUES.split(data, sizesOfTokens());
if (components.length != 3) {
  throw new IllegalArgumentException("the column name is not valid");
}

String id = Bytes.toString(components[0]);
long ts = TimelineStorageUtils.invertLong(Bytes.toLong(components[1]));
String infoKey = components[2].length == 0 ? null : 
Bytes.toString(components[2]);
return new EventColumnName(id, ts, infoKey);
  }
}

HBaseTimelineWriterImpl.java:
private void storeEvents(...) {
  ...
  CompoundKeyConverter<EventColumnName> converter =
      new EventColumnNameConverter();
  EventColumnName e =
  new EventColumnName(eventId, eventTimestamp, info.getKey());
  byte[] compoundColumnQualifierBytes = converter.encodeKey(e);
  ...
}

TimelineStorageUtils.java:
public static void readEvents(...) {
  ...
  CompoundKeyConverter<EventColumnName> converter =
      new EventColumnNameConverter();
  // ColumnPrefix API should change as well
  Map<EventColumnName, Object> eventsResult =
      prefix.readResultsHavingCompoundColumnQualifiers(result, converter);
  for (Map.Entry<EventColumnName, Object> eventResult : eventsResult.entrySet()) {
...
  }
}

ColumnHelper.java:
public  Map readResultsHavingCompoundColumnQualifiers(Result 
result,

[jira] [Commented] (YARN-5007) MiniYarnCluster contains deprecated constructor which is called by the other constructors

2016-05-19 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292224#comment-15292224
 ] 

Andras Bokor commented on YARN-5007:


[~jzhuge]
This issue came up on YARN-4494.
Both {{MiniDFSCluster}} and {{MiniMRYarnCluster}} have the same problem. Should 
they be fixed in one (this) JIRA, or should I separate them into two or three 
JIRAs? Here are the pros of keeping everything together:
* Basically it is one problem, just in individual classes
* Less work for committers
* Less administration
* {{MiniMRYarnCluster}} calls {{MiniYarnCluster}}, so {{MiniMRYarnCluster}} 
needs to be changed anyway

And the cons:
* The patch size is bigger
* {{MiniDFSCluster}} belongs to HDFS, and this is a YARN ticket
* {{MiniMRYarnCluster}} belongs to MAPREDUCE, and this is a YARN ticket

How about including {{MiniMRYarnCluster}} and {{MiniYarnCluster}} in this JIRA 
and {{MiniDFSCluster}} in another JIRA in the HDFS project?

What do you think?
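
For context, the usual way to silence this kind of javac warning is to stop chaining the new constructors through the deprecated one; a generic sketch (not the actual MiniYarnCluster code):
{code}
public class SomeCluster {
  @Deprecated
  public SomeCluster(String name) {
    this(name, 1);          // old signature kept, delegates to the new one
  }

  public SomeCluster(String name, int numNodes) {
    init(name, numNodes);   // no call into the deprecated constructor
  }

  private void init(String name, int numNodes) {
    // shared initialization logic lives here
  }
}
{code}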

> MiniYarnCluster contains deprecated constructor which is called by the other 
> constructors
> -
>
> Key: YARN-5007
> URL: https://issues.apache.org/jira/browse/YARN-5007
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: timelineserver
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0
>
>
> MiniYarnCluster has a deprecated constructor which is called by the other 
> constructors and it causes javac warnings during the build.






[jira] [Updated] (YARN-4766) NM should not aggregate logs older than the retention policy

2016-05-19 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-4766:
-
Attachment: yarn4766.005.patch

> NM should not aggregate logs older than the retention policy
> 
>
> Key: YARN-4766
> URL: https://issues.apache.org/jira/browse/YARN-4766
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn4766.001.patch, yarn4766.002.patch, 
> yarn4766.003.patch, yarn4766.004.patch, yarn4766.004.patch, yarn4766.005.patch
>
>
> When a log aggregation fails on the NM, the information for the attempt is 
> kept in the recovery DB. Log aggregation can fail for multiple reasons, which 
> are often related to HDFS space or permissions.
> On restart the recovery DB is read, and if an application attempt needs its 
> logs aggregated, the files are scheduled for aggregation without any checks. 
> The log files could be older than the retention limit, in which case we should 
> not aggregate them but immediately mark them for deletion from the local file 
> system.
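
A hedged sketch of the retention check described above (the helper method names and the collection of candidate files are assumptions, not the actual patch):
{code}
long retainSecs = conf.getLong(
    YarnConfiguration.LOG_AGGREGATION_RETAIN_SECONDS,
    YarnConfiguration.DEFAULT_LOG_AGGREGATION_RETAIN_SECONDS);
if (retainSecs > 0) {
  long cutoff = System.currentTimeMillis() - retainSecs * 1000L;
  for (File logFile : candidateLogFiles) {       // assumed collection of local logs
    if (logFile.lastModified() < cutoff) {
      scheduleForLocalDeletion(logFile);         // assumed helper
    } else {
      scheduleForAggregation(logFile);           // assumed helper
    }
  }
}
{code}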






[jira] [Commented] (YARN-4766) NM should not aggregate logs older than the retention policy

2016-05-19 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292217#comment-15292217
 ] 

Haibo Chen commented on YARN-4766:
--

Thanks Ray for your comment; the name you suggested is much better. I made the 
change and uploaded a new patch to verify that it still works, since it has been 
sitting for a while.

> NM should not aggregate logs older than the retention policy
> 
>
> Key: YARN-4766
> URL: https://issues.apache.org/jira/browse/YARN-4766
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn4766.001.patch, yarn4766.002.patch, 
> yarn4766.003.patch, yarn4766.004.patch, yarn4766.004.patch
>
>
> When a log aggregation fails on the NM, the information for the attempt is 
> kept in the recovery DB. Log aggregation can fail for multiple reasons, which 
> are often related to HDFS space or permissions.
> On restart the recovery DB is read, and if an application attempt needs its 
> logs aggregated, the files are scheduled for aggregation without any checks. 
> The log files could be older than the retention limit, in which case we should 
> not aggregate them but immediately mark them for deletion from the local file 
> system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292213#comment-15292213
 ] 

Junping Du commented on YARN-5112:
--

Thanks [~jianhe] for updating the patch. Removing verifyAndCreateRemoteLogDir() 
from initApp() could have two issues:
- A YarnRuntimeException could be thrown without proper handling, which would 
crash the LogAggregationService/ContainerManager/NM.
- If the root directory is deleted unintentionally by an admin or a tool, newly 
launched apps after that mistaken operation have no chance to fix it.
I think the safer way is to keep tracking the permission issue and only log the 
first time we hit it, or the first time after permissions have been back to 
normal for a while. What do you think?
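
A minimal sketch of that "log only on the first hit, and again only after things were 
healthy" idea, assuming hypothetical names (RemoteDirPermissionCheck and onPermissionCheck 
are illustrative, not the actual LogAggregationService code):

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch; field and method names are illustrative.
public class RemoteDirPermissionCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(RemoteDirPermissionCheck.class);

  // True once we have warned about the current bad-permission episode.
  private final AtomicBoolean warned = new AtomicBoolean(false);

  void onPermissionCheck(boolean permissionsOk) {
    if (permissionsOk) {
      // Permissions are back to normal: arm the warning again.
      warned.set(false);
    } else if (warned.compareAndSet(false, true)) {
      // Only the first detection of an episode produces a WARN.
      LOG.warn("Remote root log dir has incorrect permissions; further warnings"
          + " are suppressed until the permissions recover.");
    }
  }
}
{code}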

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5076) YARN web interfaces lack XFS protection

2016-05-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292192#comment-15292192
 ] 

Hudson commented on YARN-5076:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9826 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9826/])
YARN-5076. YARN web interfaces lack XFS protection. Contributed by (junping_du: 
rev 22fcd819f0c445be661e644ed67221f867013af8)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryClientService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/webapp/TestRMWithXFSFilter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JHAdminConfig.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml


> YARN web interfaces lack XFS protection
> ---
>
> Key: YARN-5076
> URL: https://issues.apache.org/jira/browse/YARN-5076
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager, timelineserver
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Fix For: 2.9.0
>
> Attachments: YARN-5076.002.patch, YARN-5076.003.patch, 
> YARN-5076.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There are web interfaces in YARN that do not provide protection against cross 
> frame scripting 
> (https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet).  
> HADOOP-13008 provides a common filter for addressing this vulnerability, so 
> this filter should be integrated into the YARN web interfaces.
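
For context, the mechanism behind such a filter can be sketched with the plain Servlet 
API; this is an illustrative example, not the actual HADOOP-13008 filter:

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: a minimal cross-frame-scripting guard in the plain Servlet API.
public class SimpleXFrameOptionsFilter implements Filter {
  private String policy = "SAMEORIGIN";   // could also be DENY or ALLOW-FROM <uri>

  @Override
  public void init(FilterConfig conf) {
    String configured = conf.getInitParameter("xframe-options");
    if (configured != null) {
      policy = configured;
    }
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    // The header tells browsers not to render this page inside a foreign frame.
    ((HttpServletResponse) res).setHeader("X-Frame-Options", policy);
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}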



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292175#comment-15292175
 ] 

Jian He commented on YARN-4464:
---

The patch is changing the config for the max number of completed apps the RM 
keeps in memory, NOT in the store. Am I missing something?

Also, is the intention to not keep any completed apps in the store? IIRC, 
recovering 10k apps from ZK/HDFS takes less than about 30 seconds. 
[~magnum], are you sure the 20 min delay is caused by recovering these 10k 
completed apps?
bq. I think setting it to 0 might be better so users immediately realize they 
don't see any completed apps after a restart.
[~kasha], why do you think it's better to not let users see the historical 
apps? I feel it is sometimes convenient to see the finished apps after a restart. 

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch
>
>
> my cluster has 120 nodes.
> I configured RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}},
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to change it to a lower value or document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4878) Expose scheduling policy and max running apps over JMX for Yarn queues

2016-05-19 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4878:
---
Attachment: YARN-4878.004.patch

Patch 004 fixed the style issues except the first one; that first style issue 
should not be fixed. 

> Expose scheduling policy and max running apps over JMX for Yarn queues
> --
>
> Key: YARN-4878
> URL: https://issues.apache.org/jira/browse/YARN-4878
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4878.001.patch, YARN-4878.002.patch, 
> YARN-4878.003.patch, YARN-4878.004.patch
>
>
> There are two things that are not currently visible over JMX: the current 
> scheduling policy for a queue, and the number of max running apps. It would 
> be great if these could be exposed over JMX as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5114) TestRMWebServicesApps#testInvalidAppAttempts failure

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292140#comment-15292140
 ] 

Hadoop QA commented on YARN-5114:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
37s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 49s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 19s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 160m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_101 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c60792e |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804998/YARN-5114-branch-2.8.001.patch
 |
| JIRA Issue | YARN-5114 |
| Optional Tests |  asflicense  compile  javac  

[jira] [Commented] (YARN-5076) YARN web interfaces lack XFS protection

2016-05-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292130#comment-15292130
 ] 

Junping Du commented on YARN-5076:
--

The other check-style issue is just a minor whitespace issue. Will fix it in 
commit.
+1 on 004 patch. Will commit it shortly.

> YARN web interfaces lack XFS protection
> ---
>
> Key: YARN-5076
> URL: https://issues.apache.org/jira/browse/YARN-5076
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager, timelineserver
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Attachments: YARN-5076.002.patch, YARN-5076.003.patch, 
> YARN-5076.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There are web interfaces in YARN that do not provide protection against cross 
> frame scripting 
> (https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet).  
> HADOOP-13008 provides a common filter for addressing this vulnerability, so 
> this filter should be integrated into the YARN web interfaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5076) YARN web interfaces lack XFS protection

2016-05-19 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292126#comment-15292126
 ] 

Junping Du commented on YARN-5076:
--

That's right. I just checked again, and it sounds like we have not been adding 
package-info for all test classes' packages. OK, let's keep it as it is for now.

> YARN web interfaces lack XFS protection
> ---
>
> Key: YARN-5076
> URL: https://issues.apache.org/jira/browse/YARN-5076
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, resourcemanager, timelineserver
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Attachments: YARN-5076.002.patch, YARN-5076.003.patch, 
> YARN-5076.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There are web interfaces in YARN that do not provide protection against cross 
> frame scripting 
> (https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet).  
> HADOOP-13008 provides a common filter for addressing this vulnerability, so 
> this filter should be integrated into the YARN web interfaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5093) created time shows 0 in most REST output

2016-05-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5093:
---
Attachment: YARN-5093-YARN-2928.01.patch

> created time shows 0 in most REST output
> 
>
> Key: YARN-5093
> URL: https://issues.apache.org/jira/browse/YARN-5093
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5093-YARN-2928.01.patch
>
>
> When querying the REST API, I find that the created time value is returned as 
> "0" for most of the output. It includes:
> - flow activity and flow runs in the flow activity page
> - apps in the application page
> - entities in the entity page
> For example, in the flow activity page,
> {noformat}
> {
> metrics: [ ],
> events: [ ],
> id: "yarn_cluster/146335680/sjlee@ds-date",
> type: "YARN_FLOW_ACTIVITY",
> createdtime: 0,
> flowruns: [
> {
> metrics: [ ],
> events: [ ],
> id: "sjlee@ds-date/1463435661428",
> type: "YARN_FLOW_RUN",
> createdtime: 0,
> info: {
> SYSTEM_INFO_FLOW_VERSION: "1",
> SYSTEM_INFO_FLOW_RUN_ID: 1463435661428,
> SYSTEM_INFO_FLOW_NAME: "ds-date",
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> }
> ],
> info: {
> SYSTEM_INFO_CLUSTER: "yarn_cluster",
> UID: "yarn_cluster!sjlee!ds-date",
> SYSTEM_INFO_FLOW_NAME: "ds-date",
> SYSTEM_INFO_DATE: 146335680,
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> }
> {noformat}
> The only page that appears to show the proper created time value is the flow 
> run page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292113#comment-15292113
 ] 

Karthik Kambatla commented on YARN-4464:


+1. Will commit this to trunk tomorrow if no one objects. /cc [~jianhe]

Don't think we should do this in branch-2 though. 

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch
>
>
> my cluster has 120 nodes.
> I configured RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}},
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to change it to a lower value or document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4006) YARN ATS Alternate Kerberos HTTP Authentication Changes

2016-05-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292111#comment-15292111
 ] 

Allen Wittenauer commented on YARN-4006:


Talking with my client: they've tried their AltKerb implementation (which 
properly inherits from AltKerb, unlike the sample), and with some changes that 
call up into AltKerb via super for management operations (which weren't required 
before), this patch worked.

I'm inclined to think this is the lesser of multiple evils and unblocks 
timeline server deployments at secure sites.  I'd like a second opinion though. 
 [~lmccay], have you looked at AltKerberos at all?  Does this approach seem 
sane?

> YARN ATS Alternate Kerberos HTTP Authentication Changes
> ---
>
> Key: YARN-4006
> URL: https://issues.apache.org/jira/browse/YARN-4006
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, timelineserver
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.5.1, 2.6.1, 2.8.0, 2.7.1, 2.7.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Blocker
> Attachments: YARN-4006-branch-trunk.patch, 
> YARN-4006-branch2.6.0.patch, sample-ats-alt-auth.patch
>
>
> When attempting to use The Hadoop Alternate Authentication Classes. They do 
> not exactly work with what was built with YARN-1935.
> I went ahead and made the following changes to support using a Custom 
> AltKerberos DelegationToken custom class.
> Changes to: TimelineAuthenticationFilterInitializer.class
> {code}
>String authType = filterConfig.get(AuthenticationFilter.AUTH_TYPE);
> LOG.info("AuthType Configured: "+authType);
> if (authType.equals(PseudoAuthenticationHandler.TYPE)) {
>   filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>   PseudoDelegationTokenAuthenticationHandler.class.getName());
> LOG.info("AuthType: PseudoDelegationTokenAuthenticationHandler");
> } else if (authType.equals(KerberosAuthenticationHandler.TYPE) || 
> (UserGroupInformation.isSecurityEnabled() && 
> conf.get("hadoop.security.authentication").equals(KerberosAuthenticationHandler.TYPE)))
>  {
>   if (!(authType.equals(KerberosAuthenticationHandler.TYPE))) {
> filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>   authType);
> LOG.info("AuthType: "+authType);
>   } else {
> filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>   KerberosDelegationTokenAuthenticationHandler.class.getName());
> LOG.info("AuthType: KerberosDelegationTokenAuthenticationHandler");
>   } 
>   // Resolve _HOST into bind address
>   String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
>   String principal =
>   filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
>   if (principal != null) {
> try {
>   principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
> } catch (IOException ex) {
>   throw new RuntimeException(
>   "Could not resolve Kerberos principal name: " + ex.toString(), 
> ex);
> }
> filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL,
> principal);
>   }
> }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-05-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4844:
-
Attachment: YARN-4844.11.patch

> Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource
> 
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-4844.1.patch, YARN-4844.10.patch, 
> YARN-4844.11.patch, YARN-4844.2.patch, YARN-4844.3.patch, YARN-4844.4.patch, 
> YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch, 
> YARN-4844.8.branch-2.patch, YARN-4844.8.patch, YARN-4844.9.branch, 
> YARN-4844.9.branch-2.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes and each node has 210G 
> of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending 
> resources of running apps to the cluster's total pending resources. If a 
> problematic app requests too many resources (say 1M+ containers, each of them 
> needing 3G), int32 will not be enough.
> Even if we cap each app's pending request, we cannot handle the case where 
> there are many running apps, each of them with capped but still significant 
> amounts of pending resources.
> So we may need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.
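
The int32 overflow is easy to see with a quick worked example (illustrative arithmetic, 
not code from the patch):

{code}
// Worked example of the int32 overflow described above (not code from the patch).
public class ResourceOverflowDemo {
  public static void main(String[] args) {
    int nodeMemoryMB = 210 * 1024;      // 210 GB per node, in MB
    int numNodes = 10_000;

    int totalAsInt = nodeMemoryMB * numNodes;         // overflows Integer.MAX_VALUE
    long totalAsLong = (long) nodeMemoryMB * numNodes;

    System.out.println("int32 total: " + totalAsInt);   // negative / wrong value
    System.out.println("int64 total: " + totalAsLong);  // 2,150,400,000 MB
  }
}
{code}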



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-05-19 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-1942:
-
Attachment: YARN-1942.8.patch

> Many of ConverterUtils methods need to have public interfaces
> -
>
> Key: YARN-1942
> URL: https://issues.apache.org/jira/browse/YARN-1942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Thomas Graves
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-1942.1.patch, YARN-1942.2.patch, YARN-1942.3.patch, 
> YARN-1942.4.patch, YARN-1942.5.patch, YARN-1942.6.patch, YARN-1942.8.patch
>
>
> ConverterUtils has a bunch of functions that are useful to application 
> masters. It should either be made public, or we should make some of its 
> utilities public, or we should provide other external APIs for application 
> masters to use. Note that distributedshell and MR are both using these 
> interfaces. 
> For instance, the main use case I see right now is getting the application 
> attempt id within the appmaster:
> String containerIdStr =
>     System.getenv(Environment.CONTAINER_ID.name());
> ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
> ApplicationAttemptId applicationAttemptId =
>     containerId.getApplicationAttemptId();
> I don't see any other way for the application master to get this information. 
> If there is, please let me know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5093) created time shows 0 in most REST output

2016-05-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5093:
---
Attachment: YARN-5093-YARN-2928.01.patch

> created time shows 0 in most REST output
> 
>
> Key: YARN-5093
> URL: https://issues.apache.org/jira/browse/YARN-5093
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5093-YARN-2928.01.patch
>
>
> When querying the REST API, I find that the created time value is returned as 
> "0" for most of the output. It includes:
> - flow activity and flow runs in the flow activity page
> - apps in the application page
> - entities in the entity page
> For example, in the flow activity page,
> {noformat}
> {
> metrics: [ ],
> events: [ ],
> id: "yarn_cluster/146335680/sjlee@ds-date",
> type: "YARN_FLOW_ACTIVITY",
> createdtime: 0,
> flowruns: [
> {
> metrics: [ ],
> events: [ ],
> id: "sjlee@ds-date/1463435661428",
> type: "YARN_FLOW_RUN",
> createdtime: 0,
> info: {
> SYSTEM_INFO_FLOW_VERSION: "1",
> SYSTEM_INFO_FLOW_RUN_ID: 1463435661428,
> SYSTEM_INFO_FLOW_NAME: "ds-date",
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> }
> ],
> info: {
> SYSTEM_INFO_CLUSTER: "yarn_cluster",
> UID: "yarn_cluster!sjlee!ds-date",
> SYSTEM_INFO_FLOW_NAME: "ds-date",
> SYSTEM_INFO_DATE: 146335680,
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> }
> {noformat}
> The only page that appears to show the proper created time value is the flow 
> run page.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4002) make ResourceTrackerService.nodeHeartbeat more concurrent

2016-05-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292058#comment-15292058
 ] 

Hudson commented on YARN-4002:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9825 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9825/])
YARN-4002. Make ResourceTrackerService#nodeHeartbeat more concurrent. (jianhe: 
rev feb90ffcca536e7deac50976b8a8774450fe089f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HostsFileReader.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/NodesListManager.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHostsFileReader.java


> make ResourceTrackerService.nodeHeartbeat more concurrent
> -
>
> Key: YARN-4002
> URL: https://issues.apache.org/jira/browse/YARN-4002
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: 0001-YARN-4002.patch, YARN-4002-lockless-read.patch, 
> YARN-4002-rwlock-v2.patch, YARN-4002-rwlock-v2.patch, 
> YARN-4002-rwlock-v3-rebase.patch, YARN-4002-rwlock-v3.patch, 
> YARN-4002-rwlock-v4.patch, YARN-4002-rwlock-v5.patch, 
> YARN-4002-rwlock-v6.patch, YARN-4002-rwlock.patch, YARN-4002-v0.patch
>
>
> We have multiple RPC threads to handle NodeHeartbeatRequest from NMs. By 
> design the method ResourceTrackerService.nodeHeartbeat should be concurrent 
> enough to scale for large clusters.
> But we have a "BIG" lock in NodesListManager.isValidNode which I think is 
> unnecessary.
> First, the fields "includes" and "excludes" of HostsFileReader are only 
> updated on "refresh nodes".  All RPC threads handling node heartbeats are 
> only readers, so an RWLock could be used to allow concurrent access by RPC 
> threads.
> Second, since the fields "includes" and "excludes" of HostsFileReader are 
> always updated by "reference assignment", which is atomic in Java, the 
> reader-side lock could just be skipped.
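
Both options in the description can be sketched in a few lines; this is an illustrative 
example under those assumptions, not the actual HostsFileReader/NodesListManager code:

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of the two options above; not the actual YARN classes.
public class HostLists {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private volatile Set<String> includes = Collections.emptySet();
  private volatile Set<String> excludes = Collections.emptySet();

  // Option 1: read/write lock. Heartbeat handlers take the read lock and run
  // concurrently; only "refresh nodes" takes the write lock.
  boolean isValidNodeWithRwLock(String host) {
    lock.readLock().lock();
    try {
      return (includes.isEmpty() || includes.contains(host)) && !excludes.contains(host);
    } finally {
      lock.readLock().unlock();
    }
  }

  // Option 2: since refresh replaces whole sets by reference assignment (atomic
  // for a volatile field), readers can skip locking entirely.
  boolean isValidNodeLockless(String host) {
    Set<String> inc = includes;   // each volatile read sees a fully built set
    Set<String> exc = excludes;
    return (inc.isEmpty() || inc.contains(host)) && !exc.contains(host);
  }

  void refreshNodes(Set<String> newIncludes, Set<String> newExcludes) {
    lock.writeLock().lock();
    try {
      includes = Collections.unmodifiableSet(new HashSet<String>(newIncludes));
      excludes = Collections.unmodifiableSet(new HashSet<String>(newExcludes));
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}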



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5055) max per user can be larger than max per queue

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292053#comment-15292053
 ] 

Hadoop QA commented on YARN-5055:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 56s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805025/YARN-5055.002.patch |
| JIRA Issue | YARN-5055 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 194744ed6b02 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 141873c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11565/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11565/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11565/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Commented] (YARN-5096) timelinereader has a lot of logging that's not useful

2016-05-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292049#comment-15292049
 ] 

Varun Saxena commented on YARN-5096:


[~sjlee0], ColumnHelper has another null-prefix log at line 262.
We should probably make that DEBUG as well. It shows up in my testing, probably 
for configs, and doesn't seem useful.
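
The conventional remedy, sketched as an illustrative example (not the actual 
ColumnHelper code):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch: demote the per-read message from INFO to DEBUG and
// guard it, so it disappears from normal INFO-level reader logs.
public class ColumnReadLogging {
  private static final Logger LOG = LoggerFactory.getLogger(ColumnReadLogging.class);

  void onNullPrefix() {
    if (LOG.isDebugEnabled()) {
      LOG.debug("null prefix was specified; returning all columns");
    }
  }
}
{code}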

> timelinereader has a lot of logging that's not useful
> -
>
> Key: YARN-5096
> URL: https://issues.apache.org/jira/browse/YARN-5096
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: YARN-5096-YARN-2928.01.patch
>
>
> After running about a dozen or so requests, the timelinereader log is filled 
> with the following logging entries:
> {noformat}
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> 2016-05-16 15:59:13,364 INFO 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnHelper: 
> null prefix was specified; returning all columns
> {noformat}
> There were some ~ 3,000 such logging entries. It's too excessive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292021#comment-15292021
 ] 

Hadoop QA commented on YARN-4464:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805027/YARN-4464.003.patch |
| JIRA Issue | YARN-4464 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux d299ee3e3c97 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 141873c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11566/testReport/ |
| modules | C:  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common  U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 

[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292009#comment-15292009
 ] 

Ray Chiang commented on YARN-4464:
--

+1 (nonbinding), pending Jenkins.  Passed TestYarnConfigurationFields unit test 
+ visual inspection in my tree.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch
>
>
> my cluster has 120 nodes.
> I configured RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}},
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to change it to a lower value or document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4464:
---
Attachment: YARN-4464.003.patch

Oy vey.  I clearly need sleep.  Here's the correct correct patch.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch
>
>
> my cluster has 120 nodes.
> I configured RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}},
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to change it to a lower value or document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-19 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291983#comment-15291983
 ] 

Varun Saxena commented on YARN-5109:


Writing down what we discussed in the meeting with regard to this.

I have attached a WIP patch of what I had done so far.
Some things may have to be fine-tuned (null checks, etc.). Probably both limit 
and sizes need not be passed into splitRanges. But anyway, this works well for 
row keys as per my tests.

For column qualifiers, we could either encode the bytes (representing longs, 
ints, app ids) before writing them into the backend, or use the same approach 
we adopted for row keys, i.e. do not encode but split while ignoring separators 
inside longs, ints, etc. until they are fully read.
I had a patch for the former too, but the consensus seems to be towards the latter.

Sangjin and Joep thought that we should reorganize our row key and column 
qualifier parsing logic into a set of converters. It was decided that this 
approach will be explored further.
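
A rough sketch of the "do not encode, but skip separators inside fixed-width fields 
while splitting" approach, with hypothetical names (FixedWidthAwareSplitter and its 
layout convention are illustrative, not the WIP patch): when the parser knows the next 
field is, say, an 8-byte timestamp, it consumes those bytes blindly instead of scanning 
them for the separator.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: split a row key / column qualifier whose layout is known
// up front. Variable-length fields end at the next separator byte; fixed-width
// numeric fields (e.g. an 8-byte long timestamp) are consumed as-is, so a
// separator byte inside them cannot break the parse.
public class FixedWidthAwareSplitter {

  static final byte SEPARATOR = (byte) '=';
  static final int VARIABLE = -1;   // marker for "read until the next separator"

  static List<byte[]> split(byte[] key, int[] fieldSizes) {
    List<byte[]> fields = new ArrayList<byte[]>();
    int pos = 0;
    for (int size : fieldSizes) {
      int end;
      if (size == VARIABLE) {
        end = pos;
        while (end < key.length && key[end] != SEPARATOR) {
          end++;
        }
      } else {
        // Fixed width: ignore any separator bytes inside the field.
        end = Math.min(pos + size, key.length);
      }
      fields.add(Arrays.copyOfRange(key, pos, end));
      pos = end + 1;   // skip the separator that follows the field, if any
    }
    return fields;
  }
}
{code}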

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291981#comment-15291981
 ] 

Ray Chiang commented on YARN-4464:
--

This patch looks like it modifies the property 
yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size in 
yarn-default.xml, not 
yarn.resourcemanager.state-store.max-completed-applications.  Or am I missing 
something?

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch
>
>
> my cluster has 120 nodes.
> I configured RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}},
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to change it to a lower value or document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-05-19 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4876:
--
Attachment: YARN-4876.002.patch

> [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop
> --
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.002.patch, 
> YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container API into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from the initialization. This will allow AMs to re-start a container without 
> having to lose the allocation.
> Additionally, if the localization of the container is associated with the 
> *initialize* (and the cleanup with the *destroy*), this can also be used by 
> applications to upgrade a Container by *re-initializing* it with a new 
> *ContainerLaunchContext*.
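
One way to picture the proposed split, as a hypothetical interface sketch (method and 
type names are illustrative, not the actual ContainerManagementProtocol additions):

{code}
// Hypothetical sketch of the decoupled lifecycle; names are illustrative only.
interface ContainerLifecycle {

  /** Localize resources and set the container up, without launching the process. */
  void initializeContainer(String containerId, ContainerSpec spec);

  /** Start (or restart) the process for an already-initialized container,
      keeping the existing allocation. */
  void startContainer(String containerId);

  /** Stop the process but keep the allocation and localized resources. */
  void stopContainer(String containerId);

  /** Tear everything down and release the allocation. */
  void destroyContainer(String containerId);

  /** Re-initialize with a new spec, e.g. to upgrade the container in place. */
  void reinitializeContainer(String containerId, ContainerSpec newSpec);
}

/** Minimal placeholder for whatever launch context the container needs. */
class ContainerSpec {
  String launchCommand;
}
{code}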



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5055) max per user can be larger than max per queue

2016-05-19 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5055:
--
Attachment: YARN-5055.002.patch

Fixing checkstyle.

> max per user can be larger than max per queue
> -
>
> Key: YARN-5055
> URL: https://issues.apache.org/jira/browse/YARN-5055
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Eric Badger
>Priority: Minor
> Attachments: YARN-5055.001.patch, YARN-5055.002.patch
>
>
> If user limit and/or user limit factor are >100% then the calculated maximum 
> values per user can exceed the maximum for the queue.  For example, maximum 
> AM resource per user could exceed maximum AM resource for the entire queue, 
> or max applications per user could be larger than max applications for the 
> queue.  The per-user values should be capped by the per queue values.
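
The fix the description implies is essentially a clamp of the per-user values; a tiny 
illustrative sketch (not the CapacityScheduler code):

{code}
// Illustrative only: per-user limits derived from user-limit / user-limit-factor
// should never exceed the corresponding per-queue maximums.
public class UserLimitClamp {
  static int capMaxAppsPerUser(int computedMaxAppsPerUser, int maxAppsPerQueue) {
    return Math.min(computedMaxAppsPerUser, maxAppsPerQueue);
  }

  static long capMaxAmResourcePerUser(long computedAmLimitMb, long queueAmLimitMb) {
    return Math.min(computedAmLimitMb, queueAmLimitMb);
  }

  public static void main(String[] args) {
    // With user-limit-factor > 1, the naive per-user value can exceed the queue max.
    int maxAppsPerQueue = 100;
    int naivePerUser = (int) (maxAppsPerQueue * 1.5);   // 150
    System.out.println(capMaxAppsPerUser(naivePerUser, maxAppsPerQueue));  // prints 100
  }
}
{code}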



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5055) max per user can be larger than max per queue

2016-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291962#comment-15291962
 ] 

Hadoop QA commented on YARN-5055:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 146 unchanged - 0 fixed = 147 total (was 146) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 57s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804768/YARN-5055.001.patch |
| JIRA Issue | YARN-5055 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eb425b9eefa5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 141873c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11562/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11562/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  

[jira] [Commented] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-05-19 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291963#comment-15291963
 ] 

Arun Suresh commented on YARN-4876:
---

Attaching rebased patch..

> [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop
> --
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.002.patch, 
> YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container APIs into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from its initialization. This will allow AMs to restart a container without 
> losing the allocation.
> Additionally, if the localization of the container is associated with the 
> initialize (and the cleanup with the destroy), this can also be used by 
> applications to upgrade a container by *re-initializing* it with a new 
> *ContainerLaunchContext*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4876) [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop

2016-05-19 Thread Marco Rabozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291954#comment-15291954
 ] 

Marco Rabozzi commented on YARN-4876:
-

Thanks [~vvasudev] for looking at the patch!
Regarding the allowsRestart() function: it indicates whether the AM is allowed 
to issue multiple start or initialize requests for the container. If the AM 
starts a container with the standard startContainers API, allowsRestart() 
returns false and re-initialization or restart is not allowed, reflecting the 
current API's behavior. On the other hand, if the AM uses the new 
initializeContainers API, allowsRestart() returns true and the AM can issue 
multiple startContainers and initializeContainers requests for the container 
(a small illustrative sketch follows below).
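To make the behavior above concrete, here is a tiny hedged sketch of how a container record could gate repeated start or initialize requests on how the container was first created; it is illustrative only and not the NodeManager's actual implementation:
{code}
// Illustrative sketch, not the NodeManager's actual Container class.
final class ContainerRecordSketch {
  private final boolean allowsRestart;
  private boolean started;

  // allowsRestart is true only when the container was created through the new
  // initializeContainers path, false for the classic startContainers path.
  ContainerRecordSketch(boolean createdViaInitialize) {
    this.allowsRestart = createdViaInitialize;
  }

  boolean allowsRestart() {
    return allowsRestart;
  }

  // A repeated start (or re-initialize) is rejected unless restart is allowed.
  void start() {
    if (started && !allowsRestart) {
      throw new IllegalStateException("container already started; restart not allowed");
    }
    started = true;
  }
}
{code}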

> [Phase 1] Decoupled Init / Destroy of Containers from Start / Stop
> --
>
> Key: YARN-4876
> URL: https://issues.apache.org/jira/browse/YARN-4876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Marco Rabozzi
> Attachments: YARN-4876-design-doc.pdf, YARN-4876.01.patch
>
>
> Introduce *initialize* and *destroy* container APIs into the 
> *ContainerManagementProtocol* and decouple the actual start of a container 
> from its initialization. This will allow AMs to restart a container without 
> losing the allocation.
> Additionally, if the localization of the container is associated with the 
> initialize (and the cleanup with the destroy), this can also be used by 
> applications to upgrade a container by *re-initializing* it with a new 
> *ContainerLaunchContext*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-19 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5109:
---
Attachment: YARN-5109-YARN-2928.01.patch

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") in the timestamp bytes causes the parse 
> error (a possible encoding approach is sketched below).
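One possible way to avoid the collision (a sketch under assumptions, not necessarily the approach taken in the attached patch) is to serialize the timestamp to fixed-width bytes and escape any byte that matches a separator before writing it. The separator and escape bytes below are placeholders, not the timeline service's actual values:
{code}
import java.util.Arrays;

public final class TimestampEncodingSketch {
  // Placeholder values for illustration only.
  private static final byte SEPARATOR = (byte) '=';
  private static final byte ESCAPE = (byte) '%';

  // Serialize a long timestamp to 8 big-endian bytes.
  static byte[] toBytes(long ts) {
    byte[] bytes = new byte[8];
    for (int i = 7; i >= 0; i--) {
      bytes[i] = (byte) (ts & 0xFF);
      ts >>>= 8;
    }
    return bytes;
  }

  // Escape any byte that collides with the separator (or the escape byte
  // itself) so that splitting the stored column name on the separator is
  // unambiguous when reading it back.
  static byte[] encode(byte[] raw) {
    byte[] out = new byte[raw.length * 2];
    int n = 0;
    for (byte b : raw) {
      if (b == SEPARATOR || b == ESCAPE) {
        out[n++] = ESCAPE;
      }
      out[n++] = b;
    }
    return Arrays.copyOf(out, n);
  }
}
{code}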



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5115) Security risk by using CONTENT-DISPOSITION header

2016-05-19 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291905#comment-15291905
 ] 

Xuan Gong commented on YARN-5115:
-

Trivial patch that simply removes the CONTENT-DISPOSITION header in 
NMWebService/AHSWebService (a rough illustration of the change follows below).
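As a rough JAX-RS-style illustration of the change (the class and method names are made up; this is not the actual web-service code), the response simply stops setting the header while still returning the log body:
{code}
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Sketch only: a stand-in for a log-serving endpoint, not the real
// NMWebService/AHSWebService implementation.
final class LogResponseSketch {

  // Before: the response advertised a download filename via Content-Disposition.
  static Response withContentDisposition(String logText, String fileName) {
    return Response.ok(logText, MediaType.TEXT_PLAIN)
        .header("Content-Disposition", "attachment; filename=" + fileName)
        .build();
  }

  // After: the header is omitted; clients can still view or save the log body.
  static Response withoutContentDisposition(String logText) {
    return Response.ok(logText, MediaType.TEXT_PLAIN).build();
  }
}
{code}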

> Security risk by using CONTENT-DISPOSITION header
> -
>
> Key: YARN-5115
> URL: https://issues.apache.org/jira/browse/YARN-5115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5115.1.patch
>
>
> In NMWebService/AHSWebService, we use the CONTENT-DISPOSITION header to 
> show/download container logs. This header carries known security risks, and 
> attacks based on Content-Disposition handling have been documented.
> The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
> effects of content disposition:
> {code}
> 15.5 Content-Disposition Issues
>RFC 1806 [35], from which the often implemented Content-Disposition
>(see section 19.5.1) header in HTTP is derived, has a number of very
>serious security considerations. Content-Disposition is not part of
>the HTTP standard, but since it is widely implemented, we are
>documenting its use and risks for implementors. See RFC 2183 [49]
>(which updates RFC 1806) for details.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4464:
---
Attachment: YARN-4464.002.patch

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch
>
>
> My cluster has 120 nodes, and I configured the RM Restart feature:
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so the 
> property kept its default value of 10,000.
> When I restarted the RM to change another configuration, I expected it to 
> come back immediately, but recovery was very slow: I waited about 20 minutes 
> before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> from my configuration.
> Its default value is far too large.  We should either lower the default 
> (an example override is shown below) or add a note to the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].
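For anyone hitting the same slow recovery in the meantime, explicitly overriding the property keeps the state store small; the value below is only an example, not a recommended default:
{code}
yarn.resourcemanager.state-store.max-completed-applications=500
{code}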



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-19 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4464:
---
Attachment: (was: YARN-4664.002.patch)

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch
>
>
> My cluster has 120 nodes, and I configured the RM Restart feature:
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so the 
> property kept its default value of 10,000.
> When I restarted the RM to change another configuration, I expected it to 
> come back immediately, but recovery was very slow: I waited about 20 minutes 
> before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> from my configuration.
> Its default value is far too large.  We should either lower the default or 
> add a note to the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5115) Security risk by using CONTENT-DISPOSITION header

2016-05-19 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291907#comment-15291907
 ] 

Xuan Gong commented on YARN-5115:
-

I did manual testing without setting the CONTENT-DISPOSITION header; we could 
still view and download the log files from NMWebService/AHSWebService.

> Security risk by using CONTENT-DISPOSITION header
> -
>
> Key: YARN-5115
> URL: https://issues.apache.org/jira/browse/YARN-5115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5115.1.patch
>
>
> In NMWebService/AHSWebService, we use the CONTENT-DISPOSITION header to 
> show/download container logs. This header carries known security risks, and 
> attacks based on Content-Disposition handling have been documented.
> The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
> effects of content disposition:
> {code}
> 15.5 Content-Disposition Issues
>RFC 1806 [35], from which the often implemented Content-Disposition
>(see section 19.5.1) header in HTTP is derived, has a number of very
>serious security considerations. Content-Disposition is not part of
>the HTTP standard, but since it is widely implemented, we are
>documenting its use and risks for implementors. See RFC 2183 [49]
>(which updates RFC 1806) for details.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5115) Security risk by using CONTENT-DISPOSITION header

2016-05-19 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5115:

Attachment: YARN-5115.1.patch

> Security risk by using CONTENT-DISPOSITION header
> -
>
> Key: YARN-5115
> URL: https://issues.apache.org/jira/browse/YARN-5115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5115.1.patch
>
>
> In NMWebService/AHSWebService, we use the CONTENT-DISPOSITION header to 
> show/download container logs. This header carries known security risks, and 
> attacks based on Content-Disposition handling have been documented.
> The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
> effects of content disposition:
> {code}
> 15.5 Content-Disposition Issues
>RFC 1806 [35], from which the often implemented Content-Disposition
>(see section 19.5.1) header in HTTP is derived, has a number of very
>serious security considerations. Content-Disposition is not part of
>the HTTP standard, but since it is widely implemented, we are
>documenting its use and risks for implementors. See RFC 2183 [49]
>(which updates RFC 1806) for details.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5115) Security risk by using CONTENT-DISPOSITION header

2016-05-19 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-5115:
---

 Summary: Security risk by using CONTENT-DISPOSITION header
 Key: YARN-5115
 URL: https://issues.apache.org/jira/browse/YARN-5115
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xuan Gong


In NMWebService/AHSWebService, we use the CONTENT-DISPOSITION header to 
show/download container logs. This header carries known security risks, and 
attacks based on Content-Disposition handling have been documented.
The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
effects of content disposition:
{code}
15.5 Content-Disposition Issues

   RFC 1806 [35], from which the often implemented Content-Disposition
   (see section 19.5.1) header in HTTP is derived, has a number of very
   serious security considerations. Content-Disposition is not part of
   the HTTP standard, but since it is widely implemented, we are
   documenting its use and risks for implementors. See RFC 2183 [49]
   (which updates RFC 1806) for details.
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5115) Security risk by using CONTENT-DISPOSITION header

2016-05-19 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong reassigned YARN-5115:
---

Assignee: Xuan Gong

> Security risk by using CONTENT-DISPOSITION header
> -
>
> Key: YARN-5115
> URL: https://issues.apache.org/jira/browse/YARN-5115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>
> In NMWebService/AHSWebService, we use the CONTENT-DISPOSITION header to 
> show/download container logs. This header carries known security risks, and 
> attacks based on Content-Disposition handling have been documented.
> The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
> effects of content disposition:
> {code}
> 15.5 Content-Disposition Issues
>RFC 1806 [35], from which the often implemented Content-Disposition
>(see section 19.5.1) header in HTTP is derived, has a number of very
>serious security considerations. Content-Disposition is not part of
>the HTTP standard, but since it is widely implemented, we are
>documenting its use and risks for implementors. See RFC 2183 [49]
>(which updates RFC 1806) for details.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


