[jira] [Commented] (YARN-3070) TestRMAdminCLI#testHelp fails for transitionToActive command

2015-01-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281667#comment-14281667
 ] 

Hadoop QA commented on YARN-3070:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692959/YARN-3070.patch
  against trunk revision 24315e7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client:

  org.apache.hadoop.yarn.client.TestResourceTrackerOnHA
  
org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA
  org.apache.hadoop.yarn.client.cli.TestRMAdminCLI

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6355//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6355//console

This message is automatically generated.

> TestRMAdminCLI#testHelp fails for transitionToActive command
> 
>
> Key: YARN-3070
> URL: https://issues.apache.org/jira/browse/YARN-3070
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Junping Du
>Priority: Minor
> Attachments: YARN-3070.patch
>
>
> {code}
>   testError(new String[] { "-help", "-transitionToActive" },
>   "Usage: yarn rmadmin [-transitionToActive " +
>   " [--forceactive]]", dataErr, 0);
> {code}
> fails with:
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testError(TestRMAdminCLI.java:547)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testHelp(TestRMAdminCLI.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3070) TestRMAdminCLI#testHelp fails for transitionToActive command

2015-01-17 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281665#comment-14281665
 ] 

Ted Yu commented on YARN-3070:
--

Thanks Junping for taking care of this.

> TestRMAdminCLI#testHelp fails for transitionToActive command
> 
>
> Key: YARN-3070
> URL: https://issues.apache.org/jira/browse/YARN-3070
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Junping Du
>Priority: Minor
> Attachments: YARN-3070.patch
>
>
> {code}
>   testError(new String[] { "-help", "-transitionToActive" },
>   "Usage: yarn rmadmin [-transitionToActive " +
>   " [--forceactive]]", dataErr, 0);
> {code}
> fails with:
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testError(TestRMAdminCLI.java:547)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testHelp(TestRMAdminCLI.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3070) TestRMAdminCLI#testHelp fails for transitionToActive command

2015-01-17 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-3070:
-
Attachment: YARN-3070.patch

Thanks [~te...@apache.org] for reporting this. 
Delivering a quick patch to fix it; it also adds more detail to the assertion check.
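
For context: a bare {{assertTrue}} reports only "java.lang.AssertionError: null", 
which is why the stack trace above carries no detail. A minimal sketch of the 
kind of assertion change described (hypothetical, not the actual diff; assumes 
dataErr captures the CLI's stderr as in the test):

{code}
// Hypothetical illustration only, not taken from YARN-3070.patch.
String expected =
    "Usage: yarn rmadmin [-transitionToActive <serviceId> [--forceactive]]";
String actual = dataErr.toString();

// Before: on failure JUnit prints only "java.lang.AssertionError: null".
assertTrue(actual.contains(expected));

// After: the failure message shows the expected and actual help text.
assertTrue("Expected help output to contain [" + expected + "] but was ["
    + actual + "]", actual.contains(expected));
{code}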

> TestRMAdminCLI#testHelp fails for transitionToActive command
> 
>
> Key: YARN-3070
> URL: https://issues.apache.org/jira/browse/YARN-3070
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Junping Du
>Priority: Minor
> Attachments: YARN-3070.patch
>
>
> {code}
>   testError(new String[] { "-help", "-transitionToActive" },
>   "Usage: yarn rmadmin [-transitionToActive " +
>   " [--forceactive]]", dataErr, 0);
> {code}
> fails with:
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testError(TestRMAdminCLI.java:547)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testHelp(TestRMAdminCLI.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2990) FairScheduler's delay-scheduling always waits for node-local and rack-local delays, even for off-rack-only requests

2015-01-17 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281606#comment-14281606
 ] 

Anubhav Dhoot commented on YARN-2990:
-

Can anyNodeLocalRequests work by simply checking getResourceRequests at a 
priority? Would that not include rack-local requests as well?
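
For illustration: the requests recorded at a priority are keyed by resource 
name, which can be a host, a rack, or ResourceRequest.ANY ("*"), so a check 
that merely looks for any request at the priority would also match rack-local 
and off-switch entries. A hypothetical helper (names assumed, not from the 
patch) would have to filter on the names:

{code}
import java.util.Collection;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

// Hypothetical sketch: treats anything that is neither ANY ("*") nor a rack
// (by topology convention rack names start with "/") as a concrete host,
// i.e. a node-local request.
static boolean anyNodeLocalRequests(Collection<ResourceRequest> requests) {
  for (ResourceRequest r : requests) {
    String name = r.getResourceName();
    if (!ResourceRequest.ANY.equals(name) && !name.startsWith("/")) {
      return true;
    }
  }
  return false;
}
{code}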

> FairScheduler's delay-scheduling always waits for node-local and rack-local 
> delays, even for off-rack-only requests
> ---
>
> Key: YARN-2990
> URL: https://issues.apache.org/jira/browse/YARN-2990
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-2990-0.patch, yarn-2990-test.patch
>
>
> Looking at the FairScheduler, it appears the node/rack locality delays are 
> used for all requests, even those that are only off-rack. 
> More details in comments. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-3070) TestRMAdminCLI#testHelp fails for transitionToActive command

2015-01-17 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned YARN-3070:


Assignee: Junping Du

> TestRMAdminCLI#testHelp fails for transitionToActive command
> 
>
> Key: YARN-3070
> URL: https://issues.apache.org/jira/browse/YARN-3070
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Junping Du
>Priority: Minor
>
> {code}
>   testError(new String[] { "-help", "-transitionToActive" },
>   "Usage: yarn rmadmin [-transitionToActive " +
>   " [--forceactive]]", dataErr, 0);
> {code}
> fails with:
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testError(TestRMAdminCLI.java:547)
>   at 
> org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testHelp(TestRMAdminCLI.java:335)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3030) set up ATS writer with basic request serving structure and lifecycle

2015-01-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281552#comment-14281552
 ] 

Sangjin Lee commented on YARN-3030:
---

It would be nice if there were a way to do it with the build server. If that is 
not feasible, I'd say run test-patch.sh by hand to ensure basic issues are 
caught. Even if it adds a little bit of time for each patch, it'd be great not 
to accumulate tech debt.

I just did it locally:

{noformat}
dev-support/test-patch.sh --dirty-workspace --build-native=false 
YARN-3030.001.patch
{noformat}

It's green.

{color:green}+1 overall{color}.  

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version ) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.


> set up ATS writer with basic request serving structure and lifecycle
> 
>
> Key: YARN-3030
> URL: https://issues.apache.org/jira/browse/YARN-3030
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: YARN-3030.001.patch
>
>
> Per the design in YARN-2928, create an ATS writer as a service, and implement 
> the basic service structure, including lifecycle management.
> Also, as part of this JIRA, we should come up with the ATS client API for 
> sending requests to this ATS writer.
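
The basic structure the description above calls for would presumably follow 
Hadoop's existing service pattern; a minimal sketch, assuming the writer 
extends org.apache.hadoop.service.CompositeService (class name and method 
bodies are illustrative, not taken from YARN-3030.001.patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.CompositeService;

public class TimelineWriterService extends CompositeService {

  public TimelineWriterService() {
    super(TimelineWriterService.class.getName());
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // Read configuration and add any sub-services before delegating.
    super.serviceInit(conf);
  }

  @Override
  protected void serviceStart() throws Exception {
    // Bring up the request-serving endpoint.
    super.serviceStart();
  }

  @Override
  protected void serviceStop() throws Exception {
    // Release resources; registered sub-services are stopped by the parent.
    super.serviceStop();
  }
}
{code}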



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2984) Metrics for container's actual memory usage

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281417#comment-14281417
 ] 

Hudson commented on YARN-2984:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2027 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2027/])
YARN-2984. Metrics for container's actual memory usage. (kasha) (kasha: rev 
84198564ba6028d51c1fcf9cdcb87f6ae6e08513)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsCollectorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


> Metrics for container's actual memory usage
> ---
>
> Key: YARN-2984
> URL: https://issues.apache.org/jira/browse/YARN-2984
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.7.0
>
> Attachments: yarn-2984-1.patch, yarn-2984-2.patch, yarn-2984-3.patch, 
> yarn-2984-prelim.patch
>
>
> It would be nice to capture resource usage per container, for a variety of 
> reasons. This JIRA is to track memory usage. 
> YARN-2965 tracks the resource usage on the node, and the two implementations 
> should reuse code as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2815) Remove jline from hadoop-yarn-server-common

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281416#comment-14281416
 ] 

Hudson commented on YARN-2815:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2027 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2027/])
YARN-2815. Excluded transitive dependency of JLine in 
hadoop-yarn-server-common. Contributed by Ferdinand Xu. (zjshen: rev 
43302f6f44f97d67069eefdda986b6da2933393e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* hadoop-yarn-project/CHANGES.txt


> Remove jline from hadoop-yarn-server-common
> ---
>
> Key: YARN-2815
> URL: https://issues.apache.org/jira/browse/YARN-2815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Fix For: 2.7.0
>
> Attachments: YARN-2815.patch
>
>
> hadoop-yarn-server-common bundles the ancient jline-0.9.94 as a transitive 
> dependency of zookeeper-3.4.6 (it is used in the ZK CLI and not in Hadoop). 
> Beeline is moving to JLine2 and has to exclude this jline dependency so that 
> the hadoop classpath can use its own version (there is a newer JLine 2.12 
> that contains history search etc.).
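
For reference, the exclusion described above is done in Maven roughly like 
this, in the zookeeper dependency of hadoop-yarn-server-common/pom.xml (a 
sketch of the mechanism only; the actual patch may differ in detail):

{code:xml}
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <!-- Keep the ancient jline-0.9.94 off the hadoop classpath. -->
    <exclusion>
      <groupId>jline</groupId>
      <artifactId>jline</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}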



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2984) Metrics for container's actual memory usage

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281402#comment-14281402
 ] 

Hudson commented on YARN-2984:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #77 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/77/])
YARN-2984. Metrics for container's actual memory usage. (kasha) (kasha: rev 
84198564ba6028d51c1fcf9cdcb87f6ae6e08513)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsCollectorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java


> Metrics for container's actual memory usage
> ---
>
> Key: YARN-2984
> URL: https://issues.apache.org/jira/browse/YARN-2984
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.7.0
>
> Attachments: yarn-2984-1.patch, yarn-2984-2.patch, yarn-2984-3.patch, 
> yarn-2984-prelim.patch
>
>
> It would be nice to capture resource usage per container, for a variety of 
> reasons. This JIRA is to track memory usage. 
> YARN-2965 tracks the resource usage on the node, and the two implementations 
> should reuse code as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2815) Remove jline from hadoop-yarn-server-common

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281401#comment-14281401
 ] 

Hudson commented on YARN-2815:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #77 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/77/])
YARN-2815. Excluded transitive dependency of JLine in 
hadoop-yarn-server-common. Contributed by Ferdinand Xu. (zjshen: rev 
43302f6f44f97d67069eefdda986b6da2933393e)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml


> Remove jline from hadoop-yarn-server-common
> ---
>
> Key: YARN-2815
> URL: https://issues.apache.org/jira/browse/YARN-2815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Fix For: 2.7.0
>
> Attachments: YARN-2815.patch
>
>
> hadoop-yarn-server-common bundles the ancient jline-0.9.94 as a transitive 
> dependency of zookeeper-3.4.6 (it is used in the ZK CLI and not in Hadoop). 
> Beeline is moving to JLine2 and has to exclude this jline dependency so that 
> the hadoop classpath can use its own version (there is a newer JLine 2.12 
> that contains history search etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-3070) TestRMAdminCLI#testHelp fails for transitionToActive command

2015-01-17 Thread Ted Yu (JIRA)
Ted Yu created YARN-3070:


 Summary: TestRMAdminCLI#testHelp fails for transitionToActive 
command
 Key: YARN-3070
 URL: https://issues.apache.org/jira/browse/YARN-3070
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


{code}
  testError(new String[] { "-help", "-transitionToActive" },
  "Usage: yarn rmadmin [-transitionToActive " +
  " [--forceactive]]", dataErr, 0);
{code}
fails with:
{code}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testError(TestRMAdminCLI.java:547)
at 
org.apache.hadoop.yarn.client.cli.TestRMAdminCLI.testHelp(TestRMAdminCLI.java:335)
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2815) Remove jline from hadoop-yarn-server-common

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281368#comment-14281368
 ] 

Hudson commented on YARN-2815:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/73/])
YARN-2815. Excluded transitive dependency of JLine in 
hadoop-yarn-server-common. Contributed by Ferdinand Xu. (zjshen: rev 
43302f6f44f97d67069eefdda986b6da2933393e)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml


> Remove jline from hadoop-yarn-server-common
> ---
>
> Key: YARN-2815
> URL: https://issues.apache.org/jira/browse/YARN-2815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Fix For: 2.7.0
>
> Attachments: YARN-2815.patch
>
>
> hadoop-yarn-server-common bundles the ancient jline-0.9.94 as a transitive 
> dependency of zookeeper-3.4.6 (it is used in the ZK CLI and not in Hadoop). 
> Beeline is moving to JLine2 and has to exclude this jline dependency so that 
> the hadoop classpath can use its own version (there is a newer JLine 2.12 
> that contains history search etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2984) Metrics for container's actual memory usage

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281374#comment-14281374
 ] 

Hudson commented on YARN-2984:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2008 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2008/])
YARN-2984. Metrics for container's actual memory usage. (kasha) (kasha: rev 
84198564ba6028d51c1fcf9cdcb87f6ae6e08513)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsCollectorImpl.java


> Metrics for container's actual memory usage
> ---
>
> Key: YARN-2984
> URL: https://issues.apache.org/jira/browse/YARN-2984
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.7.0
>
> Attachments: yarn-2984-1.patch, yarn-2984-2.patch, yarn-2984-3.patch, 
> yarn-2984-prelim.patch
>
>
> It would be nice to capture resource usage per container, for a variety of 
> reasons. This JIRA is to track memory usage. 
> YARN-2965 tracks the resource usage on the node, and the two implementations 
> should reuse code as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2815) Remove jline from hadoop-yarn-server-common

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281373#comment-14281373
 ] 

Hudson commented on YARN-2815:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2008 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2008/])
YARN-2815. Excluded transitive dependency of JLine in 
hadoop-yarn-server-common. Contributed by Ferdinand Xu. (zjshen: rev 
43302f6f44f97d67069eefdda986b6da2933393e)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml


> Remove jline from hadoop-yarn-server-common
> ---
>
> Key: YARN-2815
> URL: https://issues.apache.org/jira/browse/YARN-2815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Fix For: 2.7.0
>
> Attachments: YARN-2815.patch
>
>
> hadoop-yarn-server-common bundles the ancient jline-0.9.94 as a transitive 
> dependency of zookeeper-3.4.6 (it is used in the ZK CLI and not in Hadoop). 
> Beeline is moving to JLine2 and has to exclude this jline dependency so that 
> the hadoop classpath can use its own version (there is a newer JLine 2.12 
> that contains history search etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2984) Metrics for container's actual memory usage

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281369#comment-14281369
 ] 

Hudson commented on YARN-2984:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #73 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/73/])
YARN-2984. Metrics for container's actual memory usage. (kasha) (kasha: rev 
84198564ba6028d51c1fcf9cdcb87f6ae6e08513)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsCollectorImpl.java


> Metrics for container's actual memory usage
> ---
>
> Key: YARN-2984
> URL: https://issues.apache.org/jira/browse/YARN-2984
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.7.0
>
> Attachments: yarn-2984-1.patch, yarn-2984-2.patch, yarn-2984-3.patch, 
> yarn-2984-prelim.patch
>
>
> It would be nice to capture resource usage per container, for a variety of 
> reasons. This JIRA is to track memory usage. 
> YARN-2965 tracks the resource usage on the node, and the two implementations 
> should reuse code as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3067) Read-only REST API for System Configuration

2015-01-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281327#comment-14281327
 ] 

Steve Loughran commented on YARN-3067:
--

Doug: I agree this would be good.

Furthermore, /conf isn't guaranteed to be useful to clients, as it can include 
things like 0 ports & wildcarded hosts which are bound at startup time.

For Slider we have written a REST API which [serves up 
configurations|https://github.com/apache/incubator-slider/blob/b70d830aee6fc0171cb36fff0604b310dc565e3e/slider-core/src/main/java/org/apache/slider/server/appmaster/web/rest/publisher/PublisherResource.java]
 of both the AM and of the applications we deploy (e.g. hbase-site.xml).

Some aspects of it we like:
# depending on the extension of the URL (.xml, .json or .properties), the 
content comes back in the appropriate format (see the example requests below)
# you can ask for a single specific property by appending it to the file, 
getting its value back or a 404
# it registers itself in the YARN registry, so you can find it; it is designed 
to be reusable


But:
* While it serves up the dynamic values of deployed apps, I think it still 
doesn't include any dynamically resolved ports/hosts used to deploy the 
AM; that's something I'd like fixed.
* You may want to specify a filter for secrets like S3 credentials.

For a YARN r/o API, I'd like to see the same feature set, and have something 
standardised enough that Slider (and other things, like HBase, HDFS, ...) could 
also implement if they chose to serve up their configs the same way, and we'd 
have a reference implementation for them to use.

If you want to take what we've done and use it as the basis for a YARN/re-usable 
API, I'll help.
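
To make the extension behaviour above concrete, requests against such an 
endpoint might look like this (illustrative paths only, not the exact Slider 
URLs):

{noformat}
GET .../publisher/hbase-site.xml          -> XML document
GET .../publisher/hbase-site.json         -> JSON document
GET .../publisher/hbase-site.properties   -> Java properties
GET .../publisher/hbase-site.json/hbase.zookeeper.quorum
    -> the single property value, or 404 if it is not defined
{noformat}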




> Read-only REST API for System Configuration
> ---
>
> Key: YARN-3067
> URL: https://issues.apache.org/jira/browse/YARN-3067
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api, client
>Affects Versions: 2.6.0
> Environment: REST API
>Reporter: Doug Haigh
>
> While the name node exposes its configuration via <namenode host:port>/conf 
> and the resource manager exposes its configuration via 
> <resourcemanager host:port>/conf, neither provides a complete picture of the 
> system's configuration for applications that use the YARN REST API.
> This JIRA is to request a REST API to get all services' configuration 
> information, whether it is stored in the resource manager or somewhere 
> external like ZooKeeper. Essentially this would return information similar to 
> what <namenode>/conf and <resourcemanager>/conf return, but be guaranteed to 
> have all information so that a REST API user would not require Hadoop 
> configuration files to know how services are set up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3067) Read-only REST API for System Configuration

2015-01-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-3067:
-
Summary: Read-only REST API for System Configuration  (was: REST API for 
System Configuration)

> Read-only REST API for System Configuration
> ---
>
> Key: YARN-3067
> URL: https://issues.apache.org/jira/browse/YARN-3067
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: api, client
>Affects Versions: 2.6.0
> Environment: REST API
>Reporter: Doug Haigh
>
> While the name node exposes its configuration via <namenode host:port>/conf 
> and the resource manager exposes its configuration via 
> <resourcemanager host:port>/conf, neither provides a complete picture of the 
> system's configuration for applications that use the YARN REST API.
> This JIRA is to request a REST API to get all services' configuration 
> information, whether it is stored in the resource manager or somewhere 
> external like ZooKeeper. Essentially this would return information similar to 
> what <namenode>/conf and <resourcemanager>/conf return, but be guaranteed to 
> have all information so that a REST API user would not require Hadoop 
> configuration files to know how services are set up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3068) Support secure HTTP communications between RM proxy and AM web endpoint

2015-01-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281320#comment-14281320
 ] 

Steve Loughran commented on YARN-3068:
--

Jon, I think you meant to say "the RM will create a shared secret"

Note that this feature will also catch the dev-time situation where, if your 
test code is running on the same host as the RM, you can talk directly to the AM 
without being 302'd to the proxy. That is, it enforces the same HTTP proxy 
chain in standalone & minicluster as in production.
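
For concreteness, the decrypt-and-validate step proposed in the issue below 
might look roughly like this on the AM side (hypothetical names, not from an 
actual patch; assumes the shared secret is a valid 128-bit AES key and that 
the proxy sends base64(AES(containerId + "/" + expiryMillis))):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch only. Container IDs contain no "/", so the
// separator between ID and expiry timestamp is unambiguous.
static boolean isTrustedProxyRequest(String token, byte[] sharedSecret,
    String expectedContainerId, long nowMillis) throws Exception {
  Cipher cipher = Cipher.getInstance("AES");
  cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(sharedSecret, "AES"));
  byte[] plain = cipher.doFinal(Base64.getDecoder().decode(token));
  String[] parts = new String(plain, StandardCharsets.UTF_8).split("/");
  // Accept only if the container ID matches and the expiry is in the future.
  return parts.length == 2
      && parts[0].equals(expectedContainerId)
      && nowMillis <= Long.parseLong(parts[1]);
}
{code}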

> Support secure HTTP communications between RM proxy and AM web endpoint
> ---
>
> Key: YARN-3068
> URL: https://issues.apache.org/jira/browse/YARN-3068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, resourcemanager
>Reporter: Jonathan Maron
>
> When exposing a web endpoint for UI and REST, an AM is dependent on the RM as 
> a proxy for incoming interactions.  The RM web proxy supports security 
> features such as SSL and SPNEGO.  However, those security mechanisms are not 
> supported by the AM, and supporting them directly at the AM would require 
> some complex implementation details and configuration (not to mention that 
> given the proxying relationship they may be considered somewhat redundant).
> In order to ensure that there is a measure of security (trust) between the RM 
> web proxy and the AM, the following mechanism is suggested:
> - The AM will create a shared secret and propagate it to the AM during AM 
> launch (e.g. it could be part of the existing credentials).
> - The web proxy will leverage the shared secret to encrypt an agreed upon 
> text (e.g. the container ID) and an associated expiry time (to mitigate 
> potential request spoofing).
> - The AM will decrypt the text leveraging the shared secret and, if 
> successful and the expiry time has not been reached, proceed with the request 
> processing (probably appropriate to perform these checks in the existing 
> AmIpFilter or a specific trust filter).
> Note that this feature is key to supporting interactions between Knox and AM 
> REST resources, since those interactions depend on trusted proxy support the 
> RM can provide (via its current SPNEGO and "doAs" support), allowing AMs to 
> focus on performing their processing based on the established doAs identity 
> (established at the RM and related to the AM via a trusted path).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3068) Support secure HTTP communications between RM proxy and AM web endpoint

2015-01-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-3068:
-
Summary: Support secure HTTP communications between RM proxy and AM web 
endpoint  (was: shared secret for trust between RM and AM)

> Support secure HTTP communications between RM proxy and AM web endpoint
> ---
>
> Key: YARN-3068
> URL: https://issues.apache.org/jira/browse/YARN-3068
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: applications, resourcemanager
>Reporter: Jonathan Maron
>
> When exposing a web endpoint for UI and REST, an AM is dependent on the RM as 
> a proxy for incoming interactions.  The RM web proxy supports security 
> features such as SSL and SPNEGO.  However, those security mechanisms are not 
> supported by the AM, and supporting them directly at the AM would require 
> some complex implementation details and configuration (not to mention that 
> given the proxying relationship they may be considered somewhat redundant).
> In order to ensure that there is a measure of security (trust) between the RM 
> web proxy and the AM, the following mechanism is suggested:
> - The AM will create a shared secret and propagate it to the AM during AM 
> launch (e.g. it could be part of the existing credentials).
> - The web proxy will leverage the shared secret to encrypt an agreed upon 
> text (e.g. the container ID) and an associated expiry time (to mitigate 
> potential request spoofing).
> - The AM will decrypt the text leveraging the shared secret and, if 
> successful and the expiry time has not been reached, proceed with the request 
> processing (probably appropriate to perform these checks in the existing 
> AmIpFilter or a specific trust filter).
> Note that this feature is key to supporting interactions between Knox and AM 
> REST resources, since those interactions depend on trusted proxy support the 
> RM can provide (via its current SPNEGO and "doAs" support), allowing AMs to 
> focus on performing their processing based on the established doAs identity 
> (established at the RM and related to the AM via a trusted path).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2984) Metrics for container's actual memory usage

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281310#comment-14281310
 ] 

Hudson commented on YARN-2984:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #810 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/810/])
YARN-2984. Metrics for container's actual memory usage. (kasha) (kasha: rev 
84198564ba6028d51c1fcf9cdcb87f6ae6e08513)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsCollectorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java


> Metrics for container's actual memory usage
> ---
>
> Key: YARN-2984
> URL: https://issues.apache.org/jira/browse/YARN-2984
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.7.0
>
> Attachments: yarn-2984-1.patch, yarn-2984-2.patch, yarn-2984-3.patch, 
> yarn-2984-prelim.patch
>
>
> It would be nice to capture resource usage per container, for a variety of 
> reasons. This JIRA is to track memory usage. 
> YARN-2965 tracks the resource usage on the node, and the two implementations 
> should reuse code as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2815) Remove jline from hadoop-yarn-server-common

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281309#comment-14281309
 ] 

Hudson commented on YARN-2815:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #810 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/810/])
YARN-2815. Excluded transitive dependency of JLine in 
hadoop-yarn-server-common. Contributed by Ferdinand Xu. (zjshen: rev 
43302f6f44f97d67069eefdda986b6da2933393e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* hadoop-yarn-project/CHANGES.txt


> Remove jline from hadoop-yarn-server-common
> ---
>
> Key: YARN-2815
> URL: https://issues.apache.org/jira/browse/YARN-2815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Fix For: 2.7.0
>
> Attachments: YARN-2815.patch
>
>
> hadoop-yarn-server-common bundles the ancient jline-0.9.94 as a transitive 
> dependency of zookeeper-3.4.6 (it is used in the ZK CLI and not in Hadoop). 
> Beeline is moving to JLine2 and has to exclude this jline dependency so that 
> the hadoop classpath can use its own version (there is a newer JLine 2.12 
> that contains history search etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2815) Remove jline from hadoop-yarn-server-common

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281303#comment-14281303
 ] 

Hudson commented on YARN-2815:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #76 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/76/])
YARN-2815. Excluded transitive dependency of JLine in 
hadoop-yarn-server-common. Contributed by Ferdinand Xu. (zjshen: rev 
43302f6f44f97d67069eefdda986b6da2933393e)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml


> Remove jline from hadoop-yarn-server-common
> ---
>
> Key: YARN-2815
> URL: https://issues.apache.org/jira/browse/YARN-2815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Fix For: 2.7.0
>
> Attachments: YARN-2815.patch
>
>
> hadoop-yarn-server-common bundles the ancient jline-0.9.94 as a transitive 
> dependency of zookeeper-3.4.6 (it is used in the ZK CLI and not in Hadoop). 
> Beeline is moving to JLine2 and has to exclude this jline dependency so that 
> the hadoop classpath can use its own version (there is a newer JLine 2.12 
> that contains history search etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2984) Metrics for container's actual memory usage

2015-01-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281304#comment-14281304
 ] 

Hudson commented on YARN-2984:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #76 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/76/])
YARN-2984. Metrics for container's actual memory usage. (kasha) (kasha: rev 
84198564ba6028d51c1fcf9cdcb87f6ae6e08513)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainerMetrics.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsCollectorImpl.java


> Metrics for container's actual memory usage
> ---
>
> Key: YARN-2984
> URL: https://issues.apache.org/jira/browse/YARN-2984
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.7.0
>
> Attachments: yarn-2984-1.patch, yarn-2984-2.patch, yarn-2984-3.patch, 
> yarn-2984-prelim.patch
>
>
> It would be nice to capture resource usage per container, for a variety of 
> reasons. This JIRA is to track memory usage. 
> YARN-2965 tracks the resource usage on the node, and the two implementations 
> should reuse code as much as possible. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1021) Yarn Scheduler Load Simulator

2015-01-17 Thread Fabio Colzada (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281276#comment-14281276
 ] 

Fabio Colzada commented on YARN-1021:
-

Hi, I am working with SLS on Hadoop 2.6.0. I really need it, but I'm struggling 
to get it running at its best. The simulation completes on the command line and 
I can see the log entries on screen, but:

1- the web interface is not working. On screen I can see this exception more or 
less at the beginning of the workflow:
java.lang.NullPointerException
        at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:86)
        at org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.initMetrics(ResourceSchedulerWrapper.java:477)
        at org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.setConf(ResourceSchedulerWrapper.java:176)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createScheduler(ResourceManager.java:291)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:484)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:989)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:255)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.sls.SLSRunner.startRM(SLSRunner.java:167)
        at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:141)
        at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)

Not sure which object is null, but I see that the folder sls/html has the 
expected files.

2- I don't get the files realtimetrack.json or jobruntime.csv, while the metrics 
folder is correctly populated. I see some recurring exceptions; I don't know if 
they are related, since they don't prevent the simulation from terminating:
java.lang.NullPointerException
        at org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.addAMRuntime(ResourceSchedulerWrapper.java:735)
        at org.apache.hadoop.yarn.sls.appmaster.AMSimulator.lastStep(AMSimulator.java:193)
        at org.apache.hadoop.yarn.sls.appmaster.MRAMSimulator.lastStep(MRAMSimulator.java:396)
        at org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:100)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

and also

Exception in thread "pool-5-thread-374" java.lang.NullPointerException
        at org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:104)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Any help is really appreciated.

> Yarn Scheduler Load Simulator
> -
>
> Key: YARN-1021
> URL: https://issues.apache.org/jira/browse/YARN-1021
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: Wei Yan
>Assignee: Wei Yan
> Fix For: 2.3.0
>
> Attachments: YARN-1021-demo.tar.gz, YARN-1021-images.tar.gz, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, 
> YARN-1021.patch, YARN-1021.patch, YARN-1021.patch, YARN-1021.pdf
>
>
> The Yarn Scheduler is a fertile area of interest with different 
> implementations, e.g., the Fifo, Capacity and Fair schedulers. Meanwhile, 
> several optimizations are also made to improve scheduler performance for 
> different scenarios and workloads. Each scheduler algorithm has its own set of 
> features, and drives scheduling decisions by many factors, such as fairness, 
> capacity guarantee, resource availability, etc. It is very important to 
> evaluate a scheduler algorithm well before we deploy it in a production 
> cluster. Unfortunately, it is currently non-trivial to evaluate a scheduling 
> algorithm. Evaluating it in a real cluster is always time- and cost-consuming, 
> and it is also very hard to find a large-enough cluster. Hence, a simulator 
> which can predict how well a scheduler algorithm performs for some specific 
> workload would be quite useful.
> We want to