[jira] [Updated] (YARN-145) Add a Web UI to the fair share scheduler

2012-10-11 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated YARN-145:


Attachment: YARN-145.patch

 Add a Web UI to the fair share scheduler
 

 Key: YARN-145
 URL: https://issues.apache.org/jira/browse/YARN-145
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza
 Attachments: YARN-145.patch


 The fair scheduler had a UI in MR1.  Port the capacity scheduler web UI and 
 modify it appropriately for the fair share scheduler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-146) Add unit tests for computing fair share in the fair scheduler

2012-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473988#comment-13473988
 ] 

Hadoop QA commented on YARN-146:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548589/YARN-146-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/85//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/85//console

This message is automatically generated.

 Add unit tests for computing fair share in the fair scheduler
 -

 Key: YARN-146
 URL: https://issues.apache.org/jira/browse/YARN-146
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.2-alpha

 Attachments: YARN-146-1.patch, YARN-146.patch


 MR1 had TestComputeFairShares.  This should go into the YARN fair scheduler.



[jira] [Commented] (YARN-145) Add a Web UI to the fair share scheduler

2012-10-11 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474035#comment-13474035
 ] 

Tom White commented on YARN-145:


+1. I ran a single-node cluster and monitored the fair scheduler page while 
running a job, and it looked correct.

 Add a Web UI to the fair share scheduler
 

 Key: YARN-145
 URL: https://issues.apache.org/jira/browse/YARN-145
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza
 Attachments: YARN-145.patch


 The fair scheduler had a UI in MR1.  Port the capacity scheduler web UI and 
 modify it appropriately for the fair share scheduler.



[jira] [Commented] (YARN-146) Add unit tests for computing fair share in the fair scheduler

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474043#comment-13474043
 ] 

Hudson commented on YARN-146:
-

Integrated in Hadoop-Common-trunk-Commit #2845 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2845/])
YARN-146. Add unit tests for computing fair share in the fair scheduler. 
Contributed by Sandy Ryza. (Revision 1396972)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396972
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestComputeFairShares.java


 Add unit tests for computing fair share in the fair scheduler
 -

 Key: YARN-146
 URL: https://issues.apache.org/jira/browse/YARN-146
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: YARN-146-1.patch, YARN-146.patch


 MR1 had TestComputeFairShares.  This should go into the YARN fair scheduler.



[jira] [Commented] (YARN-146) Add unit tests for computing fair share in the fair scheduler

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474120#comment-13474120
 ] 

Hudson commented on YARN-146:
-

Integrated in Hadoop-Hdfs-trunk #1192 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1192/])
YARN-146. Add unit tests for computing fair share in the fair scheduler. 
Contributed by Sandy Ryza. (Revision 1396972)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396972
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestComputeFairShares.java


 Add unit tests for computing fair share in the fair scheduler
 -

 Key: YARN-146
 URL: https://issues.apache.org/jira/browse/YARN-146
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: YARN-146-1.patch, YARN-146.patch


 MR1 had TestComputeFairShares.  This should go into the YARN fair scheduler.



[jira] [Commented] (YARN-146) Add unit tests for computing fair share in the fair scheduler

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474136#comment-13474136
 ] 

Hudson commented on YARN-146:
-

Integrated in Hadoop-Mapreduce-trunk #1223 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1223/])
YARN-146. Add unit tests for computing fair share in the fair scheduler. 
Contributed by Sandy Ryza. (Revision 1396972)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396972
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestComputeFairShares.java


 Add unit tests for computing fair share in the fair scheduler
 -

 Key: YARN-146
 URL: https://issues.apache.org/jira/browse/YARN-146
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Affects Versions: 2.0.2-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
 Fix For: 2.0.3-alpha

 Attachments: YARN-146-1.patch, YARN-146.patch


 MR1 had TestComputeFairShares.  This should go into the YARN fair scheduler.



[jira] [Created] (YARN-155) TestAppManager fails on jdk7

2012-10-11 Thread Thomas Graves (JIRA)
Thomas Graves created YARN-155:
--

 Summary: TestAppManager fails on jdk7
 Key: YARN-155
 URL: https://issues.apache.org/jira/browse/YARN-155
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Thomas Graves


Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.851 sec  
FAILURE!
testRMAppSubmit(org.apache.hadoop.yarn.server.resourcemanager.TestAppManager)  
Time elapsed: 0.017 sec   FAILURE!
junit.framework.AssertionFailedError: app event type is wrong before 
expected:<KILL> but was:<APP_REJECTED>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestAppManager.setupDispatcher(TestAppManager.java:329)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestAppManager.testRMAppSubmit(TestAppManager.java:354)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)


I can reproduce the failure on jdk6 if I move testRMAppSubmit to the bottom 
of the file, so my initial hunch is that this is due to test-method ordering.

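The order dependence described above can be sketched in miniature. Under JDK 7, reflection no longer returns test methods in source order, so a test that relies on state left behind by an earlier method can start failing when it happens to run first. The class, method, and event names below are illustrative stand-ins, not the real TestAppManager code; the usual fix, shown here, is to reset shared state before every test.

```java
import java.util.ArrayList;
import java.util.List;

public class OrderDependentTests {

    // Shared mutable state, analogous to a dispatcher reused across tests.
    static final List<String> events = new ArrayList<>();

    // Hypothetical test that leaves an event behind instead of draining it.
    static void testSubmitRejected() {
        events.add("APP_REJECTED");
    }

    // Hypothetical test that only passes when it sees a clean dispatcher.
    static boolean testExpectsCleanState() {
        return events.isEmpty();
    }

    // Runs both tests in the given order; optionally resets shared state
    // between them, which makes the outcome order-independent.
    static boolean run(boolean cleanFirst, boolean resetBetween) {
        events.clear();
        if (cleanFirst) {
            boolean ok = testExpectsCleanState();
            if (resetBetween) events.clear();
            testSubmitRejected();
            return ok;
        } else {
            testSubmitRejected();
            if (resetBetween) events.clear();
            return testExpectsCleanState();
        }
    }

    public static void main(String[] args) {
        System.out.println(run(true, false));  // true: passes in source order
        System.out.println(run(false, false)); // false: fails when reordered
        System.out.println(run(false, true));  // true: reset makes order moot
    }
}
```

Moving testRMAppSubmit to the bottom of the file on jdk6, as described above, is exactly the second scenario: the same methods, run in the other order.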


[jira] [Updated] (YARN-155) TestAppManager intermittently fails with jdk7

2012-10-11 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated YARN-155:
---

Affects Version/s: 2.0.3-alpha
   3.0.0

 TestAppManager intermittently fails with jdk7
 -

 Key: YARN-155
 URL: https://issues.apache.org/jira/browse/YARN-155
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 0.23.3, 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
  Labels: java7

 Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.851 sec  
 FAILURE!
 testRMAppSubmit(org.apache.hadoop.yarn.server.resourcemanager.TestAppManager) 
  Time elapsed: 0.017 sec   FAILURE!
 junit.framework.AssertionFailedError: app event type is wrong before 
 expected:<KILL> but was:<APP_REJECTED>
 at junit.framework.Assert.fail(Assert.java:47)
 at junit.framework.Assert.failNotEquals(Assert.java:283)
 at junit.framework.Assert.assertEquals(Assert.java:64)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.TestAppManager.setupDispatcher(TestAppManager.java:329)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.TestAppManager.testRMAppSubmit(TestAppManager.java:354)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 I can reproduce the failure on jdk6 if I move testRMAppSubmit to the 
 bottom of the file, so my initial hunch is that this is due to test-method ordering.



[jira] [Updated] (YARN-153) PaaS on YARN: a YARN application to demonstrate that YARN can be used as a PaaS

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-153:
---

Fix Version/s: (was: 3.0.0)
   2.0.3-alpha

 PaaS on YARN: a YARN application to demonstrate that YARN can be used as a 
 PaaS
 

 Key: YARN-153
 URL: https://issues.apache.org/jira/browse/YARN-153
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Jaigak Song
Assignee: Jaigak Song
 Fix For: 2.0.3-alpha

 Attachments: HADOOPasPAAS_Architecture.pdf, MAPREDUCE-4393.patch, 
 MAPREDUCE-4393.patch, MAPREDUCE-4393.patch, MAPREDUCE4393.patch, 
 MAPREDUCE4393.patch

   Original Estimate: 336h
  Time Spent: 336h
  Remaining Estimate: 0h

 This application demonstrates that YARN can be used for non-MapReduce 
 applications. As Hadoop has already been widely adopted and deployed, and its 
 deployment will only continue to grow, we saw good potential for it to be 
 used as a PaaS.  
 I have implemented a proof of concept to demonstrate that YARN can be used as 
 a PaaS (Platform as a Service). I have done a gap analysis against VMware's 
 Cloud Foundry and tried to achieve as many PaaS functionalities as possible 
 on YARN.
 I'd like to check in this POC as a YARN example application.



[jira] [Updated] (YARN-2) Enhance CS to schedule accounting for both memory and cpu cores

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated YARN-2:
-

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 Enhance CS to schedule accounting for both memory and cpu cores
 ---

 Key: YARN-2
 URL: https://issues.apache.org/jira/browse/YARN-2
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: capacityscheduler, scheduler
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Fix For: 2.0.3-alpha

 Attachments: MAPREDUCE-4327.patch, MAPREDUCE-4327.patch, 
 MAPREDUCE-4327.patch, MAPREDUCE-4327-v2.patch, MAPREDUCE-4327-v3.patch, 
 MAPREDUCE-4327-v4.patch, MAPREDUCE-4327-v5.patch, YARN-2-help.patch, 
 YARN-2.patch, YARN-2.patch


 With YARN being a general purpose system, it would be useful for several 
 applications (MPI et al) to specify not just memory but also CPU (cores) for 
 their resource requirements. Thus, it would be useful for the 
 CapacityScheduler to account for both.

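What multi-resource accounting means for a scheduler can be sketched in a few lines: a request fits on a node only if it fits in *every* dimension. The `Resource` class below is an illustrative stand-in, not the actual YARN API.

```java
public class MultiResourceFit {

    // Illustrative two-dimensional resource: memory plus CPU cores.
    static final class Resource {
        final int memoryMb;
        final int vcores;

        Resource(int memoryMb, int vcores) {
            this.memoryMb = memoryMb;
            this.vcores = vcores;
        }

        // A request fits only if it fits in EVERY dimension; memory-only
        // accounting would wrongly admit a CPU-heavy request.
        boolean fitsIn(Resource available) {
            return memoryMb <= available.memoryMb && vcores <= available.vcores;
        }
    }

    public static void main(String[] args) {
        Resource node = new Resource(8192, 4);
        System.out.println(new Resource(4096, 2).fitsIn(node)); // true
        System.out.println(new Resource(1024, 8).fitsIn(node)); // false: too many cores
    }
}
```

The second request is small in memory but oversubscribes CPU, which is precisely the case a memory-only scheduler cannot see.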


[jira] [Closed] (YARN-79) Calling YarnClientImpl.close throws Exception

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-79?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-79.
-


 Calling YarnClientImpl.close throws Exception
 -

 Key: YARN-79
 URL: https://issues.apache.org/jira/browse/YARN-79
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.0.0-alpha
Reporter: Bikas Saha
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.2-alpha

 Attachments: YARN-79-20120904.txt


 The following exception is thrown
 ===
 *org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy - is 
 not Closeable or does not provide closeable invocation handler class 
 org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl*
   *at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:624)*
   *at org.hadoop.yarn.client.YarnClientImpl.stop(YarnClientImpl.java:102)*
   at 
 org.apache.hadoop.yarn.applications.unmanagedamlauncher.UnmanagedAMLauncher.run(UnmanagedAMLauncher.java:336)
   at 
 org.apache.hadoop.yarn.applications.unmanagedamlauncher.TestUnmanagedAMLauncher.testDSShell(TestUnmanagedAMLauncher.java:156)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
   at 
 org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
   at 
 org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
   at 
 org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
   at 
 org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
 ===



[jira] [Closed] (YARN-31) TestDelegationTokenRenewer fails on jdk7

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-31.
-


 TestDelegationTokenRenewer fails on jdk7
 

 Key: YARN-31
 URL: https://issues.apache.org/jira/browse/YARN-31
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.2-alpha, 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
  Labels: java7
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: YARN-31.patch


 TestDelegationTokenRenewer fails when run with jdk7.  
 With JDK7, test methods run in an undefined order. The test expects that 
 testDTRenewal runs first, but it no longer does.



[jira] [Closed] (YARN-108) FSDownload can create cache directories with the wrong permissions

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-108.
--


 FSDownload can create cache directories with the wrong permissions
 --

 Key: YARN-108
 URL: https://issues.apache.org/jira/browse/YARN-108
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 2.0.2-alpha, 0.23.4

 Attachments: YARN-108.patch


 When the cluster is configured with a restrictive umask, e.g.: 
 {{fs.permissions.umask-mode=0077}}, the nodemanager can end up creating 
 directory entries in the public cache with the wrong permissions.  The 
 permissions can end up where only the nodemanager user can access files in 
 the public cache, preventing jobs from running properly.

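The general fix pattern for this class of bug can be sketched as follows (POSIX only; names are illustrative, not the actual FSDownload code). Permissions requested at directory-creation time are masked by the process umask, since mkdir applies mode & ~umask, so an explicit chmod after creation is needed to guarantee the public cache stays world-readable.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PublicCacheDirs {

    // Creates a directory, then forces the desired permissions with an
    // explicit chmod, which (unlike the mode passed to mkdir) is not
    // subject to the process umask.
    static Set<PosixFilePermission> createWorldReadable(Path dir) throws IOException {
        Files.createDirectories(dir);
        Set<PosixFilePermission> wanted = PosixFilePermissions.fromString("rwxr-xr-x");
        Files.setPosixFilePermissions(dir, wanted); // chmod: ignores umask
        return Files.getPosixFilePermissions(dir);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cache-demo").resolve("filecache");
        System.out.println(PosixFilePermissions.toString(createWorldReadable(dir)));
        // prints "rwxr-xr-x" regardless of the process umask
    }
}
```

With fs.permissions.umask-mode=0077, creation alone would yield rwx------; the explicit setPosixFilePermissions call is what makes the directory readable by job users.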


[jira] [Closed] (YARN-14) Symlinks to peer distributed cache files no longer work

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-14?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-14.
-


 Symlinks to peer distributed cache files no longer work
 ---

 Key: YARN-14
 URL: https://issues.apache.org/jira/browse/YARN-14
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: MAPREDUCE-4514.patch, YARN-14.patch


 Trying to create a symlink to another file that is specified for the 
 distributed cache will fail to create the link.  For example:
 hadoop jar ... -files x,y,x#z
 will localize the files x and y as x and y, but the z symlink for x will not 
 be created.  This is a regression from 1.x behavior.



[jira] [Closed] (YARN-63) RMNodeImpl is missing valid transitions from the UNHEALTHY state

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-63?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-63.
-


 RMNodeImpl is missing valid transitions from the UNHEALTHY state
 

 Key: YARN-63
 URL: https://issues.apache.org/jira/browse/YARN-63
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: YARN-63-branch-0.23.patch, YARN-63.patch


 The ResourceManager isn't properly handling nodes that have been marked 
 UNHEALTHY when they are lost or are decommissioned.

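The class of bug fixed here can be sketched with a tiny transition table; the state and event names below are simplified stand-ins for RMNodeImpl's real ones. Each state must declare a transition for every event it can legally receive, and an UNHEALTHY node can still expire (be lost) or be decommissioned, so omitting those entries makes the state machine reject valid events.

```java
import java.util.EnumMap;
import java.util.Map;

public class NodeStateMachine {

    enum State { RUNNING, UNHEALTHY, LOST, DECOMMISSIONED }
    enum Event { BECAME_HEALTHY, BECAME_UNHEALTHY, EXPIRE, DECOMMISSION }

    static final Map<State, Map<Event, State>> TABLE = new EnumMap<>(State.class);
    static {
        Map<Event, State> running = new EnumMap<>(Event.class);
        running.put(Event.BECAME_UNHEALTHY, State.UNHEALTHY);
        running.put(Event.EXPIRE, State.LOST);
        running.put(Event.DECOMMISSION, State.DECOMMISSIONED);
        TABLE.put(State.RUNNING, running);

        Map<Event, State> unhealthy = new EnumMap<>(Event.class);
        unhealthy.put(Event.BECAME_HEALTHY, State.RUNNING);
        // The transitions this class of bug omits: an unhealthy node can
        // still be lost or decommissioned.
        unhealthy.put(Event.EXPIRE, State.LOST);
        unhealthy.put(Event.DECOMMISSION, State.DECOMMISSIONED);
        TABLE.put(State.UNHEALTHY, unhealthy);
    }

    // Looks up the next state; an undeclared (state, event) pair is an
    // invalid-transition error, which is how the missing entries surface.
    static State transition(State s, Event e) {
        State next = TABLE.getOrDefault(s, Map.of()).get(e);
        if (next == null) {
            throw new IllegalStateException("Invalid event " + e + " in state " + s);
        }
        return next;
    }

    public static void main(String[] args) {
        System.out.println(transition(State.UNHEALTHY, Event.EXPIRE));       // LOST
        System.out.println(transition(State.UNHEALTHY, Event.DECOMMISSION)); // DECOMMISSIONED
    }
}
```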


[jira] [Closed] (YARN-12) Several Findbugs issues with new FairScheduler in YARN

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-12?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-12.
-


 Several Findbugs issues with new FairScheduler in YARN
 --

 Key: YARN-12
 URL: https://issues.apache.org/jira/browse/YARN-12
 Project: Hadoop YARN
  Issue Type: Bug
  Components: scheduler
Affects Versions: 2.0.0-alpha
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 2.0.2-alpha

 Attachments: MAPREDUCE-4452.patch, MAPREDUCE-4452-v1.patch, 
 MAPREDUCE-4452-v2.patch, MAPREDUCE-4452-v3.patch, YARN-12.patch


 The FairScheduler was recently added to YARN. In a recent PreCommit test run 
 for MAPREDUCE-4309, Findbugs reported several issues related to the 
 FairScheduler:
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairSchedulerEventLog.shutdown()
  might ignore java.lang.Exception
 Inconsistent synchronization of 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairSchedulerEventLog.logDisabled;
  locked 50% of time
 Inconsistent synchronization of 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.queueMaxAppsDefault;
  locked 50% of time
 Inconsistent synchronization of 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueueManager.userMaxAppsDefault;
  locked 50% of time
 The details are in: 
 https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/2612//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html#DE_MIGHT_IGNORE
  

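What "inconsistent synchronization ... locked 50% of time" means can be shown in a minimal illustrative form (the names are stand-ins, not the actual FairSchedulerEventLog code): a field written under the instance lock on one path but read without it on another. Declaring the field volatile, or synchronizing every access, resolves the report.

```java
public class EventLogSketch {

    // Written under the instance lock but also read without it; Findbugs
    // flags such fields as inconsistently synchronized. Declaring the
    // field volatile makes the unlocked reads safe.
    private volatile boolean logDisabled = true;

    synchronized void init(boolean enabled) {
        logDisabled = !enabled;
    }

    boolean isDisabled() {   // unlocked read: safe because volatile
        return logDisabled;
    }

    public static void main(String[] args) {
        EventLogSketch log = new EventLogSketch();
        System.out.println(log.isDisabled()); // true before init
        log.init(true);
        System.out.println(log.isDisabled()); // false after enabling
    }
}
```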


[jira] [Closed] (YARN-22) Using URI for yarn.nodemanager log dirs fails

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-22?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-22.
-


 Using URI for yarn.nodemanager log dirs fails
 -

 Key: YARN-22
 URL: https://issues.apache.org/jira/browse/YARN-22
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Eli Collins
Assignee: Mayank Bansal
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: MAPREDUCE-4466-trunk-v1.patch, 
 MAPREDUCE-4466-trunk-v2.patch, MAPREDUCE-4466-trunk-v3.patch, 
 MAPREDUCE-4466-trunk-v4.patch, YARN-22-trunk-v5.patch


 If I use URIs (eg file:///home/eli/hadoop/dirs) for yarn.nodemanager.log-dirs 
 or yarn.nodemanager.remote-app-log-dir the container log servlet fails with 
 an NPE (works if I remove the file scheme). Using a URI for 
 yarn.nodemanager.local-dirs works.



[jira] [Closed] (YARN-88) DefaultContainerExecutor can fail to set proper permissions

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-88?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-88.
-


 DefaultContainerExecutor can fail to set proper permissions
 ---

 Key: YARN-88
 URL: https://issues.apache.org/jira/browse/YARN-88
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.0.2-alpha, 0.23.4

 Attachments: YARN-88.patch, YARN-88.patch


 {{DefaultContainerExecutor}} can fail to set the proper permissions on its 
 local directories if the cluster has been configured with a restrictive 
 umask, e.g.: fs.permissions.umask-mode=0077.  The configured umask ends up 
 defeating the permissions requested by {{DefaultContainerExecutor}} when it 
 creates directories.



[jira] [Closed] (YARN-10) dist-shell shouldn't have a (test) dependency on hadoop-mapreduce-client-core

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-10?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-10.
-


 dist-shell shouldn't have a (test) dependency on hadoop-mapreduce-client-core
 -

 Key: YARN-10
 URL: https://issues.apache.org/jira/browse/YARN-10
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.2-alpha
Reporter: Arun C Murthy
Assignee: Hitesh Shah
 Fix For: 2.0.2-alpha

 Attachments: YARN-10.1.patch


 dist-shell shouldn't have a (test) dependency on 
 hadoop-mapreduce-client-core, this should be removed.



[jira] [Closed] (YARN-83) Change package of YarnClient to include apache

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-83?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-83.
-


 Change package of YarnClient to include apache
 --

 Key: YARN-83
 URL: https://issues.apache.org/jira/browse/YARN-83
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 2.0.2-alpha

 Attachments: YARN-83.1.patch, YARN-83.2.patch, 
 YARN-83.3.combined.patch, YARN-83.3.combined.patch, YARN-83.3.MR.patch, 
 YARN-83.3.YARN.patch, YARN-83.3.YARN.patch


 Currently it's org.hadoop.* instead of org.apache.hadoop.*.



[jira] [Closed] (YARN-13) Merge of yarn reorg into branch-2 copied trunk tree

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-13?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-13.
-


 Merge of yarn reorg into branch-2 copied trunk tree
 ---

 Key: YARN-13
 URL: https://issues.apache.org/jira/browse/YARN-13
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: yarn-13.txt, yarn-13.txt


 When the move of yarn from inside MR to the project root was merged into 
 branch-2, it seems like the trunk code base was actually copied into the 
 branch-2 branch, instead of a parallel move occurring. So, the poms in 
 branch-2 show the version as 3.0.0-SNAPSHOT instead of a 2.x snapshot 
 version. This is breaking the branch-2 build.



[jira] [Closed] (YARN-36) branch-2.1.0-alpha doesn't build

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-36?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-36.
-


 branch-2.1.0-alpha doesn't build
 

 Key: YARN-36
 URL: https://issues.apache.org/jira/browse/YARN-36
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Radim Kolar
Priority: Blocker
 Fix For: 2.0.2-alpha

 Attachments: missing-versions.txt


 branch-2.1.0-alpha doesn't build due to the following error. Per YARN-1 I updated 
 the mvn version to 2.1.0-SNAPSHOT; before I hit this issue it didn't 
 compile due to the bogus version. 
 {noformat}
 hadoop-branch-2.1.0-alpha $ mvn compile
 [INFO] Scanning for projects...
 [ERROR] The build could not read 1 project - [Help 1]
 [ERROR]   
 [ERROR]   The project org.apache.hadoop:hadoop-yarn-project:2.1.0-SNAPSHOT 
 (/home/eli/src/hadoop-branch-2.1.0-alpha/hadoop-yarn-project/pom.xml) has 1 
 error
 [ERROR] 'dependencies.dependency.version' for org.hsqldb:hsqldb:jar is 
 missing. @ line 160, column 17
 {noformat}
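 The general shape of the fix for this class of error is to give the dependency an 
 explicit version, or to manage it in the parent pom's dependencyManagement. A 
 hedged sketch of what the missing element at pom.xml line 160 calls for; the 
 version number below is illustrative only, not the one the actual patch used:

```xml
<!-- pom.xml, dependencies section: supply the missing <version> element.
     2.0.0 is a placeholder; use whatever the parent pom standardizes on. -->
<dependency>
  <groupId>org.hsqldb</groupId>
  <artifactId>hsqldb</artifactId>
  <version>2.0.0</version>
</dependency>
```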



[jira] [Closed] (YARN-80) Support delay scheduling for node locality in MR2's capacity scheduler

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-80?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-80.
-


 Support delay scheduling for node locality in MR2's capacity scheduler
 --

 Key: YARN-80
 URL: https://issues.apache.org/jira/browse/YARN-80
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Reporter: Todd Lipcon
Assignee: Arun C Murthy
 Fix For: 2.0.2-alpha

 Attachments: YARN-80.patch, YARN-80.patch


 The capacity scheduler in MR2 doesn't support delay scheduling for achieving 
 node-level locality. So, jobs exhibit poor data locality even if they have 
 good rack locality. Especially on clusters where disk throughput is much 
 better than network capacity, this hurts overall job performance. We should 
 optionally support node-level delay scheduling heuristics similar to what the 
 fair scheduler implements in MR1.
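 The requested heuristic can be sketched in a few lines. This is a toy 
 illustration of the MR1 fair-scheduler idea, not the CapacityScheduler's actual 
 API; the method name and threshold are made up for the example.

```java
// Toy sketch of node-level delay scheduling (names/threshold are hypothetical).
// An app skips off-node offers until it has waited long enough, then relaxes
// its locality requirement instead of starving.
public final class DelaySchedulingSketch {

    /** Accept a data-local offer immediately; otherwise accept only after
     *  the app has already skipped maxSkips offers while waiting. */
    public static boolean acceptOffer(boolean nodeLocal, int skippedOffers, int maxSkips) {
        if (nodeLocal) {
            return true;                     // data is on this node: take it
        }
        return skippedOffers >= maxSkips;    // delay exhausted: settle for off-node
    }

    public static void main(String[] args) {
        System.out.println(acceptOffer(true, 0, 3));   // local offer: accept
        System.out.println(acceptOffer(false, 1, 3));  // keep waiting
        System.out.println(acceptOffer(false, 3, 3));  // waited long enough
    }
}
```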



[jira] [Closed] (YARN-68) NodeManager will refuse to shutdown indefinitely due to container log aggregation

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-68?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-68.
-


 NodeManager will refuse to shutdown indefinitely due to container log 
 aggregation
 -

 Key: YARN-68
 URL: https://issues.apache.org/jira/browse/YARN-68
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3
 Environment: QE
Reporter: patrick white
Assignee: Daryn Sharp
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: YARN-68-1.patch, YARN-68.patch


 The nodemanager is able to get into a state where 
 containermanager.logaggregation.AppLogAggregatorImpl will apparently wait
 indefinitely for log aggregation to complete for an application, even if that 
 application has abnormally terminated and is no longer present. 
 Observed behavior is that an attempt to stop the nodemanager daemon 
 returns but has no effect; the NM log continually displays messages similar 
 to this:
 [Thread-1]2012-08-21 17:44:07,581 INFO
 org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl:
 Waiting for aggregation to complete for application_1345221477405_2733
 The only recovery we found to work was to 'kill -9' the NM process.
 What exactly causes the NM to enter this state is unclear, but we see the 
 behavior reliably when the NM has run a task that failed. For example, when 
 debugging Oozie distcp actions with a failing distcp map task, the NM that 
 ran the container enters this state and a shutdown of that NM never 
 completes; 'never' in this case meant waiting 2 hours before killing the 
 nodemanager process.
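 One way out of this failure mode is a bounded wait rather than an open-ended 
 one. The sketch below is illustrative only; isDone and the timeout are 
 hypothetical stand-ins for AppLogAggregatorImpl's real internals.

```java
// Sketch: poll for aggregation completion, but give up after a deadline so
// shutdown can proceed instead of blocking forever.
import java.util.function.BooleanSupplier;

public final class BoundedAggregationWait {

    /** Poll isDone until it returns true or timeoutMs elapses.
     *  Returns true if aggregation finished, false if we gave up. */
    public static boolean waitForAggregation(BooleanSupplier isDone, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!isDone.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;   // timed out: let shutdown proceed anyway
            }
            Thread.sleep(50L);  // back off briefly between polls
        }
        return true;
    }
}
```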



[jira] [Closed] (YARN-42) Node Manager throws NPE on startup

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-42?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-42.
-


 Node Manager throws NPE on startup
 --

 Key: YARN-42
 URL: https://issues.apache.org/jira/browse/YARN-42
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.0-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 2.0.2-alpha, 0.23.4

 Attachments: MAPREDUCE-4254.patch, YARN-42.txt


 NM throws NPE on startup if it doesn't have permissions on the NM local dirs
 {code:xml}
 2012-05-14 16:32:13,468 FATAL 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
 NodeManager
 org.apache.hadoop.yarn.YarnException: Failed to initialize LocalizationService
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.init(ResourceLocalizationService.java:202)
   at 
 org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.init(ContainerManagerImpl.java:183)
   at 
 org.apache.hadoop.yarn.service.CompositeService.init(CompositeService.java:58)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.init(NodeManager.java:166)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:268)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:284)
 Caused by: java.io.IOException: mkdir of /mrv2/tmp/nm-local-dir/usercache 
 failed
   at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:907)
   at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
   at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
   at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
   at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
   at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
   at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.init(ResourceLocalizationService.java:188)
   ... 6 more
 2012-05-14 16:32:13,472 INFO org.apache.hadoop.yarn.service.CompositeService: 
 Error stopping 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler
 java.lang.NullPointerException
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler.stop(NonAggregatingLogHandler.java:82)
   at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
   at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.stop(ContainerManagerImpl.java:266)
   at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)
   at 
 org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)
   at 
 org.apache.hadoop.yarn.server.nodemanager.NodeManager.stop(NodeManager.java:182)
   at 
 org.apache.hadoop.yarn.service.CompositeService$CompositeServiceShutdownHook.run(CompositeService.java:122)
   at 
 org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
 {code}
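 The second stack trace shows the underlying pattern: init() aborted before a 
 field was created, and the shutdown hook's stop() then dereferenced it. A 
 minimal sketch of the defensive fix; the class and field names here are 
 hypothetical simplifications of NonAggregatingLogHandler.

```java
// Sketch: stop() must tolerate partially-initialized state. If init() threw
// before 'sched' was created, the shutdown hook still calls stop(), so the
// field may legitimately be null here.
import java.util.concurrent.ScheduledThreadPoolExecutor;

public class LogHandlerSketch {
    private ScheduledThreadPoolExecutor sched;   // created in init(); may be null

    public void init() {
        sched = new ScheduledThreadPoolExecutor(1);
    }

    public void stop() {
        if (sched != null) {      // guard against init() having failed early
            sched.shutdownNow();
        }
    }
}
```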



[jira] [Closed] (YARN-1) Move YARN out of hadoop-mapreduce

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-1.



 Move YARN out of hadoop-mapreduce
 -

 Key: YARN-1
 URL: https://issues.apache.org/jira/browse/YARN-1
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: YARN-1.patch, YARN-1.patch, YARN-1.sh


 Move YARN out of hadoop-mapreduce-project into hadoop-yarn-project in hadoop 
 trunk



[jira] [Closed] (YARN-106) Nodemanager needs to set permissions of local directories

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-106.
--


 Nodemanager needs to set permissions of local directories
 -

 Key: YARN-106
 URL: https://issues.apache.org/jira/browse/YARN-106
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.0.2-alpha, 0.23.4, 3.0.0

 Attachments: YARN-106.patch, YARN-106.patch, YARN-106.patch, 
 YARN-106.patch


 If the nodemanager process is running with a restrictive default umask (e.g.: 
 0077) then it will create its local directories with permissions that are too 
 restrictive to allow containers from other users to run.
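 One way to make directory creation immune to the process umask is to set the 
 permissions explicitly after creating the directory. The sketch below uses 
 java.nio to stay self-contained; the actual patch works through Hadoop's 
 FileContext/FsPermission APIs instead.

```java
// Sketch: create a dir, then force the desired mode regardless of the umask
// the daemon inherited. Files.setPosixFilePermissions is not umask-filtered.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public final class ExplicitPermsMkdir {
    public static Path mkdirWithPerms(Path dir, String rwx) throws Exception {
        Files.createDirectories(dir);                  // perms here obey the umask...
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(rwx);
        Files.setPosixFilePermissions(dir, perms);     // ...so override them explicitly
        return dir;
    }
}
```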



[jira] [Closed] (YARN-37) TestRMAppTransitions.testAppSubmittedKilled passes for the wrong reason

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-37.
-


 TestRMAppTransitions.testAppSubmittedKilled passes for the wrong reason
 ---

 Key: YARN-37
 URL: https://issues.apache.org/jira/browse/YARN-37
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Mayank Bansal
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: MAPREDUCE-4455-trunk-v1.patch, 
 MAPREDUCE-4455-trunk-v2.patch, MAPREDUCE-4455-YARN-trunk-v2.patch, 
 MAPREDUCE-4455-YARN-trunk-v3.patch, YARN-37-trunk-v4.patch


 TestRMAppTransitions#testAppSubmittedKilled causes an invalid event exception 
 but the test doesn't catch the error since the final app state is still 
 killed.  Killed for the wrong reason, but the final state is the same.



[jira] [Closed] (YARN-138) Improve default config values for YARN

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-138.
--


 Improve default config values for YARN
 --

 Key: YARN-138
 URL: https://issues.apache.org/jira/browse/YARN-138
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, scheduler
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
Assignee: Harsh J
  Labels: performance
 Fix For: 2.0.2-alpha, 0.23.4

 Attachments: MAPREDUCE-4316.patch, YARN-138_branch-0.23.patch, 
 YARN138.txt, YARN138.txt, YARN138.txt


 Currently some of our configs are way off, e.g. min-alloc is 128M while 
 max-alloc is 10240M.
 This leads to poor out-of-box performance as noticed by some users: 
 http://s.apache.org/avd
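 For reference, the allocation bounds in question are controlled by the 
 following yarn-site.xml properties. The values shown are illustrative, not 
 the defaults this patch settled on:

```xml
<!-- yarn-site.xml: example allocation bounds; values are illustrative. -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```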



[jira] [Closed] (YARN-39) RM-NM secret-keys should be randomly generated and rolled every so often

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-39?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-39.
-


 RM-NM secret-keys should be randomly generated and rolled every so often
 

 Key: YARN-39
 URL: https://issues.apache.org/jira/browse/YARN-39
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
Priority: Critical
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: MAPREDUCE-3943-20120416.txt, MR3943_branch-23.txt, 
 MR3943_branch-23.txt, MR3943_trunk.txt, MR3943_trunk.txt, MR3943.txt, 
 MR3943.txt, YARN-39-20120823.1.txt, YARN-39-20120823.1.txt, 
 YARN-39-20120823.txt, YARN-39-20120824.txt, YARN39_branch23.txt, YARN39.txt


  - RM should generate the master-key randomly
  - The master-key should roll every so often
  - NM should remember old expired keys so that already doled out 
 container-requests can be satisfied.
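 The three bullets amount to a key holder that rolls to a fresh random key 
 while remembering the previous one. A purely illustrative sketch; the real 
 RM/NM secret managers differ in structure and key distribution.

```java
// Sketch: roll a shared secret while keeping the previous key, so tokens
// doled out under the old key still validate after a roll.
import java.security.SecureRandom;
import java.util.Arrays;

public final class RollingSecret {
    private final SecureRandom rng = new SecureRandom();
    private byte[] current = newKey();
    private byte[] previous = null;

    private byte[] newKey() {
        byte[] k = new byte[32];      // 256-bit random master key
        rng.nextBytes(k);
        return k;
    }

    /** Roll: current key becomes previous, fresh random key becomes current. */
    public void roll() {
        previous = current;
        current = newKey();
    }

    /** A key is accepted if it matches the current or the previous key. */
    public boolean isValid(byte[] key) {
        return Arrays.equals(key, current)
            || (previous != null && Arrays.equals(key, previous));
    }

    public byte[] currentKey() { return current.clone(); }
}
```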



[jira] [Closed] (YARN-86) DEFAULT_YARN_APPLICATION_CLASSPATH needs to be fixed post YARN-1

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-86?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-86.
-


 DEFAULT_YARN_APPLICATION_CLASSPATH needs to be fixed post YARN-1
 

 Key: YARN-86
 URL: https://issues.apache.org/jira/browse/YARN-86
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Fix For: 2.0.2-alpha

 Attachments: YARN-86.patch


 DEFAULT_YARN_APPLICATION_CLASSPATH needs $YARN_HOME/share/hadoop/yarn/* and 
 $YARN_HOME/share/hadoop/yarn/lib/*



[jira] [Closed] (YARN-29) Add a yarn-client module

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-29?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-29.
-


 Add a yarn-client module
 

 Key: YARN-29
 URL: https://issues.apache.org/jira/browse/YARN-29
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.2-alpha

 Attachments: MAPREDUCE-4332-20120621.txt, 
 MAPREDUCE-4332-20120621-with-common-changes.txt, MAPREDUCE-4332-20120622.txt, 
 MAPREDUCE-4332-20120625.txt, YARN-29-20120822.txt, YARN-29-20120823.txt


 I see that we are duplicating (some) code for talking to the RM via the 
 client API. In this light, a yarn-client module will be useful so that 
 clients of all frameworks can use/extend it.
 And that same module can be the destination for all of YARN's command line 
 tools.



[jira] [Closed] (YARN-137) Change the default scheduler to the CapacityScheduler

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-137.
--


 Change the default scheduler to the CapacityScheduler
 -

 Key: YARN-137
 URL: https://issues.apache.org/jira/browse/YARN-137
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Affects Versions: 2.0.0-alpha
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Fix For: 2.0.2-alpha, 0.23.4

 Attachments: MR4335_2.txt, MR4335_3.txt, MR4335_4.txt, MR4335.txt, 
 YARN-137_branch23.patch, YARN-137.patch, YARN137.txt


 There are some bugs in the FifoScheduler at the moment: it doesn't distribute 
 tasks across nodes, and it has some headroom (available resource) issues.
 That's not the best experience for users trying out the 2.0 branch. The CS 
 with the default configuration of a single queue behaves the same as the 
 FifoScheduler and doesn't have these issues.
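 The scheduler default is a single property; selecting the CapacityScheduler 
 explicitly looks like this in yarn-site.xml (shown for illustration only — the 
 patch changes the built-in default rather than requiring this setting):

```xml
<!-- yarn-site.xml: select the CapacityScheduler explicitly. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```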



[jira] [Closed] (YARN-87) NM ResourceLocalizationService does not set permissions of local cache directories

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-87?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed YARN-87.
-


 NM ResourceLocalizationService does not set permissions of local cache 
 directories
 --

 Key: YARN-87
 URL: https://issues.apache.org/jira/browse/YARN-87
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 2.0.2-alpha, 0.23.3

 Attachments: YARN-87.patch


 {{ResourceLocalizationService}} creates a file cache and user cache directory 
 when it starts up but doesn't specify the permissions for them when they are 
 created.  If the cluster configs are set to limit the default permissions 
 (e.g.: fs.permissions.umask-mode=0077 instead of the default 0022), then the 
 cache directories are created with too-restrictive permissions and no jobs 
 are able to run.



[jira] [Created] (YARN-156) WebAppProxyServlet does not support http methods other than GET

2012-10-11 Thread Thomas Weise (JIRA)
Thomas Weise created YARN-156:
-

 Summary: WebAppProxyServlet does not support http methods other 
than GET
 Key: YARN-156
 URL: https://issues.apache.org/jira/browse/YARN-156
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.0-alpha
Reporter: Thomas Weise


It should support all HTTP methods so that applications can use it for full web 
service access to the master.
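The general shape of the fix is to mirror the client's HTTP method onto the 
outbound request instead of hard-coding GET. A self-contained sketch using plain 
HttpURLConnection; the real WebAppProxyServlet works through the servlet API 
(HttpServletRequest.getMethod()) rather than this hypothetical helper.

```java
// Sketch: preserve the client's HTTP method when proxying. Hard-coding GET
// (the reported bug) drops POST/PUT/DELETE semantics and request bodies.
import java.net.HttpURLConnection;
import java.net.URL;

public final class MethodPreservingProxy {

    /** Open an outbound connection that mirrors the client's method. */
    public static HttpURLConnection openMirrored(URL target, String clientMethod)
            throws Exception {
        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
        conn.setRequestMethod(clientMethod);   // GET, POST, PUT, DELETE, HEAD...
        if (!"GET".equals(clientMethod) && !"HEAD".equals(clientMethod)) {
            conn.setDoOutput(true);            // permit a request body
        }
        return conn;
    }
}
```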



[jira] [Created] (YARN-157) The option shell_command and shell_script have conflict

2012-10-11 Thread Li Ming (JIRA)
Li Ming created YARN-157:


 Summary: The option shell_command and shell_script have conflict
 Key: YARN-157
 URL: https://issues.apache.org/jira/browse/YARN-157
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Affects Versions: 2.0.1-alpha
Reporter: Li Ming


The DistributedShell has an option shell_script that lets the user specify a 
shell script to be executed in containers. The issue is that the shell_command 
option is mandatory, so if both options are set, every container execution ends 
with exitCode=1. This is because DistributedShell executes the shell_command and 
the shell_script together. For example, if shell_command is 'date', the final 
command executed in the container is date `ExecShellScript.sh`, so the date 
command treats the output of ExecShellScript.sh as its parameter, which causes 
an error.

To solve this, DistributedShell should not use the value of the shell_command 
option when the shell_script option is set, and the shell_command option should 
also not be mandatory.
