[jira] [Created] (YARN-225) Proxy Link in RM UI throws NPE in Secure mode

2012-11-19 Thread Devaraj K (JIRA)
Devaraj K created YARN-225:
--

 Summary: Proxy Link in RM UI throws NPE in Secure mode
 Key: YARN-225
 URL: https://issues.apache.org/jira/browse/YARN-225
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.1-alpha
Reporter: Devaraj K
Assignee: Devaraj K


{code}
java.lang.NullPointerException
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:241)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
    at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:975)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}



[jira] [Commented] (YARN-204) test coverage for org.apache.hadoop.tools

2012-11-19 Thread Aleksey Gorshkov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500168#comment-13500168
 ] 

Aleksey Gorshkov commented on YARN-204:
---

YARN-204-trunk-a.patch - patch for trunk.
YARN-204-branch-2-a.patch - patch for branch-2.
YARN-204-branch-0.23-a.patch - patch for branch-0.23.
OK, I've fixed testMapCount in TestCopyFiles.
The patches are ready for commit.

 test coverage for org.apache.hadoop.tools
 -

 Key: YARN-204
 URL: https://issues.apache.org/jira/browse/YARN-204
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications
Reporter: Aleksey Gorshkov
Assignee: Aleksey Gorshkov
 Attachments: YARN-204-branch-0.23-a.patch, 
 YARN-204-branch-0.23.patch, YARN-204-branch-2-a.patch, 
 YARN-204-branch-2.patch, YARN-204-trunk-a.patch, YARN-204-trunk.patch


 Added some tests for org.apache.hadoop.tools



[jira] [Commented] (YARN-72) NM should handle cleaning up containers when it shuts down ( and kill containers from an earlier instance when it comes back up after an unclean shutdown )

2012-11-19 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-72?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500195#comment-13500195
 ] 

Tom White commented on YARN-72:
---

Sandy, this looks like a good start, hooking in the code for container cleanup. I would 
focus on the cleanup-on-shutdown part in this patch, and tackle cleanup on startup in 
YARN-73.

As Bikas mentioned, there needs to be a timeout on waiting for the containers to shut 
down. The shutdown process waits up to yarn.nodemanager.process-kill-wait.ms for the PID 
to appear, then yarn.nodemanager.sleep-delay-before-sigkill.ms before sending a SIGKILL 
(after a SIGTERM) if the process hasn't died - see ContainerLaunch#cleanupContainer. 
Waiting a little longer than the sum of these durations would be sufficient (a rough 
sketch follows below).

Regarding testing, you could have a test like the one in 
TestContainerLaunch#testDelayedKill to test that containers are correctly 
cleaned up after stopping a NM.
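
For illustration, a minimal sketch of deriving the shutdown wait from the sum of those 
two settings (the class name, constants, and default values here are assumptions for the 
sketch, not taken from the patch):

{code:java}
// Sketch only: derive an NM shutdown wait slightly longer than the time
// ContainerLaunch#cleanupContainer may need (wait for the PID to appear,
// send SIGTERM, then delay before SIGKILL). Names/defaults are illustrative.
import org.apache.hadoop.conf.Configuration;

public class ShutdownWaitSketch {
  // Hypothetical defaults; the real defaults live in yarn-default.xml.
  private static final long DEFAULT_PROCESS_KILL_WAIT_MS = 2000;
  private static final long DEFAULT_SLEEP_BEFORE_SIGKILL_MS = 250;
  private static final long EXTRA_SLACK_MS = 1000;

  static long shutdownWaitMs(Configuration conf) {
    long pidWait = conf.getLong(
        "yarn.nodemanager.process-kill-wait.ms", DEFAULT_PROCESS_KILL_WAIT_MS);
    long sigkillDelay = conf.getLong(
        "yarn.nodemanager.sleep-delay-before-sigkill.ms", DEFAULT_SLEEP_BEFORE_SIGKILL_MS);
    // Wait a little longer than the sum of the two, as suggested above.
    return pidWait + sigkillDelay + EXTRA_SLACK_MS;
  }
}
{code}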

 NM should handle cleaning up containers when it shuts down ( and kill 
 containers from an earlier instance when it comes back up after an unclean 
 shutdown )
 ---

 Key: YARN-72
 URL: https://issues.apache.org/jira/browse/YARN-72
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Hitesh Shah
Assignee: Sandy Ryza
 Attachments: YARN-72.patch


 Ideally, when the NM gets a shutdown signal it should wait a limited amount of time for 
 existing containers to complete, and (if we pick an aggressive approach) kill the 
 containers after this time interval. 
 For NMs which come up after an unclean shutdown, the NM should look through its 
 directories for existing container.pids and try to kill any existing containers 
 matching the pids found. 



[jira] [Commented] (YARN-18) Make locality in YARN's container assignment and task scheduling pluggable for other deployment topology

2012-11-19 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500260#comment-13500260
 ] 

Thomas Graves commented on YARN-18:
---

I kicked the pre-commit build.

 Make locality in YARN's container assignment and task scheduling pluggable 
 for other deployment topology
 -

 Key: YARN-18
 URL: https://issues.apache.org/jira/browse/YARN-18
 Project: Hadoop YARN
  Issue Type: New Feature
Affects Versions: 2.0.3-alpha
Reporter: Junping Du
Assignee: Junping Du
  Labels: features
 Attachments: 
 HADOOP-8474-ContainerAssignmentTaskScheduling-pluggable.patch, 
 MAPREDUCE-4309.patch, MAPREDUCE-4309-v2.patch, MAPREDUCE-4309-v3.patch, 
 MAPREDUCE-4309-v4.patch, MAPREDUCE-4309-v5.patch, MAPREDUCE-4309-v6.patch, 
 MAPREDUCE-4309-v7.patch, YARN-18.patch, YARN-18-v2.patch


 There are several classes in YARN's container assignment and task scheduling 
 algorithms that relate to data locality and were updated to give preference to 
 running a container at locality levels other than node-local and rack-local 
 (like nodegroup-local). This proposes to make these data structures/algorithms 
 pluggable, e.g. SchedulerNode, RMNodeImpl, etc. The inner class 
 ScheduledRequests was made a package-level class so it would be easier to 
 create a subclass, ScheduledRequestsWithNodeGroup.
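
For illustration, a sketch of why subclassing helps here - a subclass can slot a 
nodegroup-local attempt between node-local and rack-local (the class and method names 
below are invented for the sketch, not the actual YARN classes):

{code:java}
// Illustrative only: base logic tries node-local then rack-local; the subclass
// inserts an extra nodegroup-local attempt. Not the real ScheduledRequests code.
class ScheduledRequestsSketch {
  String assign(String host, String rack) {
    String r = tryNodeLocal(host);
    return (r != null) ? r : tryRackLocal(rack);
  }
  String tryNodeLocal(String host) { return null; }                 // no node-local match
  String tryRackLocal(String rack) { return "rack-local:" + rack; } // always succeeds here
}

class ScheduledRequestsWithNodeGroupSketch extends ScheduledRequestsSketch {
  @Override
  String assign(String host, String rack) {
    String r = tryNodeLocal(host);
    if (r == null) {
      r = tryNodeGroupLocal(nodeGroupOf(host));                     // new locality level
    }
    return (r != null) ? r : tryRackLocal(rack);
  }
  String tryNodeGroupLocal(String nodeGroup) { return null; }
  String nodeGroupOf(String host) { return "nodegroup-of-" + host; }
}
{code}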



[jira] [Updated] (YARN-162) nodemanager log aggregation has scaling issues with namenode

2012-11-19 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated YARN-162:
-

Assignee: Siddharth Seth  (was: Vinod Kumar Vavilapalli)

 nodemanager log aggregation has scaling issues with namenode
 

 Key: YARN-162
 URL: https://issues.apache.org/jira/browse/YARN-162
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3
Reporter: Nathan Roberts
Assignee: Siddharth Seth
Priority: Critical
 Attachments: YARN-162.txt, YARN-162_WIP.txt


 Log aggregation causes fd explosion on the namenode. On large clusters this 
 can exhaust FDs to the point where datanodes can't check-in.



[jira] [Commented] (YARN-18) Make locality in YARN's container assignment and task scheduling pluggable for other deployment topology

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-18?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500276#comment-13500276
 ] 

Hadoop QA commented on YARN-18:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554120/YARN-18-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapreduce.v2.app.TestAMInfos
  
org.apache.hadoop.mapreduce.jobhistory.TestJobHistoryEventHandler
  org.apache.hadoop.mapreduce.v2.app.TestRecovery

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/154//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/154//console

This message is automatically generated.

 Make locality in YARN's container assignment and task scheduling pluggable 
 for other deployment topology
 -

 Key: YARN-18
 URL: https://issues.apache.org/jira/browse/YARN-18
 Project: Hadoop YARN
  Issue Type: New Feature
Affects Versions: 2.0.3-alpha
Reporter: Junping Du
Assignee: Junping Du
  Labels: features
 Attachments: 
 HADOOP-8474-ContainerAssignmentTaskScheduling-pluggable.patch, 
 MAPREDUCE-4309.patch, MAPREDUCE-4309-v2.patch, MAPREDUCE-4309-v3.patch, 
 MAPREDUCE-4309-v4.patch, MAPREDUCE-4309-v5.patch, MAPREDUCE-4309-v6.patch, 
 MAPREDUCE-4309-v7.patch, YARN-18.patch, YARN-18-v2.patch


 There are several classes in YARN's container assignment and task scheduling 
 algorithms that relate to data locality and were updated to give preference to 
 running a container at locality levels other than node-local and rack-local 
 (like nodegroup-local). This proposes to make these data structures/algorithms 
 pluggable, e.g. SchedulerNode, RMNodeImpl, etc. The inner class 
 ScheduledRequests was made a package-level class so it would be easier to 
 create a subclass, ScheduledRequestsWithNodeGroup.



[jira] [Commented] (YARN-162) nodemanager log aggregation has scaling issues with namenode

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500286#comment-13500286
 ] 

Hadoop QA commented on YARN-162:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12553856/YARN-162.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/155//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/155//console

This message is automatically generated.

 nodemanager log aggregation has scaling issues with namenode
 

 Key: YARN-162
 URL: https://issues.apache.org/jira/browse/YARN-162
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3
Reporter: Nathan Roberts
Assignee: Siddharth Seth
Priority: Critical
 Attachments: YARN-162.txt, YARN-162_WIP.txt


 Log aggregation causes fd explosion on the namenode. On large clusters this 
 can exhaust FDs to the point where datanodes can't check-in.



[jira] [Commented] (YARN-162) nodemanager log aggregation has scaling issues with namenode

2012-11-19 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500300#comment-13500300
 ] 

Robert Joseph Evans commented on YARN-162:
--

Sid, I like the patch. I have a few minor comments:

# There are a few TODOs added to the code: {code}// TODO This is broken. Container ID for the AM may not be 1.{code}, {code}// TODO Should the app really fail if log aggregation fails ?{code} and {code}// TODO Send out an event to the app. Currently since aggregation failure{code}. I could not find an existing JIRA for the first one, so please file one for it. The other two seem to be related to one another. If you feel strongly that we should not fail an application because log aggregation will not work, then please file a separate JIRA for that; otherwise the TODOs should just be comments and not TODOs.
# I don't really like the name of the new config that was added. It exposes the internal implementation of how we throttle the applications. I would prefer to call it something like yarn.nodemanager.log-aggregation.max-concurrent-apps. But this is very minor.
# The new config was not added to yarn-default.xml.
# This is also very minor. Inside LogAggregationService.stopApp we are wrapping a Void callable inside another Void callable. I would prefer to return the original value instead of returning null (see the sketch after this comment).

With Jenkins' +1 I am OK with the change, but it is a large enough change that I am a bit 
nervous about pulling it into 0.23.5. If you are OK with this, I will pull in a modified 
YARN-219 that addresses your comments, and then we can pull this into trunk, branch-2, 
and branch-0.23 (0.23.6).
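
For point 4, a rough illustration of submitting the delegate callable directly rather 
than wrapping it (class and method shapes are assumptions for the sketch, not the actual 
LogAggregationService code):

{code:java}
// Sketch only: instead of wrapping a delegate Callable<Void> in another
// Callable<Void> that returns null, hand the delegate to the executor and
// return its Future directly.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class StopAppSketch {
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  // Wrapping variant: an extra layer that calls the delegate and returns null.
  Future<Void> stopAppWrapped(Callable<Void> aggregatorStop) {
    return pool.submit(() -> {
      aggregatorStop.call();
      return null;
    });
  }

  // Preferred variant: submit the delegate itself; its result is returned as-is.
  Future<Void> stopApp(Callable<Void> aggregatorStop) {
    return pool.submit(aggregatorStop);
  }
}
{code}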

 nodemanager log aggregation has scaling issues with namenode
 

 Key: YARN-162
 URL: https://issues.apache.org/jira/browse/YARN-162
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3
Reporter: Nathan Roberts
Assignee: Siddharth Seth
Priority: Critical
 Attachments: YARN-162.txt, YARN-162_WIP.txt


 Log aggregation causes fd explosion on the namenode. On large clusters this 
 can exhaust FDs to the point where datanodes can't check-in.



[jira] [Commented] (YARN-219) NM should aggregate logs when application finishes.

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500323#comment-13500323
 ] 

Hadoop QA commented on YARN-219:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554176/YARN-219.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/157//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/157//console

This message is automatically generated.

 NM should aggregate logs when application finishes.
 ---

 Key: YARN-219
 URL: https://issues.apache.org/jira/browse/YARN-219
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: YARN-219.txt, YARN-219.txt


 The NM should only aggregate logs when the application finishes.  This will 
 reduce the load on the NN, especially with respect to lease renewal.



[jira] [Commented] (YARN-219) NM should aggregate logs when application finishes.

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500332#comment-13500332
 ] 

Hadoop QA commented on YARN-219:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554179/YARN-219.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/158//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/158//console

This message is automatically generated.

 NM should aggregate logs when application finishes.
 ---

 Key: YARN-219
 URL: https://issues.apache.org/jira/browse/YARN-219
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: YARN-219.txt, YARN-219.txt, YARN-219.txt


 The NM should only aggregate logs when the application finishes.  This will 
 reduce the load on the NN, especially with respect to lease renewal.



[jira] [Commented] (YARN-219) NM should aggregate logs when application finishes.

2012-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500360#comment-13500360
 ] 

Hudson commented on YARN-219:
-

Integrated in Hadoop-trunk-Commit #3045 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3045/])
YARN-219. NM should aggregate logs when application finishes. (bobby) 
(Revision 1411289)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1411289
Files : 
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java


 NM should aggregate logs when application finishes.
 ---

 Key: YARN-219
 URL: https://issues.apache.org/jira/browse/YARN-219
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 0.23.5
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: YARN-219.txt, YARN-219.txt, YARN-219.txt


 The NM should only aggregate logs when the application finishes.  This will 
 reduce the load on the NN, especially with respect to lease renewal.



[jira] [Commented] (YARN-223) Change processTree interface to work better with native code

2012-11-19 Thread Radim Kolar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500432#comment-13500432
 ] 

Radim Kolar commented on YARN-223:
--

Problems with the old code: if the native-library implementation of the process tree 
holds system resources such as file handles, Java can create quite a lot of processTree 
objects that are not garbage collected, which temporarily leaks those system resources.

If you instead chain references to the original object, the system resources are 
allocated only once, but you build a long chain of objects that cannot be GC'd by Java 
until all of them are unreferenced at the end of the container's life.

I tried both and didn't like either. After inspecting the code that calls psTree, it 
turned out that simply updating the object in place is enough, because the calling code 
does not keep the old copy around while creating a new one.
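
To illustrate the difference (only getProcessTree/updateProcessTree come from the 
proposal; the other names in this sketch are assumptions):

{code:java}
// Sketch only: the old pattern hands back a (possibly new) tree object on every
// update, while the proposed pattern refreshes the same instance in place, so a
// native implementation can keep its handles in one long-lived object.
abstract class ProcessTreeSketch {
  // Old style: each call may return a new object.
  abstract ProcessTreeSketch getProcessTree();

  // Proposed style: refresh this instance; no new allocation.
  abstract void updateProcessTree();

  abstract long getCumulativeRssmem();
}

class MonitorLoopSketch {
  void monitor(ProcessTreeSketch tree) throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      tree.updateProcessTree();               // in-place refresh, same object
      long rss = tree.getCumulativeRssmem();  // read updated usage
      // ... enforce resource limits using rss ...
      Thread.sleep(3000);
    }
  }
}
{code}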

 Change processTree interface to work better with native code
 

 Key: YARN-223
 URL: https://issues.apache.org/jira/browse/YARN-223
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Radim Kolar
Priority: Critical
 Attachments: pstree-update.txt


 The problem is that on every update of the processTree a new object is required. This 
 is undesirable when working with a processTree implementation in native code.
 Replace ProcessTree.getProcessTree() with updateProcessTree(): no new object 
 allocation is needed, and it simplifies the application code a bit.



[jira] [Commented] (YARN-128) Resurrect RM Restart

2012-11-19 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500434#comment-13500434
 ] 

Tom White commented on YARN-128:


I had a quick look at the new patches and FileSystemRMStateStore and 
ZKRMStateStore seem to be missing default constructors, which StoreFactory 
needs. You might change the tests to use StoreFactory to construct the store 
instances to test this code path.
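
For context, a minimal sketch of why a reflection-based factory needs a default 
constructor (illustrative only; the config key and method shape are assumptions, not the 
actual StoreFactory code):

{code:java}
// Sketch only: a factory that instantiates a store class named in the
// configuration via reflection. ReflectionUtils.newInstance fails unless the
// chosen class has an accessible no-arg constructor.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

class StoreFactorySketch {
  static <T> T getStore(Configuration conf, Class<T> base, Class<? extends T> defaultImpl) {
    // Look up the store class by name, falling back to the default implementation.
    Class<? extends T> clazz =
        conf.getClass("yarn.resourcemanager.store.class", defaultImpl, base);
    // Throws at runtime if clazz lacks a default constructor.
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}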

 Resurrect RM Restart 
 -

 Key: YARN-128
 URL: https://issues.apache.org/jira/browse/YARN-128
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
Assignee: Bikas Saha
 Attachments: MR-4343.1.patch, restart-12-11-zkstore.patch, 
 restart-fs-store-11-17.patch, restart-zk-store-11-17.patch, 
 RM-recovery-initial-thoughts.txt, RMRestartPhase1.pdf, 
 YARN-128.full-code.3.patch, YARN-128.full-code-4.patch, 
 YARN-128.new-code-added.3.patch, YARN-128.new-code-added-4.patch, 
 YARN-128.old-code-removed.3.patch, YARN-128.old-code-removed.4.patch, 
 YARN-128.patch


 We should resurrect 'RM Restart' which we disabled sometime during the RM 
 refactor.



[jira] [Resolved] (YARN-97) nodemanager depends on /bin/bash

2012-11-19 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-97?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar resolved YARN-97.
-

Resolution: Not A Problem

 nodemanager depends on /bin/bash
 

 Key: YARN-97
 URL: https://issues.apache.org/jira/browse/YARN-97
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
 Environment: FreeBSD 8.2 / 64 bit
Reporter: Radim Kolar
  Labels: patch
 Attachments: bash-replace-by-sh.txt


 Currently the nodemanager depends on the bash shell. This should be well documented for 
 systems that do not have bash installed by default, such as FreeBSD. Because only basic 
 bash functionality is used, changing bash to /bin/sh would probably work well enough.
 I found 2 cases:
 1. DefaultContainerExecutor.java creates a file with /bin/bash hardcoded in 
 writeLocalWrapperScript (this needs bash at /bin/bash).
 2. yarn-hduser-nodemanager-ponto.amerinoc.com.log:2012-04-03 19:50:10,798 
 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: 
 launchContainer: [bash, -c, 
 /tmp/nm-local-dir/usercache/hduser/appcache/application_1333474251533_0002/container_1333474251533_0002_01_12/default_container_executor.sh]
 The created script is also launched by bash; any bash on the PATH works - on FreeBSD it 
 is /usr/local/bin/bash.
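
For illustration of case 1 only (the issue was resolved as Not A Problem), a minimal 
sketch of writing the wrapper script with a /bin/sh shebang instead of hardcoding 
/bin/bash; the method shape is an assumption, not the real 
DefaultContainerExecutor#writeLocalWrapperScript signature:

{code:java}
// Sketch only: emit a wrapper script that targets /bin/sh rather than /bin/bash.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

class WrapperScriptSketch {
  static void writeLocalWrapperScript(Path script, String launchCommand) throws IOException {
    try (PrintWriter pw = new PrintWriter(Files.newBufferedWriter(script))) {
      pw.println("#!/bin/sh");              // portable shebang instead of /bin/bash
      pw.println("exec " + launchCommand);  // hand off to the container launch script
    }
  }
}
{code}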
