[ https://issues.apache.org/jira/browse/HADOOP-6035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12722982#action_12722982 ]

rahul k singh commented on HADOOP-6035:
---------------------------------------

2009-06-22 18:20:38,850 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2009-06-22 18:20:38,961 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.mapred.JobTracker$IllegalStateException: System has no default queue configured
        at org.apache.hadoop.mapred.CapacityTaskScheduler.start(CapacityTaskScheduler.java:1033)
        at org.apache.hadoop.mapred.JobTracker.offerService(JobTracker.java:1283)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:2791)


I can see the above exception in the log. In this and earlier versions of the
capacity scheduler we explicitly check for a queue named "default", and the
JobTracker fails to start without one.

Please either create an additional queue named "default" and add the
corresponding entries with proper settings to capacity-scheduler.xml, or rename
your existing queue "q1" to "default".

I hope this helps.
If you see further issues, please attach the logs and the relevant config files.



> jobtracker stops when namenode goes out of safemode running capacity scheduler
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-6035
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6035
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>         Environment: Fedora 10
>            Reporter: Anjali M
>            Priority: Minor
>         Attachments: capacity-scheduler.xml, 
> hadoop-hadoop-jobtracker-anjus.in.log, hadoop-site.xml
>
>
> I am facing a problem running the capacity scheduler in hadoop-0.20.0.
> The jobtracker lists the queues while the namenode is in safemode.
> Once the namenode goes out of safemode, the jobtracker stops working. On
> accessing the job queue details page it shows the following error.
> HTTP ERROR: 500
> INTERNAL_SERVER_ERROR
> RequestURI=/jobqueue_details.jsp
> Caused by:
> java.lang.NullPointerException
>        at org.apache.hadoop.mapred.JobQueuesManager.getRunningJobQueue(JobQueuesManager.java:156)
>        at org.apache.hadoop.mapred.CapacityTaskScheduler.getJobs(CapacityTaskScheduler.java:1495)
>        at org.apache.hadoop.mapred.jobqueue_005fdetails_jsp._jspService(jobqueue_005fdetails_jsp.java:64)
>        at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
>        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
>        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
>        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
>        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>        at org.mortbay.jetty.Server.handle(Server.java:324)
>        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
>        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
>        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
>        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
>        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
>        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
>        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
> Is it because some of the configuration in capacity-scheduler.xml is incorrect?
> I also tried forcing the namenode out of safemode with bin/hadoop
> dfsadmin, but it still does not work.
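
For reference, the manual safemode override mentioned in the last line of the
report is presumably the standard dfsadmin command:

bin/hadoop dfsadmin -safemode leave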

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
