[ https://issues.apache.org/jira/browse/HADOOP-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13141148#comment-13141148 ]

Tom Wilcox commented on HADOOP-6958:
------------------------------------

Bit more log in case it helps:

2011-11-01 13:09:55,298 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:09:58,299 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:04,301 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:09,349 WARN org.apache.hadoop.mapred.TaskTracker: 
getMapOutput(attempt_201110312140_0003_m_000010_0,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
taskTracker/jobcache/job_201110312140_0003/attempt_201110312140_0003_m_000010_0/output/file.out.index
 in any of the configured local directories
        at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
        at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
        at 
org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2887)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
        at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
        at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:324)
        at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
        at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

2011-11-01 13:10:09,350 WARN org.apache.hadoop.mapred.TaskTracker: Unknown 
child with bad map output: attempt_201110312140_0003_m_000010_0. Ignored.
2011-11-01 13:10:09,354 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: 
src: 127.0.0.1:50060, dest: 127.0.0.1:38341, bytes: 0, op: MAPRED_SHUFFLE, 
cliID: attempt_201110312140_0003_m_000010_0
2011-11-01 13:10:09,354 WARN org.mortbay.log: /mapOutput: 
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
taskTracker/jobcache/job_201110312140_0003/attempt_201110312140_0003_m_000010_0/output/file.out.index
 in any of the configured local directories
2011-11-01 13:10:10,303 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:13,304 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:19,306 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:25,308 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:28,309 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:34,312 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:40,314 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:43,315 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:49,317 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:55,319 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:10:58,320 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:11:04,322 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:11:10,326 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:11:13,327 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:11:19,329 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:11:25,331 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) > 
2011-11-01 13:11:28,332 INFO org.apache.hadoop.mapred.TaskTracker: 
attempt_201110312140_0003_r_000000_1 0.051282056% reduce > copy (2 of 13 at 
0.00 MB/s) >
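
For reference, the getMapOutput WARN above originates in LocalDirAllocator.getLocalPathToRead, which probes each directory listed in mapred.local.dir for the requested relative path and raises DiskChecker$DiskErrorException when none of them contains it. A minimal sketch of that lookup, in case it helps reason about the failure (this is NOT the real Hadoop code; findInLocalDirs is a made-up helper, and a plain IOException stands in for DiskErrorException):

    import java.io.File;
    import java.io.IOException;

    // Simplified illustration of the lookup that fails in the stack trace above:
    // try the relative path under every configured mapred.local.dir entry and
    // fail if it is found in none of them.
    public class LocalDirLookupSketch {

        static File findInLocalDirs(String relativePath, String[] localDirs) throws IOException {
            for (String dir : localDirs) {
                File candidate = new File(dir, relativePath);
                if (candidate.exists()) {
                    return candidate; // first local dir that actually holds the file wins
                }
            }
            // Hadoop throws DiskChecker$DiskErrorException here; IOException keeps the sketch self-contained.
            throw new IOException("Could not find " + relativePath
                    + " in any of the configured local directories");
        }

        public static void main(String[] args) {
            // Single local dir, matching the mapred.local.dir value quoted below.
            String[] localDirs = { "/data/hadoop-tmp" };
            String relative = "taskTracker/jobcache/job_201110312140_0003/"
                    + "attempt_201110312140_0003_m_000010_0/output/file.out.index";
            try {
                System.out.println("Found: " + findInLocalDirs(relative, localDirs));
            } catch (IOException e) {
                System.out.println(e.getMessage()); // same wording as the WARN above
            }
        }
    }
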
                
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
> taskTracker/jobcache
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-6958
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6958
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.2
>         Environment: linux
> jdk1.6.0_20
> hadoop 0.20.2
>            Reporter: mazhiyong
>             Fix For: 0.20.2
>
>
> Hello,
>   I am using hadoop-0.20.2 in a single-server (pseudo-distributed) setup, and the 
> data is only about 800 MB.
>   The problem is that after Hadoop has been running for a while (more than an 
> hour), it stops working. Looking at the logs, I find the exception: "INFO 
> org.apache.hadoop.mapred.TaskTracker: 
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
> taskTracker/jobcache/job_201009161411_0368/attempt_201009161411_0368_m_000002_0/output/file.out
>  in any of the configured local directories"
>       I have searched many blogs and web pages, but I could neither understand why 
> this happens nor find a solution. What does this error message mean, 
> and how can I avoid it? Any suggestions?
>       This problem has had me stuck for a week already. Please share if you 
> know what could be causing it. Thanks in advance!
> Configuration File:
> <!--hadoop-site.xml-->        
>       <configuration>
>    <property>
>      <name>mapred.child.tmp</name>
>      <value>/data/hadoop-tmp</value>
>    </property>
>    <property>
>      <name>hadoop.tmp.dir</name>
>      <value>/data/hadoop-tmp</value>
>    </property>
>    <property>
>      <name>mapred.local.dir</name>
>      <value>/data/hadoop-tmp</value>
>    </property>
>  </configuration>
>       
> <!--core-site.xml-->
>       <configuration>
>    <property>
>     <name>fs.default.name</name>
>     <value>hdfs://10.0.0.8:8020</value>
>    </property>
>   </configuration>
>  
> <!--mapred-site.xml--> 
>   <configuration>
>    <property>
>      <name>mapred.job.tracker</name>
>      <value>10.0.0.8:8021</value>
>    </property>
>  </configuration>
>   
> <!--hdfs-site.xml-->
>   <configuration>
>    <property>
>     <name>dfs.name.dir</name>
>     <value>/data/name</value>
>    </property>
>    <property>
>     <name>dfs.data.dir</name>
>     <value>/data/data</value>  
>    </property>
>    <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>    </property>
>   </configuration>
> ERROR Logs:
> INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): 
> attempt_201009161411_0368_r_000000_0 task's state:UNASSIGNED
> INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : 
> attempt_201009161411_0368_r_000000_0
> INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free 
> slots : 2 and trying to launch attempt_201009161411_0368_r_000000_0
> INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: 
> jvm_201009161411_0368_r_1871094354
> INFO org.apache.hadoop.mapred.JvmManager: JVM Runner 
> jvm_201009161411_0368_r_1871094354 spawned.
> INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: 
> jvm_201009161411_0368_r_1871094354 given task: 
> attempt_201009161411_0368_r_000000_0
> INFO org.apache.hadoop.mapred.TaskTracker: Sent out 381650 bytes for reduce: 
> 0 from map: attempt_201009161411_0368_m_000000_0 given 381650/381646
> INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 10.0.0.8:50060, 
> dest: 10.0.0.8:58884, bytes: 381650, op: MAPRED_SHUFFLE, cliID: 
> attempt_201009161411_0368_m_000000_0
> INFO org.apache.hadoop.mapred.TaskTracker: Sent out 384812 bytes for reduce: 
> 0 from map: attempt_201009161411_0368_m_000001_0 given 384812/384808
> INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 10.0.0.8:50060, 
> dest: 10.0.0.8:58884, bytes: 384812, op: MAPRED_SHUFFLE, cliID: 
> attempt_201009161411_0368_m_000001_0
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_r_000000_0 0.16666667% reduce > copy (1 of 2 at 
> 0.06 MB/s) > 
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_r_000000_0 0.16666667% reduce > copy (1 of 2 at 
> 0.06 MB/s) > 
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_r_000000_0 0.16666667% reduce > copy (1 of 2 at 
> 0.06 MB/s) > 
> INFO org.apache.hadoop.mapred.TaskTracker: Task 
> attempt_201009161411_0368_r_000000_0 is in commit-pending, task 
> state:COMMIT_PENDING
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_r_000000_0 0.16666667% reduce > copy (1 of 2 at 
> 0.06 MB/s) > 
> INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for 
> attempt_201009161411_0368_r_000000_0
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_r_000000_0 1.0% reduce > reduce
> INFO org.apache.hadoop.mapred.TaskTracker: Task 
> attempt_201009161411_0368_r_000000_0 is done.
> INFO org.apache.hadoop.mapred.TaskTracker: reported output size for 
> attempt_201009161411_0368_r_000000_0  was 0
> INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 
> 2
> INFO org.apache.hadoop.mapred.JvmManager: JVM : 
> jvm_201009161411_0368_r_1871094354 exited. Number of tasks it ran: 1
> INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): 
> attempt_201009161411_0368_m_000002_0 task's state:UNASSIGNED
> INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : 
> attempt_201009161411_0368_m_000002_0
> INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: 
> attempt_201009161411_0368_r_000000_0
> INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free 
> slots : 2 and trying to launch attempt_201009161411_0368_m_000002_0
> INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: 
> attempt_201009161411_0368_r_000000_0
> INFO org.apache.hadoop.mapred.TaskRunner: 
> attempt_201009161411_0368_r_000000_0 done; removing files.
> INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: 
> jvm_201009161411_0368_m_2026394863
> INFO org.apache.hadoop.mapred.JvmManager: JVM Runner 
> jvm_201009161411_0368_m_2026394863 spawned.
> INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: 
> jvm_201009161411_0368_m_2026394863 given task: 
> attempt_201009161411_0368_m_000002_0
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_m_000002_0 0.0% 
> INFO org.apache.hadoop.mapred.TaskTracker: 
> attempt_201009161411_0368_m_000002_0 0.0% cleanup
> INFO org.apache.hadoop.mapred.TaskTracker: Task 
> attempt_201009161411_0368_m_000002_0 is done.
> INFO org.apache.hadoop.mapred.TaskTracker: reported output size for 
> attempt_201009161411_0368_m_000002_0  was 0
> INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 
> 2
> INFO org.apache.hadoop.mapred.JvmManager: JVM : 
> jvm_201009161411_0368_m_2026394863 exited. Number of tasks it ran: 1
> INFO org.apache.hadoop.mapred.TaskTracker: 
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
> taskTracker/jobcache/job_201009161411_0368/attempt_201009161411_0368_m_000002_0/output/file.out
>  in any of the configured local directories
> INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: 
> job_201009161411_0368
> INFO org.apache.hadoop.mapred.TaskRunner: 
> attempt_201009161411_0368_m_000000_0 done; removing files.
> INFO org.apache.hadoop.mapred.TaskRunner: 
> attempt_201009161411_0368_m_000002_0 done; removing files.
> INFO org.apache.hadoop.mapred.IndexCache: Map ID 
> attempt_201009161411_0368_m_000002_0 not found in cache
> INFO org.apache.hadoop.mapred.TaskRunner: 
> attempt_201009161411_0368_m_000001_0 done; removing files.
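
One observation on the hadoop-site.xml quoted above: mapred.child.tmp, hadoop.tmp.dir and mapred.local.dir all point at the same single directory, /data/hadoop-tmp. Not a confirmed fix for this issue, but as a general configuration sketch it can help to keep mapred.local.dir separate from hadoop.tmp.dir (and optionally spread it over several directories, since the property accepts a comma-separated list); that makes it easier to rule out the map-output directory being cleaned up or filling the shared disk. The /data/mapred-local-* paths below are placeholders:

    <!-- Sketch only, not a confirmed fix: placeholder paths. -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop-tmp</value>
      </property>
      <property>
        <name>mapred.local.dir</name>
        <value>/data/mapred-local-1,/data/mapred-local-2</value>
      </property>
    </configuration>
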

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
