[ https://issues.apache.org/jira/browse/MAPREDUCE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250991#comment-13250991 ]

Thomas Graves commented on MAPREDUCE-4003:
------------------------------------------

+1, looks good to me. It isn't perfect when JVM reuse is on; unfortunately there 
is no good way in the TaskLog code to tell whether JVM reuse is enabled, but this 
is still a big improvement.

In the case mentioned above with invalid JVM args, the client now displays the 
message below, and the same is visible on the web UI for the job setup task.

12/04/10 19:25:55 INFO mapred.JobClient: Task Id : 
attempt_201204101923_0001_m_000001_1, Status : FAILED
java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

attempt_201204101923_0001_m_000001_1: Invalid maximum heap size: -Xmx10g
attempt_201204101923_0001_m_000001_1: The specified size exceeds the maximum 
representable size.
attempt_201204101923_0001_m_000001_1: Could not create the Java virtual machine.
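(The "exceeds the maximum representable size" message is what a 32-bit HotSpot 
JVM prints when -Xmx is larger than its address space can represent; note the 
reporter's environment is i686. The arithmetic, as an illustrative sketch only:)

```python
# Illustrative arithmetic only: a 32-bit process (the reporter's
# environment is i686) has a 4 GiB address space, so a 10 GiB heap
# request cannot be represented.
requested_heap = 10 * 2**30   # -Xmx10g in bytes
address_space = 2**32         # 4 GiB limit of a 32-bit JVM
print(requested_heap > address_space)  # True: heap cannot fit
```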
                
> log.index (No such file or directory) AND Task process exit with nonzero 
> status of 126
> --------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4003
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4003
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: task-controller, tasktracker
>    Affects Versions: 0.20.205.0, 1.0.1
>         Environment: hadoop version -------Hadoop 0.20.2-cdh3u3
> uname -a: Linux xxxx 2.6.18-194.17.4.0.1.el5PAE #1 SMP Tue Oct 26 20:15:18 
> EDT 2010 i686 i686 i386 GNU/Linux
> core-site.xml:
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://xxxxx:8020</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/hadoop/tmp20/</value>
>   </property>
> </configuration>
> mapred-site.xml:
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>192.168.9.60:9001</value>
>   </property>
>   <property>
>     <name>mapred.local.dir</name>
>     <value>/var/tmp/mapred/local</value>
>   </property>
>   <property>
>     <name>mapred.system.dir</name>
>     <value>/mapred/system</value>
>   </property>
> </configuration>
>            Reporter: toughman
>            Assignee: Koji Noguchi
>         Attachments: mapreduce-4003-1.patch
>
>
> Hello, I have dwelled on this Hadoop (cdh3u3) problem for two days and have 
> tried every method I could find. The issue: when running the bundled 
> "wordcount" example, the TaskTracker log on one slave node shows these errors:
> 1. WARN org.apache.hadoop.mapred.DefaultTaskController: Task wrapper stderr: 
> bash: 
> /var/tmp/mapred/local/ttprivate/taskTracker/hdfs/jobcache/job_201203131751_0003/attempt_201203131751_0003_m_000006_0/taskjvm.sh:
>  Permission denied
> 2. WARN org.apache.hadoop.mapred.TaskRunner: 
> attempt_201203131751_0003_m_000006_0 : Child Error java.io.IOException: Task 
> process exit with nonzero status of 126.
> 3. WARN org.apache.hadoop.mapred.TaskLog: Failed to retrieve stdout log for 
> task: attempt_201203131751_0003_m_000003_0 java.io.FileNotFoundException: 
> /usr/lib/hadoop-0.20/logs/userlogs/job_201203131751_0003/attempt_201203131751_0003_m_000003_0/log.index
>  (No such file or directory)
> I could not find similar issues online, only a few posts that seemed somewhat 
> relevant, which suggested:
> A. the ulimit of the hadoop user, but my ulimit is set large enough for this 
> bundled example;
> B. the memory used by the JVM, but my JVM uses only -Xmx200m, far too small to 
> exceed my machine's limit;
> C. the permissions of mapred.local.dir and the logs dir, which I set with 
> "chmod 777";
> D. a full disk, but there is enough space for Hadoop in my log directory and 
> mapred.local.dir.
> Thanks, everyone; I am really at my wit's end after spending days on this. I 
> would appreciate any light!
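
For what it's worth, exit status 126 is the shell's conventional "found but not 
executable" code, which matches the "Permission denied" on taskjvm.sh in error 1 
above. A minimal reproduction sketch (hypothetical temp-file paths, not the 
Hadoop code itself):

```python
import os
import stat
import subprocess
import tempfile

# Sketch: reproduce exit status 126 by asking a shell to run a
# script that exists but lacks the execute bit, mirroring the
# "bash: .../taskjvm.sh: Permission denied" failure above.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/bash\necho hello\n")
    script = f.name
os.chmod(script, stat.S_IRUSR | stat.S_IWUSR)  # rw-------, no +x

result = subprocess.run(["/bin/sh", "-c", script],
                        capture_output=True, text=True)
print(result.returncode)         # 126: found but not executable
print(result.stderr.strip())     # the shell's "Permission denied" message
os.unlink(script)
```

So a fix that surfaces the child's stderr (as this patch does for the log.index 
path) points straight at the permission problem instead of the bare 126.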

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira