Hi all,
When I run the pi Hadoop sample I get this error:
10/03/31 15:46:13 WARN mapred.JobClient: Error reading task outputhttp://
h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stdout
10/03/31 15:46:13 WARN mapred.JobClient: Error reading task outputhttp://
h04.ctinfra.ufpr.br:50060/tasklog?plaintext=true&taskid=attempt_201003311545_0001_r_000002_0&filter=stderr
10/03/31 15:46:20 INFO mapred.JobClient: Task Id :
attempt_201003311545_0001_m_000006_1, Status : FAILED
java.io.IOException: Task process exit with nonzero status of 134.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
Maybe it's because the datanode can't create more threads.
ram...@lcpad:~/hadoop-0.20.2$ cat
logs/userlogs/attempt_201003311457_0001_r_000001_2/stdout
#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: Cannot create GC thread. Out of system
resources.
#
# Internal Error (gcTaskThread.cpp:38), pid=28840, tid=140010745776400
# Error: Cannot create GC thread. Out of system resources.
#
# JRE version: 6.0_17-b04
# Java VM: Java HotSpot(TM) 64-Bit Server VM (14.3-b01 mixed mode
linux-amd64 )
# An error report file with more information is saved as:
#
/var-host/tmp/hadoop-ramiro/mapred/local/taskTracker/jobcache/job_201003311457_0001/attempt_201003311457_0001_r_000001_2/work/hs_err_pid28840.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
I configured the limits below, but I'm still getting the same error.
<property>
  <name>fs.inmemory.size.mb</name>
  <value>100</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx128M</value>
</property>
Do you know which limit I should configure to fix it?
Thanks in advance,
Edson Ramiro