Are you using a 32-bit JDK for your TaskTrackers?

If so, reduce the heap setting (-Xmx) in mapred.child.java.opts.
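
For example (illustrative value only; a 32-bit JVM typically cannot go much
above a 2-3 GB heap, and with your 10 map + 6 reduce slots the current 4096M
setting can ask for up to 16 x 4 GB of child heap on one node), something
like this might be a safer starting point:

<property>
 <name>mapred.child.java.opts</name>
 <value>-server -Xmx1024M -Djava.net.preferIPv4Stack=true</value>
</property>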

--
Arpit

On Oct 7, 2012, at 12:29 PM, Attila Csordas <attilacsor...@gmail.com> wrote:

> <property>
>  <name>mapred.tasktracker.map.tasks.maximum</name>
>  <value>10</value>
> </property>
>
> <property>
>  <name>mapred.tasktracker.reduce.tasks.maximum</name>
>  <value>6</value>
> </property>
>
> Cheers,
> Attila
>
> On Sun, Oct 7, 2012 at 6:34 AM, Harsh J <ha...@cloudera.com> wrote:
>> Hi,
>>
>> What is your # of slots per TaskTracker? Your ulimit seems pretty
>> high. I'd set it to 1.5x the heap initially, i.e., 6291456 (6 GB),
>> and try that.
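>>
>> For example (keeping your current 4096M heap: 1.5 x 4096 MB = 6144 MB,
>> i.e. 6291456 KB, since mapred.child.ulimit is specified in kilobytes):
>>
>> <property>
>>  <name>mapred.child.ulimit</name>
>>  <value>6291456</value>
>> </property>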
>>
>> On Sun, Oct 7, 2012 at 3:50 AM, Attila Csordas <attilacsor...@gmail.com> wrote:
>>> some details to this problem:
>>>
>>> 12/10/05 12:13:27 INFO mapred.JobClient:  map 0% reduce 0%
>>> 12/10/05 12:13:40 INFO mapred.JobClient: Task Id :
>>> attempt_201210051158_0001_m_000002_0, Status : FAILED
>>> java.lang.Throwable: Child Error
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>> Caused by: java.io.IOException: Task process exit with nonzero status of 134.
>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>
>>> attempt_201210051158_0001_m_000002_0: #
>>> attempt_201210051158_0001_m_000002_0: # There is insufficient memory for the Java Runtime Environment to continue.
>>> attempt_201210051158_0001_m_000002_0: # pthread_getattr_np
>>>
>>> In mapred-site.xml the following memory settings were set, after a
>>> couple of trials, to try to get rid of the problem:
>>>
>>> <property>
>>> <name>mapred.child.java.opts</name>
>>> <value>-server -Xmx4096M -Djava.net.preferIPv4Stack=true</value>
>>> </property>
>>>
>>> <property>
>>> <name>mapred.child.ulimit</name>
>>> <value>16777216</value>
>>> </property>
>>>
>>> Cheers,
>>> Attila
>>>
>>>
>>>
>>> On Fri, Oct 5, 2012 at 10:50 AM, Steve Lewis <lordjoe2...@gmail.com> wrote:
>>>> We get 'There is insufficient memory for the Java Runtime Environment to
>>>> continue.' any time we run any job, including the most trivial word count.
>>>> It is true that I am generating a jar for a larger job, but I am only
>>>> running a version of wordcount that worked well under 0.2.
>>>> Any bright ideas???
>>>> This is a new 1.0.3 installation and nothing is known to work.
>>>>
>>>> Steven M. Lewis PhD
>>>> 4221 105th Ave NE
>>>> Kirkland, WA 98033
>>>> cell 206-384-1340
>>>> skype lordjoe_com
>>
>>
>>
>> --
>> Harsh J
