I cranked those settings up in an attempt to solve the heap issues. Just to
verify, I restored the defaults and cycled both the dfs and mapred daemons.
Still getting the same error.
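
For reference, here is roughly what the restored defaults look like. This is
a sketch assuming a Hadoop 0.20.x-era install: HADOOP_HEAPSIZE=1000 is the
stock value noted in the comment quoted below, and -Xmx200m is the child-JVM
heap shipped in mapred-default.xml:

  # hadoop-env.sh -- per-daemon heap, in MB
  export HADOOP_HEAPSIZE=1000

  <!-- mapred-site.xml -- per-task child JVM heap -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx200m</value>
  </property>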


On 11/13/11 6:34 PM, "Eric Fiala" <e...@fiala.ca> wrote:

> Hoot, these are big numbers - some thoughts
> 1) does your machine have ~1TB to spare for each Java child process (each
> mapper + each reducer)?  mapred.child.java.opts / -Xmx1048576m
> 2) does each of your daemons need / have 10GB? HADOOP_HEAPSIZE=10000
> 
> hth
> EF
>>>>> # The maximum amount of heap to use, in MB. Default is 1000.
>>>>>  export HADOOP_HEAPSIZE=10000
>>>>> <property>
>>>>> <name>mapred.child.java.opts</name>
>>>>> <value>-Xmx1048576m</value>
>>>>> </property>
>>>>> 
> 
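
For scale: -Xmx takes a unit suffix, so -Xmx1048576m requests 1048576 MB for
every task JVM:

  1048576 MB / 1024 = 1024 GB, i.e. about 1 TB per child

and since mapred.child.java.opts applies to each concurrent map and reduce
task, a node running several slots would be asked for a multiple of that. It
may be that -Xmx1024m (1 GB) was the value actually intended.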
