Hi Geoff,
The heap size for your child processes is controlled by the
mapred.child.java.opts configuration parameter, which is separate from the
HADOOP_HEAPSIZE setting. HADOOP_HEAPSIZE controls how much heap the Hadoop
daemons themselves get, whereas mapred.child.java.opts controls how much heap
each of your individual task JVMs gets.
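
For example, you can set it per-job through the Configuration before you
submit. This is just a rough sketch using the 0.20 API; the job name and the
-Xmx value are placeholders, so size the heap to what your algorithm actually
needs:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    Configuration conf = new Configuration();
    // Each task JVM is launched with these options; independent of HADOOP_HEAPSIZE
    conf.set("mapred.child.java.opts", "-Xmx2000m");
    Job job = new Job(conf, "my-job");   // then set mapper/reducer etc. as usual

You can also set the same property cluster-wide in mapred-site.xml if every
job needs the larger heap. Keep in mind that a TaskTracker may run several
tasks concurrently, so make sure each node actually has that much memory free
per task slot.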

-Todd

On Mon, Oct 19, 2009 at 3:33 PM, Matrangola, Geoffrey <
[email protected]> wrote:

>
> Hello,
>
> I have a small library that I created that runs fine with a standard Java
> main (not on MapReduce) with plenty of heap (e.g. -Xmx2G).  My algorithm is
> written to take advantage of machines with large amounts (2+ GB) of RAM and
> uses the heap space to work efficiently. I changed HADOOP_HEAPSIZE to
> 2000 in hadoop-env.sh but still get ...
>
> 9/10/19 18:23:13 INFO mapred.JobClient: Task Id:
> attempt_200910191815_002_m_000000_2, Status : FAILED
> Error: Java heap space
>
> when I run it under MapReduce.  Does something else need to be
> configured?  I don't see anything in Configuration or Job that looks like it
> controls heap size.  Is there a hard or practical limit on heap size for a
> map job?
>
> Thanks,
> Geoff
>
