Hello,

I have a small library I wrote that runs fine under a standard Java main 
(not on MapReduce) when given plenty of heap (e.g. -Xmx2G).  The algorithm is 
written to take advantage of machines with large amounts (2+ GB) of RAM and 
uses the heap space to work efficiently.  I changed HADOOP_HEAPSIZE to 2000 in 
hadoop-env.sh, but I still get ...

9/10/19 18:23:13 INFO mapred.JobClient: Task Id: 
attempt_200910191815_002_m_000000_2, Status : FAILED
Error: Java heap space

when I run it under MapReduce.  Does something else need to be configured?  I 
don't see anything in Configuration or Job that looks like it controls heap 
size.  Is there a hard (or practical) limit on heap size for a map task?
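For reference, the edit I made in conf/hadoop-env.sh amounts to this one line:

    # The maximum amount of heap to use, in MB.
    export HADOOP_HEAPSIZE=2000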
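Is it maybe a per-job property rather than the daemon setting?  Something 
along these lines is what I was imagining -- the property name and -Xmx value 
are just my guesses, I haven't confirmed them anywhere:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class HeapConfigSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Guess: does a job property like this control the child task
            // JVM heap, rather than HADOOP_HEAPSIZE (which I assumed only
            // affects the daemons)?
            conf.set("mapred.child.java.opts", "-Xmx2048m");
            Job job = new Job(conf, "large-heap-job");
            // ... set mapper, input/output paths, etc. as usual ...
        }
    }

If that's roughly the right direction, a pointer to the actual property would 
be much appreciated.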

Thanks,
Geoff
