Did this job ever run successfully for you? With a 200m heap size?

Seems like your maps are failing. Can you paste your settings for the following:
 - io.sort.factor
 - io.sort.mb
 - mapreduce.map.sort.spill.percent
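For anyone following along, these map-side sort settings normally live in mapred-site.xml. A sketch showing the stock Hadoop defaults (these are the out-of-the-box values, not the settings from this cluster):

```xml
<!-- mapred-site.xml: map-side sort/spill tuning (stock defaults shown;
     replace with the values actually set on your cluster) -->
<property>
  <name>io.sort.factor</name>
  <value>10</value>   <!-- number of streams merged at once during sort -->
</property>
<property>
  <name>io.sort.mb</name>
  <value>100</value>  <!-- in-memory sort buffer in MB; allocated inside the map task heap -->
</property>
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value> <!-- buffer fill ratio that triggers a spill to disk -->
</property>
```

Note that io.sort.mb comes out of the map task's heap, so a 200m heap combined with an oversized io.sort.mb can by itself produce the Java heap space error described below.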

Thanks,
+Vinod

On Oct 21, 2012, at 6:18 AM, Subash D'Souza wrote:

> I'm running CDH 4 on a 4-node cluster, each node with 96 GB of RAM. Up until last 
> week the cluster was running fine, until there was an error in the namenode log 
> file and I had to reformat it and put the data back. 
> 
> Now when I run Hive on YARN, I keep getting a Java heap space error. Based on 
> the research I did, I upped my mapred.child.java.opts first from 200m to 
> 400m, then to 800m, and I still have the same issue. It seems to fail near the 100% 
> mapper mark.
> 
> I checked the log files, and the only thing they show is the Java heap 
> space error. Nothing more.
> 
> Any help would be appreciated.
> 
> Thanks
> Subash
> 
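For reference, the heap bump described above is typically applied in mapred-site.xml. A sketch of that override (the -Xmx800m value mirrors the largest size tried in this thread, not a recommendation):

```xml
<!-- mapred-site.xml: JVM options passed to each child map/reduce task
     (sketch; -Xmx800m matches the largest heap size tried above) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx800m</value>
</property>
```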
