Arun C Murthy wrote:
>
>
> On Feb 24, 2009, at 4:03 PM, bzheng wrote:
>>
>
>> 2009-02-23 14:27:50,902 INFO org.apache.hadoop.mapred.TaskTracker:
>> java.lang.OutOfMemoryError: Java heap space
>>
>
> That tells us that your TaskTracker is running out of memory, not
> your reduce tasks.
>
> I think you are hitting http://issues.apache.org/jira/browse/HADOOP-4906.
>
> What version of hadoop are you running?
>
> Arun
>
I'm using 0.18.2. We figured gz may not be the root problem when we ran a
big job not involving any gz files: after about 1.5 hours, we got the same
out-of-memory error. One interesting thing, though: if we do use gz files,
the out-of-memory error occurs within a few minutes.
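
In case it is useful, the two settings we are double-checking on our side
(just a guess, and HADOOP-4906 may well still be the real cause; the values
below are only examples) are the daemon heap in conf/hadoop-env.sh, which
covers the TaskTracker, and the child-task heap in hadoop-site.xml:

  # conf/hadoop-env.sh -- heap size (MB) for the Hadoop daemons,
  # including the TaskTracker that logged the OutOfMemoryError
  export HADOOP_HEAPSIZE=2000

  <!-- conf/hadoop-site.xml -- -Xmx for each spawned map/reduce task JVM -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>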