Hi Pig List,

After tinkering with the mapred.child.java.opts property's -Xmx200m setting and 
trying 512, 1024, and 2048 MB values (e.g. -Xmx512m), none of these seems to solve 
the problem.  Map tasks complete 100%, but Reduce tasks are killed, never 
proceeding past 98.88% complete.  I am making the config changes in hadoop-site.xml.

Can you provide guidance as to which parameters are relevant here and what 
settings to use?

Thanks for your help,

Avram



-----Original Message-----
From: Avram Aelony 
Sent: Thursday, March 19, 2009 2:35 PM
To: [email protected]
Subject: RE: low memory handler?


Thank you, Olga.
Will reference
http://hadoop.apache.org/core/docs/current/cluster_setup.html#Configuration+Files
and try to specify more memory.

Regards,
Avram



-----Original Message-----
From: Olga Natkovich [mailto:[email protected]] 
Sent: Thursday, March 19, 2009 2:21 PM
To: [email protected]
Subject: RE: low memory handler?

It looks like your tasks are configured to use 200 MB. This is usually
not sufficient for large data processing. In general, you need at least
500 MB; 1 GB is recommended, and if you have more memory on your machines,
configuring more can further help your query execution. The right amount of
course depends on how much memory your machines have and how many
map and reduce slots each one runs.
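As a rough sketch of the sizing arithmetic Olga describes (the RAM total, reserved amount, and slot counts below are hypothetical, purely to illustrate how a per-task heap value might be derived):

```python
# Hypothetical worker node: 8 GB RAM with 4 map slots and 2 reduce slots.
# Reserve some memory for the OS and the Hadoop daemons, then split the
# remainder evenly across the task slots to choose a per-child -Xmx value.
total_ram_mb = 8 * 1024
reserved_mb = 2 * 1024          # OS + TaskTracker/DataNode daemons (assumed)
slots = 4 + 2                   # map slots + reduce slots

per_task_mb = (total_ram_mb - reserved_mb) // slots
print(per_task_mb)              # -> 1024, i.e. roughly -Xmx1024m per task
```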

Olga

> -----Original Message-----
> From: Avram Aelony [mailto:[email protected]] 
> Sent: Thursday, March 19, 2009 2:14 PM
> To: [email protected]
> Subject: low memory handler?
> 
> Hello Pig List,
> 
> I am now taking my (tested) pig script that will produce 
> distinct counts and trying to apply it to real data.  I am 
> finding however, that though the map stage completes (100%), 
> the reduce stage hangs at 97.77% and then fails to produce output.
> 
> It appears that the syslog contains notices of "threshold 
> exceeded" before the ultimate failure...
> 
> 2009-03-19 10:54:10,525 INFO 
> org.apache.pig.impl.util.SpillableMemoryManager: low memory 
> handler called (Usage threshold exceeded) init = 
> 1441792(1408K) used = 131343896(128265K) committed = 
> 186449920(182080K) max = 186449920(182080K)
> 2009-03-19 10:54:18,150 INFO 
> org.apache.pig.impl.util.SpillableMemoryManager: low memory 
> handler called (Usage threshold exceeded) init = 
> 1441792(1408K) used = 131311248(128233K) committed = 
> 186449920(182080K) max = 186449920(182080K)
> 2009-03-19 10:54:25,833 INFO 
> org.apache.pig.impl.util.SpillableMemoryManager: low memory 
> handler called (Usage threshold exceeded) init = 
> 1441792(1408K) used = 133580568(130449K) committed = 
> 186449920(182080K) max = 186449920(182080K)
> 
> ... 
> 
> Does this mean that the Hadoop cluster requires tuning?
> 
> How can I avoid this memory error?
> 
> 
> 
> Regards,
> Avram
> 
> 
