If the load on the reducers is the issue, increasing the number of reducers 
might help. As a side note, exit statuses above 128 usually mean the process 
was killed by a signal (137 = 128 + 9, i.e. SIGKILL), which is consistent with 
your out-of-memory suspicion. I'm fairly new to Pig myself, but I thought I'd 
share a suggestion.
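
For example, you can raise reducer parallelism script-wide with the standard 
`default_parallel` property, or per operator with the `PARALLEL` clause. A 
rough sketch (the relation names and the value 30 are just placeholders to 
adapt to your data and cluster size):

```pig
-- Set a script-wide default number of reducers.
SET default_parallel 30;

-- Or override it for a single heavy operation:
grouped = GROUP events BY user_id PARALLEL 30;
```

Spreading the same data over more reducers lowers the memory footprint of 
each one, which may be enough to stop the tasks being killed.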

Pankaj
On Jun 18, 2012, at 8:30 AM, James Newhaven wrote:

> Hi,
> 
> I am executing a Pig script on Elastic MapReduce. It runs fine over one
> day's worth of data, but when I increase my dataset size to 30 days, the
> reducers have started failing with the following error:
> 
> java.lang.Throwable: Child Error
> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of
> 137.
> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
> 
> I can't find any status code lookup tables in the documentation, so I can't
> be certain what the root cause of the error is.
> 
> I suspect it is an out-of-memory problem on the reducer nodes, but I can't
> be certain. Can anyone help?
> 
> Thanks,
> James
