[ 
https://issues.apache.org/jira/browse/PIG-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheolsoo Park updated PIG-4043:
-------------------------------

    Description: 
With Hadoop 2.4, I often see the Pig client fail with an OOM error when a job 
has many tasks (~100K) and the client runs with a 1GB heap.

The heap dump (attached) shows that TaskReport[] occupies about 80% of heap 
space at the time of OOM.

The problem is that JobClient.getMap/ReduceTaskReports() returns an array of 
TaskReport objects, which can be huge if the number of tasks is large.
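
The allocation pattern behind the OOM can be sketched without Hadoop on the 
classpath. The stub class below is only a stand-in for 
org.apache.hadoop.mapred.TaskReport, and the field contents and sizes are 
illustrative assumptions, not measured values:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for org.apache.hadoop.mapred.TaskReport; real
// reports carry task IDs, counters, and per-attempt diagnostics.
class TaskReportStub {
    final String taskId;
    final String[] diagnostics;

    TaskReportStub(int i) {
        this.taskId = "task_m_" + i;
        this.diagnostics = new String[] { "diagnostics for task " + i };
    }
}

public class TaskReportArrayDemo {
    // Mirrors the shape of JobClient.getMapTaskReports(): the entire report
    // set is materialized as a single array, so client heap usage grows
    // linearly with the number of tasks.
    static TaskReportStub[] getTaskReports(int numTasks) {
        List<TaskReportStub> reports = new ArrayList<>(numTasks);
        for (int i = 0; i < numTasks; i++) {
            reports.add(new TaskReportStub(i));
        }
        return reports.toArray(new TaskReportStub[0]);
    }

    public static void main(String[] args) {
        // ~100K tasks, as in the report; with real TaskReport payloads an
        // array like this can dominate a 1GB heap.
        TaskReportStub[] reports = getTaskReports(100_000);
        System.out.println(reports.length);
    }
}
```

Because the whole array must fit in memory at once, the client cannot stream 
or discard reports as it consumes them, which is why heap usage scales with 
task count rather than staying constant.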

  was:
With Hadoop 2.4, I often see the Pig client fail with an OOM error when a job 
has many tasks (~100K) and the client runs with a 1GB heap.

The heap dump (attached) shows that TaskReport[] occupies more than 90% of heap 
space at the time of OOM.

The problem is that JobClient.getMap/ReduceTaskReports() returns an array of 
TaskReport objects, which can be huge if the number of tasks is large.


> JobClient.getMap/ReduceTaskReports() causes OOM for jobs with a large number 
> of tasks
> -------------------------------------------------------------------------------------
>
>                 Key: PIG-4043
>                 URL: https://issues.apache.org/jira/browse/PIG-4043
>             Project: Pig
>          Issue Type: Bug
>            Reporter: Cheolsoo Park
>            Assignee: Cheolsoo Park
>             Fix For: 0.14.0
>
>         Attachments: heapdump.png
>
>
> With Hadoop 2.4, I often see the Pig client fail with an OOM error when a 
> job has many tasks (~100K) and the client runs with a 1GB heap.
> The heap dump (attached) shows that TaskReport[] occupies about 80% of heap 
> space at the time of OOM.
> The problem is that JobClient.getMap/ReduceTaskReports() returns an array of 
> TaskReport objects, which can be huge if the number of tasks is large.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
