[ https://issues.apache.org/jira/browse/PIG-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151713#comment-14151713 ]

Mass Dosage commented on PIG-3411:
----------------------------------

If this is still an issue, you might want to try configuring the split 
metadata size like so:

mapreduce.job.split.metainfo.maxsize=-1

instead of the mapreduce.jobtracker.split.metainfo.maxsize property 
recommended above.
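
For example, the property can be set from within the Pig script itself. A 
minimal sketch, with hypothetical relation names, paths, and schemas (note 
that mapreduce.job.split.metainfo.maxsize is the Hadoop 2 / YARN property 
name; on MR1, mapreduce.jobtracker.split.metainfo.maxsize is read from the 
JobTracker's own configuration rather than from job.xml, which may explain 
why setting it per-job had no effect):

    -- Lift the 10000000-byte cap on split meta info (-1 = no limit).
    set mapreduce.job.split.metainfo.maxsize '-1';

    -- Hypothetical inputs, for illustration only.
    big   = LOAD 'big_table'   AS (id:long, val:chararray);
    small = LOAD 'small_table' AS (id:long, name:chararray);
    j     = JOIN big BY id, small BY id USING 'skewed';
    STORE j INTO 'join_output';

Passing it on the command line (pig 
-Dmapreduce.job.split.metainfo.maxsize=-1 script.pig) should also work.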

> pig skewed join with a big table causes “Split metadata size exceeded 
> 10000000”
> -------------------------------------------------------------------------------
>
>                 Key: PIG-3411
>                 URL: https://issues.apache.org/jira/browse/PIG-3411
>             Project: Pig
>          Issue Type: Bug
>    Affects Versions: 0.10.0
>         Environment: Pig version 0.10.0-cdh3u4a
> Hadoop 0.20.2-cdh3u4a
>            Reporter: Ido Hadanny
>
> We have a Pig join between a small (16M rows) distinct table and a big (6B 
> rows) skewed table. A regular join finishes in 2 hours (after some tweaking). 
> We tried using a skewed join and were able to improve the performance to 20 
> minutes.
> HOWEVER, when we try a bigger skewed table (19B rows), we get this message 
> from the SAMPLER job:
> Split metadata size exceeded 10000000. Aborting job job_201305151351_21573 
> [ScriptRunner]
> at 
> org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:48)
> This is reproducible every time we use the skewed join, and does not happen 
> when we use the regular join.
> We tried setting mapreduce.jobtracker.split.metainfo.maxsize=-1, and we can 
> see it's there in the job.xml file, but it doesn't change anything!
> What's happening here? Is this a bug in the distribution sample created by 
> the skewed join? Why doesn't changing the param to -1 help?
> Also available at http://stackoverflow.com/q/17163112/574187
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
