[ https://issues.apache.org/jira/browse/HIVE-3387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13447555#comment-13447555 ]

Carl Steinbach commented on HIVE-3387:
--------------------------------------

@Navis: Please attach a copy of the patch to this ticket. Thanks.
                
> meta data file size exceeds limit
> ---------------------------------
>
>                 Key: HIVE-3387
>                 URL: https://issues.apache.org/jira/browse/HIVE-3387
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.7.1
>            Reporter: Alexander Alten-Lorenz
>            Assignee: Navis
>             Fix For: 0.9.1
>
>
> The cause is almost certainly that an ArrayList is used instead of a Set 
> structure in the split-locations API, so duplicate block locations are 
> retained and inflate the split metadata. This looks like a bug in Hive's 
> CombineFileInputFormat.
> Reproduce:
> Set mapreduce.jobtracker.split.metainfo.maxsize=100000000 when submitting 
> a Hive query, then run a large query that writes data into a partitioned 
> table. Because of the large number of splits, the job submitted to Hadoop 
> fails with the exception:
> meta data size exceeds 100000000.
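The duplicate-location behavior described above can be sketched as follows. This is not Hive's actual code, just a minimal illustration of why a List of split locations grows with the number of blocks while a Set stays bounded by the number of distinct hosts (the host names and block layout are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SplitLocations {
    public static void main(String[] args) {
        // Hypothetical host lists for three HDFS blocks; with a replication
        // factor of 3 on a small cluster, the same hosts recur per block.
        String[][] blockHosts = {
            {"node1", "node2", "node3"},
            {"node1", "node2", "node3"},
            {"node2", "node3", "node1"},
        };

        List<String> asList = new ArrayList<>();
        Set<String> asSet = new HashSet<>();
        for (String[] hosts : blockHosts) {
            for (String host : hosts) {
                asList.add(host); // duplicates kept: metadata grows per block
                asSet.add(host);  // duplicates dropped: bounded by cluster size
            }
        }

        // The serialized split meta info scales with the number of location
        // entries, so the List variant is what blows past the maxsize limit.
        System.out.println("list size = " + asList.size()); // 9
        System.out.println("set size  = " + asSet.size());  // 3
    }
}
```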

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
