[ https://issues.apache.org/jira/browse/HIVE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253800#comment-13253800 ]

Kevin Wilfong commented on HIVE-2918:
-------------------------------------

I created a task to resolve the test failures here:
https://issues.apache.org/jira/browse/HIVE-2952
> Hive Dynamic Partition Insert - move task not considering 
> 'hive.exec.max.dynamic.partitions' from CLI
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-2918
>                 URL: https://issues.apache.org/jira/browse/HIVE-2918
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 0.7.1, 0.8.0, 0.8.1
>         Environment: Cent OS 64 bit
>            Reporter: Bejoy KS
>            Assignee: Carl Steinbach
>         Attachments: HIVE-2918.D2703.1.patch
>
>
> Dynamic partition insert fails with an error about the number of partitions 
> created, even after the default value of 'hive.exec.max.dynamic.partitions' 
> is raised to 2000.
> Error Message:
> "Failed with exception Number of dynamic partitions created is 1413, which is 
> more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to 
> at least 1413."
> These are the properties set on the Hive CLI:
> hive> set hive.exec.dynamic.partition=true;
> hive> set hive.exec.dynamic.partition.mode=nonstrict;
> hive> set hive.exec.max.dynamic.partitions=2000;
> hive> set hive.exec.max.dynamic.partitions.pernode=2000;
> This is the query with the console error log:
> hive> 
>     > INSERT OVERWRITE TABLE partn_dyn Partition (pobox)
>     > SELECT country,state,pobox FROM non_partn_dyn;
> Total MapReduce jobs = 2
> Launching Job 1 out of 2
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201204021529_0002, Tracking URL = 
> http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201204021529_0002
> Kill Command = /usr/lib/hadoop/bin/hadoop job  
> -Dmapred.job.tracker=0.0.0.0:8021 -kill job_201204021529_0002
> 2012-04-02 16:05:28,619 Stage-1 map = 0%,  reduce = 0%
> 2012-04-02 16:05:39,701 Stage-1 map = 100%,  reduce = 0%
> 2012-04-02 16:05:50,800 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201204021529_0002
> Ended Job = 248865587, job is filtered out (removed at runtime).
> Moving data to: 
> hdfs://0.0.0.0/tmp/hive-cloudera/hive_2012-04-02_16-05-24_919_5976014408587784412/-ext-10000
> Loading data to table default.partn_dyn partition (pobox=null)
> Failed with exception Number of dynamic partitions created is 1413, which is 
> more than 1000. To solve this try to set hive.exec.max.dynamic.partitions to 
> at least 1413.
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MoveTask
> I checked the job.xml of the first map-only job; the value 
> hive.exec.max.dynamic.partitions=2000 is reflected there, but the move task 
> takes the default value from hive-site.xml. If I change the value in 
> hive-site.xml, the job completes successfully. Bottom line: the property 
> 'hive.exec.max.dynamic.partitions' set on the CLI is not being considered by 
> the move task.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
