[ https://issues.apache.org/jira/browse/HIVE-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14202915#comment-14202915 ]

Thomas Friedrich commented on HIVE-7955:
----------------------------------------

The test limit_partition_metadataonly fails with 
2014-11-06 18:40:12,891 ERROR ql.Driver (SessionState.java:printError(829)) - 
FAILED: SemanticException Number of partitions scanned (=4) on table srcpart 
exceeds limit (=1). This is controlled by hive.limit.query.max.table.partition.
org.apache.hadoop.hive.ql.parse.SemanticException: Number of partitions scanned 
(=4) on table srcpart exceeds limit (=1). This is controlled by 
hive.limit.query.max.table.partition.
        at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.enforceScanLimits(SemanticAnalyzer.java:10358)
        at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10190)
        at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:221)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:419)

In the test, SemanticAnalyzer.enforceScanLimits expects only one partition 
(ds=2008-04-08/hr=11) but gets four:
[srcpart(ds=2008-04-08/hr=11), srcpart(ds=2008-04-08/hr=12), 
srcpart(ds=2008-04-09/hr=11), srcpart(ds=2008-04-09/hr=12)]

The log shows that the PartitionPruner ran and retained only one partition, 
as expected:
2014-11-07 14:18:09,147 DEBUG ppr.PartitionPruner 
(PartitionPruner.java:prune(206)) - Filter w/ compacting: ((hr = 11) and (ds = 
'2008-04-08')); filter w/o compacting: ((hr = 11) and (ds = '2008-04-08'))
2014-11-07 14:18:09,147 INFO  metastore.HiveMetaStore 
(HiveMetaStore.java:logInfo(719)) - 0: get_partitions_by_expr : db=default 
tbl=srcpart
2014-11-07 14:18:09,165 DEBUG ppr.PartitionPruner 
(PartitionPruner.java:prunePartitionNames(491)) - retained partition: 
ds=2008-04-08/hr=11

Created JIRA HIVE-8788 to track this failure.
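
To illustrate the mismatch, here is a minimal, simplified sketch (not the actual Hive source) of the kind of check enforceScanLimits performs, assuming it simply compares the size of the partition list it is handed against hive.limit.query.max.table.partition. The failure above is consistent with the unpruned list (4 partitions) reaching the check instead of the pruned one:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical simplified model of SemanticAnalyzer.enforceScanLimits:
// the check itself is trivial, so the bug must be in which partition
// list is passed in (pruned vs. unpruned), not in the comparison.
public class ScanLimitSketch {

    static void enforceScanLimits(String table, List<String> partitions, int limit) {
        if (limit >= 0 && partitions.size() > limit) {
            throw new RuntimeException("Number of partitions scanned (=" + partitions.size()
                + ") on table " + table + " exceeds limit (=" + limit
                + "). This is controlled by hive.limit.query.max.table.partition.");
        }
    }

    public static void main(String[] args) {
        // What the PartitionPruner retained (per the DEBUG log above).
        List<String> pruned = Arrays.asList("ds=2008-04-08/hr=11");
        // What enforceScanLimits actually received in the failing test.
        List<String> unpruned = Arrays.asList(
            "ds=2008-04-08/hr=11", "ds=2008-04-08/hr=12",
            "ds=2008-04-09/hr=11", "ds=2008-04-09/hr=12");

        enforceScanLimits("srcpart", pruned, 1);       // passes: 1 <= 1
        try {
            enforceScanLimits("srcpart", unpruned, 1); // throws: 4 > 1
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```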

> Investigate query failures (4)
> ------------------------------
>
>                 Key: HIVE-7955
>                 URL: https://issues.apache.org/jira/browse/HIVE-7955
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Brock Noland
>            Assignee: Thomas Friedrich
>
> I ran all q-file tests and the following failed with an exception:
> http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-SPARK-ALL-TESTS-Build/lastCompletedBuild/testReport/
> We don't necessarily want to run all these tests as part of the Spark tests, 
> but we should understand why they failed with an exception. This JIRA is to 
> look into these failures and document them with one of:
> * New JIRA
> * Covered under existing JIRA
> * More investigation required
> Tests:
> {noformat}
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dynpart_sort_optimization  12 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_schemeAuthority2  0.23 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part8  10 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_4  11 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_orc_analyze  8 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_tez_join_hash  0.98 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_hook_context_cs  2.1 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_overwrite_local_directory_1  3.7 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_archive_excludeHadoop20  27 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_9  8.2 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_partition_metadataonly  0.77 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_num_reducers2  7 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_bigdata  0.6 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketsortoptimize_insert_6  6.6 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_25  2.6 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dbtxnmgr_query3  0.48 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_16  8.5 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_empty_dir_in_table  2.6 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input33  1.3 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_admin_almighty1  2.8 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_udf_context_aware  0.23 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_view_sqlstd  4.1 sec  2
> org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
