[ https://issues.apache.org/jira/browse/SPARK-17992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15586892#comment-15586892 ]

Michael Allman commented on SPARK-17992:
----------------------------------------

cc [~ekhliang] [~cloud_fan]

> HiveClient.getPartitionsByFilter throws an exception for some unsupported 
> filters when hive.metastore.try.direct.sql=false
> --------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17992
>                 URL: https://issues.apache.org/jira/browse/SPARK-17992
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: Michael Allman
>
> We recently added (and enabled by default) table partition pruning for 
> partitioned Hive tables converted to use {{TableFileCatalog}}. When the 
> Hive configuration option {{hive.metastore.try.direct.sql}} is set to 
> {{false}}, Hive throws an exception for unsupported filter expressions. 
> For example, attempting to filter on an integer partition column throws an 
> {{org.apache.hadoop.hive.metastore.api.MetaException}}.
> I discovered this behavior because VideoAmp uses the CDH version of Hive with 
> a PostgreSQL metastore DB. In this configuration, CDH sets 
> {{hive.metastore.try.direct.sql}} to {{false}} by default, and queries that 
> filter on a non-string partition column will fail. That would be a rather 
> rude surprise for these Spark 2.1 users...
> I'm not sure exactly what behavior we should expect, but I suggest that 
> {{HiveClientImpl.getPartitionsByFilter}} catch this metastore exception and 
> return all partitions instead. This is what Spark already does for Hive 0.12, 
> which does not support metastore-side partition filtering at all.
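The catch-and-fall-back behavior proposed above can be sketched roughly as follows. This is only an illustration of the pattern, not the actual Spark or Hive code: {{Partition}} and {{MetastoreClient}} here are hypothetical stand-ins for the real metastore types in {{org.apache.spark.sql.hive.client}}, and the simulated failure stands in for the {{MetaException}} Hive raises when {{hive.metastore.try.direct.sql=false}} rejects a filter.

```scala
import scala.util.control.NonFatal

object PartitionPruningFallback {
  // Hypothetical stand-ins for the real metastore types.
  case class Partition(values: Seq[String])

  class MetastoreClient {
    private val allPartitions = Seq(Partition(Seq("1")), Partition(Seq("2")))

    // Simulates Hive rejecting an unsupported filter expression when
    // hive.metastore.try.direct.sql=false (the real failure is a MetaException).
    def listPartitionsByFilter(filter: String): Seq[Partition] =
      if (filter.nonEmpty) {
        throw new RuntimeException("MetaException: unsupported filter: " + filter)
      } else {
        allPartitions
      }

    def listAllPartitions(): Seq[Partition] = allPartitions
  }

  // Sketch of the proposed behavior: try pushing the filter down to the
  // metastore; if it throws, fall back to fetching all partitions, as Spark
  // already does for Hive 0.12, and let Spark prune them client-side.
  def getPartitionsByFilter(client: MetastoreClient, filter: String): Seq[Partition] =
    try {
      client.listPartitionsByFilter(filter)
    } catch {
      case NonFatal(_) => client.listAllPartitions()
    }
}
```

The trade-off is that the fallback returns every partition, so pruning then happens on the Spark side rather than in the metastore, which is slower but no worse than the pre-pruning behavior.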



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
