Github user ericl commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15673#discussion_r85856641
  
    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
    @@ -585,7 +586,31 @@ private[client] class Shim_v0_13 extends Shim_v0_12 {
             getAllPartitionsMethod.invoke(hive, table).asInstanceOf[JSet[Partition]]
           } else {
             logDebug(s"Hive metastore filter is '$filter'.")
    -        getPartitionsByFilterMethod.invoke(hive, table, filter).asInstanceOf[JArrayList[Partition]]
    +        val tryDirectSqlConfVar = HiveConf.ConfVars.METASTORE_TRY_DIRECT_SQL
    +        val tryDirectSql =
    +          hive.getConf.getBoolean(tryDirectSqlConfVar.varname, tryDirectSqlConfVar.defaultBoolVal)
    +        try {
    +          // Hive may throw an exception when calling this method in some circumstances, such as
    +          // when filtering on a non-string partition column when the hive config key
    +          // hive.metastore.try.direct.sql is false
    +          getPartitionsByFilterMethod.invoke(hive, table, filter)
    +            .asInstanceOf[JArrayList[Partition]]
    +        } catch {
    +          case ex: InvocationTargetException if ex.getCause.isInstanceOf[MetaException] &&
    +              !tryDirectSql =>
    +            logWarning("Caught Hive MetaException attempting to get partition metadata by " +
    +              "filter from Hive. Falling back to fetching all partition metadata, which will " +
    +              "degrade performance. Consider modifying your Hive metastore configuration to " +
    +              s"set ${tryDirectSqlConfVar.varname} to true.", ex)
    +            // HiveShim clients are expected to handle a superset of the requested partitions
    +            getAllPartitionsMethod.invoke(hive, table).asInstanceOf[JSet[Partition]]
    +          case ex: InvocationTargetException if ex.getCause.isInstanceOf[MetaException] &&
    +              tryDirectSql =>
    +            throw new RuntimeException("Caught Hive MetaException attempting to get partition " +
    +              "metadata by filter from Hive. Set the Spark configuration setting " +
    --- End diff --
    
    You probably want to word it to suggest disabling partition management only as a
    workaround.
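
    The diff above follows a catch-and-fall-back pattern: attempt the filtered
    partition lookup, fall back to fetching all partitions when the metastore has
    direct SQL disabled, and rethrow otherwise. A minimal self-contained sketch of
    that control flow, with hypothetical stand-in names (`fetchFiltered`,
    `fetchAll`, `MetaFailure`) rather than the real Hive APIs:

    ```scala
    object PartitionFallbackSketch {
      // Stand-in for Hive's MetaException in this sketch.
      final case class MetaFailure(msg: String) extends Exception(msg)

      def getPartitions(
          filter: String,
          tryDirectSql: Boolean,
          fetchFiltered: String => Seq[String],
          fetchAll: () => Seq[String]): Seq[String] = {
        try {
          fetchFiltered(filter)
        } catch {
          case _: MetaFailure if !tryDirectSql =>
            // Callers must tolerate receiving a superset of the requested
            // partitions, so returning everything is a safe (if slow) fallback.
            fetchAll()
          case ex: MetaFailure =>
            // Direct SQL was enabled, so the failure is unexpected: surface it.
            throw new RuntimeException(
              "filtered partition fetch failed with direct SQL enabled", ex)
        }
      }

      def main(args: Array[String]): Unit = {
        val all = Seq("p=1", "p=2", "p=3")
        val failing: String => Seq[String] = _ => throw MetaFailure("boom")
        // Direct SQL off: the failed filtered fetch falls back to all partitions.
        println(getPartitions("p=1", tryDirectSql = false, failing, () => all))
      }
    }
    ```

    The two guarded `case` clauses mirror the diff: the flag decides whether the
    same exception is recoverable or fatal.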

