Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22614#discussion_r222123856
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -746,34 +746,20 @@ private[client] class Shim_v0_13 extends Shim_v0_12 {
         getAllPartitionsMethod.invoke(hive, table).asInstanceOf[JSet[Partition]]
       } else {
         logDebug(s"Hive metastore filter is '$filter'.")
-        val tryDirectSqlConfVar = HiveConf.ConfVars.METASTORE_TRY_DIRECT_SQL
-        // We should get this config value from the metaStore. otherwise hit SPARK-18681.
-        // To be compatible with hive-0.12 and hive-0.13, In the future we can achieve this by:
-        // val tryDirectSql = hive.getMetaConf(tryDirectSqlConfVar.varname).toBoolean
-        val tryDirectSql = hive.getMSC.getConfigValue(tryDirectSqlConfVar.varname,
-          tryDirectSqlConfVar.defaultBoolVal.toString).toBoolean
         try {
           // Hive may throw an exception when calling this method in some circumstances, such as
-          // when filtering on a non-string partition column when the hive config key
-          // hive.metastore.try.direct.sql is false
+          // when filtering on a non-string partition column.
           getPartitionsByFilterMethod.invoke(hive, table, filter)
             .asInstanceOf[JArrayList[Partition]]
         } catch {
-          case ex: InvocationTargetException if ex.getCause.isInstanceOf[MetaException] &&
-              !tryDirectSql =>
+          case ex: InvocationTargetException if ex.getCause.isInstanceOf[MetaException] =>
--- End diff ---
Also, it's not blindly calling that API, right? It was already being called before when direct SQL was disabled; in the other case, the code just threw an exception. So now, instead of erroring out, it will work, just more slowly than expected.

Unless there's some retry at a higher layer that I'm not aware of.
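
For reference, a minimal, self-contained sketch of the fallback pattern being discussed, with the metastore calls abstracted as plain functions. The helper names (fetchByFilter, fetchAll, clientSideMatch) are placeholders for illustration, not Spark or Hive APIs; the real shim invokes getPartitionsByFilterMethod and getAllPartitionsMethod reflectively, as in the diff above.

    // Sketch: try the metastore-side filter first; if the metastore rejects it,
    // fall back to fetching all partitions and filtering on the client.
    import scala.util.control.NonFatal

    object PartitionFallbackSketch {
      final case class Partition(values: Map[String, String])

      def getPartitionsByFilter(
          filter: String,
          fetchByFilter: String => Seq[Partition],  // fast path: filter pushed to the metastore
          fetchAll: () => Seq[Partition],           // slow path: pull every partition
          clientSideMatch: Partition => Boolean): Seq[Partition] = {
        try {
          fetchByFilter(filter)
        } catch {
          case NonFatal(e) =>
            // Before the change, the exception was rethrown unless direct SQL was
            // disabled; with the change, any failure here takes the slow path.
            println(s"Metastore rejected filter '$filter' (${e.getMessage}); fetching all partitions")
            fetchAll().filter(clientSideMatch)
        }
      }

      def main(args: Array[String]): Unit = {
        val all = Seq(Partition(Map("p" -> "1")), Partition(Map("p" -> "2")))
        val pruned = getPartitionsByFilter(
          filter = "p = 1",
          fetchByFilter = _ => throw new RuntimeException("non-string partition column"),
          fetchAll = () => all,
          clientSideMatch = _.values.get("p").contains("1"))
        println(pruned)  // only the p=1 partition survives the client-side filter
      }
    }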
---