Github user kmanamcheri commented on the issue:
https://github.com/apache/spark/pull/22614
> Let us add a conf to control it? Failing fast is better than hanging. If
> users want to get all partitions, they can change the conf by themselves.
@gatorsmile We already have the config option
"spark.sql.hive.metastorePartitionPruning". If that is set to false, we
never push partition predicates down to HMS. I will add
"spark.sql.hive.metastorePartitionPruningFallback", which, in combination
with the previous option, controls the fallback behavior. Irrespective of the
value of Hive direct SQL, if the pruning fallback is enabled, we will catch
the exception and fall back to fetching all partitions. Does this sound like
a reasonable compromise @mallman ?
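
The interaction of the two options can be sketched roughly as below. This is an illustrative Python mock, not Spark's actual internals; the client class, method names, and keyword arguments are all hypothetical stand-ins for the HMS client calls.

```python
# Illustrative sketch of the proposed config semantics. FakeMetastoreClient
# and list_partitions are hypothetical names, not Spark code.

class FakeMetastoreClient:
    """Stand-in for an HMS client whose predicate pushdown may fail
    (e.g. when Hive direct SQL is disabled)."""
    def __init__(self, partitions, pushdown_fails=True):
        self.partitions = partitions
        self.pushdown_fails = pushdown_fails

    def get_partitions_by_filter(self, table, predicate):
        if self.pushdown_fails:
            raise RuntimeError("HMS filter pushdown failed")
        return [p for p in self.partitions if predicate(p)]

    def get_all_partitions(self, table):
        return list(self.partitions)


def list_partitions(client, table, predicate,
                    pruning=True, pruning_fallback=True):
    # pruning          ~ spark.sql.hive.metastorePartitionPruning
    # pruning_fallback ~ the proposed
    #                    spark.sql.hive.metastorePartitionPruningFallback
    if not pruning:
        # Pruning disabled: never push predicates down to HMS.
        return client.get_all_partitions(table)
    try:
        return client.get_partitions_by_filter(table, predicate)
    except Exception:
        if pruning_fallback:
            # Catch the HMS error and fetch every partition instead.
            return client.get_all_partitions(table)
        raise  # fallback disabled: fail fast
```

With the fallback enabled, a failed pushdown degrades to fetching all partitions; with it disabled, the error propagates immediately, matching the fail-fast behavior discussed above.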