Repository: spark
Updated Branches:
  refs/heads/master 59cccbda4 -> d2d438d1d


[SPARK-18167][SQL] Add debug code for SQLQuerySuite flakiness when metastore partition pruning is enabled

## What changes were proposed in this pull request?

org.apache.spark.sql.hive.execution.SQLQuerySuite is flaky when Hive metastore 
partition pruning is enabled.
Based on the stack traces, this appears to be an old issue where Hive fails to 
cast a numeric partition column ("Invalid character string format for type 
DECIMAL"). There are two possibilities: either we are somehow corrupting the 
partition table so that the column holds non-decimal values, or this is a 
transient Derby issue.

This PR logs the result of the retry when this exception is encountered, so we 
can confirm what is going on.
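The diff below wraps both the filtered and unfiltered reflective calls in `scala.util.Try` and logs whether each succeeds before rethrowing the original exception. As a minimal standalone sketch of that diagnostic pattern (with hypothetical `fetchFiltered`/`fetchAll` thunks standing in for the reflective Hive metastore calls):

```scala
import scala.util.Try

// Sketch of the retry-and-log diagnostic used in this patch, assuming
// hypothetical fetchFiltered/fetchAll thunks in place of the reflective
// Hive calls. The original exception is always rethrown; the retries
// exist only to record extra evidence about the failure mode.
object RetryDiagnostics {
  def diagnose[T](fetchFiltered: () => T, fetchAll: () => T): T = {
    try {
      fetchFiltered()
    } catch {
      case e: Exception =>
        val retry = Try(fetchFiltered()) // does the same call fail twice in a row?
        val full = Try(fetchAll())       // does the unfiltered fetch still work?
        Console.err.println(s"filtered fetch failed, retry success = ${retry.isSuccess}")
        Console.err.println(s"filtered fetch failed, full fetch success = ${full.isSuccess}")
        throw e
    }
  }
}
```

Comparing the two logged flags is what distinguishes the hypotheses: a transient Derby hiccup should show up as a successful retry, while a persistently failing filtered call points at the data or the filter pushdown itself.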

## How was this patch tested?

n/a

cc yhuai

Author: Eric Liang <e...@databricks.com>

Closes #15676 from ericl/spark-18167.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d2d438d1
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d2d438d1
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d2d438d1

Branch: refs/heads/master
Commit: d2d438d1d549628a0183e468ed11d6e85b5d6061
Parents: 59cccbd
Author: Eric Liang <e...@databricks.com>
Authored: Sat Oct 29 06:49:57 2016 +0200
Committer: Reynold Xin <r...@databricks.com>
Committed: Sat Oct 29 06:49:57 2016 +0200

----------------------------------------------------------------------
 .../org/apache/spark/sql/hive/client/HiveShim.scala  | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/d2d438d1/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala
----------------------------------------------------------------------
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala
index 3238770..4bbbd66 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala
@@ -24,6 +24,7 @@ import java.util.{ArrayList => JArrayList, List => JList, Map => JMap, Set => JS
 import java.util.concurrent.TimeUnit
 
 import scala.collection.JavaConverters._
+import scala.util.Try
 import scala.util.control.NonFatal
 
 import org.apache.hadoop.fs.{FileSystem, Path}
@@ -585,7 +586,19 @@ private[client] class Shim_v0_13 extends Shim_v0_12 {
         getAllPartitionsMethod.invoke(hive, table).asInstanceOf[JSet[Partition]]
       } else {
         logDebug(s"Hive metastore filter is '$filter'.")
-        getPartitionsByFilterMethod.invoke(hive, table, filter).asInstanceOf[JArrayList[Partition]]
+        try {
+          getPartitionsByFilterMethod.invoke(hive, table, filter)
+            .asInstanceOf[JArrayList[Partition]]
+        } catch {
+          case e: InvocationTargetException =>
+            // SPARK-18167 retry to investigate the flaky test. This should be reverted before
+            // the release is cut.
+            val retry = Try(getPartitionsByFilterMethod.invoke(hive, table, filter))
+            val full = Try(getAllPartitionsMethod.invoke(hive, table))
+            logError("getPartitionsByFilter failed, retry success = " + retry.isSuccess)
+            logError("getPartitionsByFilter failed, full fetch success = " + full.isSuccess)
+            throw e
+        }
       }
 
     partitions.asScala.toSeq


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
