Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10942#discussion_r51336391
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala ---
@@ -59,6 +60,138 @@ class BucketedReadSuite extends QueryTest with SQLTestUtils with TestHiveSinglet
     }
   }
 
+  // To verify if pruning works, we compare the results before filtering
+  test("read partitioning bucketed tables with bucket pruning filters") {
+    val df = (10 until 50).map(i => (i % 5, i % 13 + 10, i.toString)).toDF("i", "j", "k")
+
+    withTable("bucketed_table") {
+      // The number of buckets should be large enough to make sure each bucket contains
+      // at most one bucketing key value.
+      // json does not support predicate push-down, and thus json is used here
+      df.write
+        .format("json")
+        .partitionBy("i")
+        .bucketBy(50, "j")
+        .saveAsTable("bucketed_table")
+      for (j <- 10 until 23) {
+        // Case 1: EqualTo
+        val filter1 = hiveContext.table("bucketed_table").select("i", "j", "k")
+          .filter($"j" === j).queryExecution.executedPlan
+        checkAnswer(
+          df.select("i", "j", "k").filter($"j" === j).sort("i", "j", "k"),
+          filter1.children.head.executeCollectPublic().sortBy(_.toString()))
--- End diff --
maybe we should make a method for this, there is a lot of similar code.
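
For context on why an equality filter can prune down to a single bucket file: a bucketed table assigns each row to bucket hash(key) mod numBuckets, so all rows with a given key land in one bucket. Below is a minimal, self-contained sketch of that assignment in plain Scala; it uses `hashCode` rather than Spark's actual Murmur3-based hash, and the `BucketPruningSketch`/`bucketId` names are illustrative, not Spark API:

```scala
object BucketPruningSketch {
  // Map a key to one of numBuckets buckets: hash, then take a
  // non-negative modulo. With an EqualTo filter on the bucketing
  // column, only the one matching bucket needs to be scanned.
  def bucketId(key: Any, numBuckets: Int): Int = {
    val mod = key.hashCode() % numBuckets
    if (mod < 0) mod + numBuckets else mod
  }

  def main(args: Array[String]): Unit = {
    // 13 distinct values of j (10 until 23) over 50 buckets: with
    // numBuckets well above the number of distinct keys, collisions
    // are unlikely, which is what the test comment about buckets
    // being "large enough" is guarding against.
    (10 until 23).foreach { j =>
      println(s"j=$j -> bucket ${bucketId(j, 50)}")
    }
  }
}
```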
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]