Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10942#discussion_r51347234
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala ---
    @@ -59,6 +61,141 @@ class BucketedReadSuite extends QueryTest with SQLTestUtils with TestHiveSinglet
         }
       }
     
    +  // To verify if pruning works, we compare the results before filtering
    +  private def checkPrunedAnswers(
    +      sourceDataFrame: DataFrame,
    +      filterCondition: Column,
    +      expectedAnswer: DataFrame): Unit = {
    +    val filter = sourceDataFrame.filter(filterCondition).queryExecution.executedPlan
    +    assert(
    +      filter.isInstanceOf[execution.Filter] ||
    +      (filter.isInstanceOf[WholeStageCodegen] &&
    +        filter.asInstanceOf[WholeStageCodegen].plan.isInstanceOf[execution.Filter]))
    +    checkAnswer(
    +      expectedAnswer.orderBy(expectedAnswer.logicalPlan.output.map(attr => Column(attr)) : _*),
    +      filter.children.head.executeCollectPublic().sortBy(_.toString()))
    +  }
    +
    +  test("read partitioning bucketed tables with bucket pruning filters") {
    +    val df = (10 until 50).map(i => (i % 5, i % 13 + 10, i.toString)).toDF("i", "j", "k")
    +
    +    withTable("bucketed_table") {
    +      // The number of buckets should be large enough to make sure each bucket contains
    +      // at most one bucketing key value.
    +      // json does not support predicate push-down, and thus json is used here
    --- End diff ---
    
    Bucket pruning avoids scanning many useless bucket files, but each bucket file can still contain many different values. Row filtering in Parquet is a great feature for efficiently scanning a given bucket. We need both to achieve the best performance.
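
    For context, here is a minimal sketch of what bucket pruning buys us. The object name and the toy modulo hash below are illustrative placeholders (Spark actually derives the bucket id from a Murmur3 hash of the bucketing key):

    ```scala
    // A sketch of bucket pruning with a toy hash; not Spark's real implementation.
    object BucketPruningSketch {
      // Map a bucketing key to a non-negative bucket id.
      def bucketIdFor(key: Int, numBuckets: Int): Int =
        ((key.hashCode % numBuckets) + numBuckets) % numBuckets

      def main(args: Array[String]): Unit = {
        val numBuckets = 8
        // One file per bucket, as written out by a bucketed table.
        val bucketFiles = (0 until numBuckets).map(i => f"part-00000_bucket$i%02d.json")
        // An equality filter such as `j = 42` can match at most one bucket,
        // so every other bucket file is skipped without being opened.
        val wanted = bucketIdFor(42, numBuckets)
        println(s"scanning 1 of ${bucketFiles.length} files: ${bucketFiles(wanted)}")
      }
    }
    ```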
    
    Let me try to explain why record filtering in Parquet alone cannot resolve all the issues:
      - The current mechanism is very limited. Row-group filtering relies only on the min/max values recorded for each row group, so it can still scan many useless row groups (see the sketch after this list).
      - It is not free. It still needs to scan the metadata to prune row groups.
      - The Parquet team is trying to improve it by adding more advanced statistics to the metadata (e.g., bloom filters in PARQUET-41 and dictionary-based filtering in PARQUET-384). Also, a few limitations still exist (e.g., PARQUET-295).

