Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13728#discussion_r67466643
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala ---
    @@ -545,4 +545,28 @@ class ParquetFilterSuite extends QueryTest with ParquetTest with SharedSQLContex
           }
         }
       }
    +
    +  test("Verify SQLConf PARQUET_FILTER_PUSHDOWN_ENABLED") {
    +    import testImplicits._
    +
    +    Seq("true", "false").foreach { pushDown =>
    +      // When SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key is set to true and all the data types
    +      // of the table schema are AtomicType, the parquet reader uses vectorizedReader.
    +      // In this mode, filters will not be pushed down, no matter whether
    +      // SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key is true or not.
    +      withSQLConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> pushDown,
    +          SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> "false") {
    +        withTempPath { dir =>
    +          val path = s"${dir.getCanonicalPath}/table1"
    +          (1 to 3).map(i => (i, i.toString)).toDF("a", "b").write.parquet(path)
    +          // When a filter is pushed to Parquet, Parquet can apply it to every row.
    --- End diff --
    
    @davies You are right. Sorry, I simply copied this comment from the other test cases. Let me remove all of them. Thanks!
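
    For reference, here is a minimal, standalone sketch of what the test above exercises: toggling `spark.sql.parquet.filterPushdown` (the key behind `SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED`) with the vectorized reader disabled, then inspecting the physical plan for pushed filters. This is not code from the PR; the `ParquetPushdownCheck` object, the local `SparkSession`, and the temporary path are illustrative assumptions.

    ```scala
    import java.nio.file.Files

    import org.apache.spark.sql.SparkSession

    // Rough sketch only: observes how the Parquet filter-pushdown and
    // vectorized-reader confs interact by printing the physical plan.
    object ParquetPushdownCheck {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[*]")
          .appName("parquet-pushdown-check")
          .getOrCreate()
        import spark.implicits._

        // Write a tiny Parquet table, mirroring the data used in the test above.
        val path = Files.createTempDirectory("table1").toString
        (1 to 3).map(i => (i, i.toString)).toDF("a", "b")
          .write.mode("overwrite").parquet(path)

        // Keep the vectorized reader off so the row-based Parquet record reader
        // (the code path where record-level pushdown applies) is exercised.
        spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")

        Seq("true", "false").foreach { pushDown =>
          spark.conf.set("spark.sql.parquet.filterPushdown", pushDown)
          val df = spark.read.parquet(path).filter($"a" === 1)
          // With pushdown enabled, the Parquet scan node of the plan normally
          // lists the pushed predicates, e.g.
          // "PushedFilters: [IsNotNull(a), EqualTo(a,1)]".
          println(s"filterPushdown=$pushDown\n${df.queryExecution.executedPlan}\n")
        }

        spark.stop()
      }
    }
    ```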

