Github user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21302#discussion_r187745471
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala ---
    @@ -602,6 +602,16 @@ class ParquetFilterSuite extends QueryTest with ParquetTest with SharedSQLContex
           }
         }
       }
    +
    +  test("SPARK-23852: Broken Parquet push-down for partially-written stats") {
    +    // parquet-1217.parquet contains a single column with values -1, 0, 1, 2 and null.
    +    // The row-group statistics include null counts, but not min and max values, which
    +    // triggers PARQUET-1217.
    +    val df = readResourceParquetFile("test-data/parquet-1217.parquet")
    --- End diff ---
    
    Since this test case assumes `spark.sql.parquet.filterPushdown=true`, let's use the following.
    ```scala
    withSQLConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> "true",
    ```
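    For reference, a minimal sketch of what the wrapped test could look like inside this suite. The trailing comma in the snippet above suggests additional config pairs may follow; only the push-down flag is shown here. The column name `col` and the count-based assertion are illustrative assumptions, not part of the diff:
    ```scala
    test("SPARK-23852: Broken Parquet push-down for partially-written stats") {
      // Explicitly enable Parquet filter push-down so the test does not depend on the
      // session default.
      withSQLConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> "true") {
        // parquet-1217.parquet contains a single column with values -1, 0, 1, 2 and null.
        // The row-group statistics include null counts, but not min and max values, which
        // triggers PARQUET-1217.
        val df = readResourceParquetFile("test-data/parquet-1217.parquet")

        // With the push-down bug present, the incomplete row-group stats could prune the
        // row group and return no rows; a fixed reader should return the two positive values.
        // Hypothetical assertion: assumes the single column is named `col`.
        assert(df.where("col > 0").count() === 2)
      }
    }
    ```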

