HyukjinKwon commented on a change in pull request #26973: [SPARK-30323][SQL] Support filters pushdown in CSV datasource
URL: https://github.com/apache/spark/pull/26973#discussion_r366676457
 
 

 ##########
 File path: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/csv/UnivocityParserSuite.scala
 ##########
 @@ -267,4 +269,63 @@ class UnivocityParserSuite extends SparkFunSuite with SQLHelper {
     assert(convertedValue.isInstanceOf[UTF8String])
     assert(convertedValue == expected)
   }
+
+  test("skipping rows using pushdown filters") {
+    def check(
+        input: String = "1,a",
+        dataSchema: String = "i INTEGER, s STRING",
+        requiredSchema: String = "i INTEGER",
+        filters: Seq[Filter],
+        expected: Seq[InternalRow]): Unit = {
+      def getSchema(str: String): StructType = str match {
 
 Review comment:
   @MaxGekk, it's a nit, but I wouldn't add this nested function. Nested functions are discouraged in general because they make the code harder to read. How about simply doing `requiredSchema = StructType.fromDDL("i INTEGER, s STRING")` or `requiredSchema = new StructType()` at the call sites? I think such one- or two-line duplications are fine.
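
   For illustration, the suggestion amounts to something like the sketch below. The `check` body is elided, and the `StructType` defaults are my assumption based on the string defaults visible in the hunk above:

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.sources.Filter
import org.apache.spark.sql.types.StructType

// Sketch: the nested getSchema helper is dropped; the defaults are built
// directly with StructType.fromDDL, and callers that need a different shape
// construct the StructType themselves at the call site.
def check(
    input: String = "1,a",
    dataSchema: StructType = StructType.fromDDL("i INTEGER, s STRING"),
    requiredSchema: StructType = StructType.fromDDL("i INTEGER"),
    filters: Seq[Filter],
    expected: Seq[InternalRow]): Unit = {
  // ... body as in the PR: parse `input` against `dataSchema` with `filters`
  // pushed down, then compare the projected rows to `expected`.
}

// Hypothetical call site with an empty required schema:
// check(requiredSchema = new StructType(), filters = Seq(...), expected = Seq(...))
```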
