MaxGekk commented on a change in pull request #26973: [SPARK-30323][SQL]
Support filters pushdown in CSV datasource
URL: https://github.com/apache/spark/pull/26973#discussion_r365786665
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
##########
@@ -2204,4 +2204,37 @@ class CSVSuite extends QueryTest with
SharedSparkSession with TestCsvData {
checkAnswer(resultDF, Row("a", 2, "e", "c"))
}
}
+
+ test("filters push down") {
+ Seq(true, false).foreach { multiLine =>
Review comment:
Lines in CSV cannot be split, so the input should be the same. The difference
is how we read the file: as a whole or line by line. But you are right, there
should be no difference for these changes since I touched only value conversions.