huaxingao commented on code in PR #36332:
URL: https://github.com/apache/spark/pull/36332#discussion_r857303695


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/csv/CSVScan.scala:
##########
@@ -70,8 +70,8 @@ case class CSVScan(
     ExprUtils.verifyColumnNameOfCorruptRecord(dataSchema, parsedOptions.columnNameOfCorruptRecord)
     // Don't push any filter which refers to the "virtual" column which cannot present in the input.
     // Such filters will be applied later on the upper layer.
-    val actualFilters =
-      pushedFilters.filterNot(_.references.contains(parsedOptions.columnNameOfCorruptRecord))
+    val actualFilters = pushedFilters.map(_.toV1)
+      .filterNot(_.references.contains(parsedOptions.columnNameOfCorruptRecord))

Review Comment:
   Currently `OrcFilters`, `ParquetFilters`, `JacksonParser`, and `UnivocityParser` only take v1 filters. Refactoring them to also work with v2 filters is actually quite a lot of work, so I'd prefer to make those changes in separate follow-up PRs.
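
   As a minimal sketch of the pattern in the diff above, using hypothetical stand-in types (`V2Predicate`, `toV1`, and `V1Filter` below are illustrative assumptions, not Spark's actual classes): the v2 predicates are lowered to v1 filters first, then any filter referencing the "virtual" corrupt-record column is dropped, since that column never appears in the parsed input.

   ```scala
   // Hypothetical stand-in for a v1 data source filter with its referenced columns.
   final case class V1Filter(references: Seq[String])

   // Hypothetical stand-in for a v2 predicate that can be lowered to a v1 filter.
   final case class V2Predicate(refs: Seq[String]) {
     def toV1: V1Filter = V1Filter(refs)
   }

   object FilterPruning {
     // Lower each pushed v2 predicate to v1, then drop filters that reference
     // the corrupt-record column (it cannot be present in the input).
     def actualFilters(pushed: Seq[V2Predicate], corruptCol: String): Seq[V1Filter] =
       pushed.map(_.toV1).filterNot(_.references.contains(corruptCol))
   }
   ```

   The upper layer would then apply the dropped filters itself after parsing, mirroring the "applied later on the upper layer" comment in the diff.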



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

