HyukjinKwon commented on a change in pull request #26973: [SPARK-30323][SQL]
Support filters pushdown in CSV datasource
URL: https://github.com/apache/spark/pull/26973#discussion_r366152483
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/UnivocityParser.scala
##########
@@ -72,7 +95,14 @@ class UnivocityParser(
new CsvParser(parserSetting)
}
- private val row = new GenericInternalRow(requiredSchema.length)
+ // The row is used as a temporary placeholder of parsed and converted values.
+ // It is needed for applying the pushdown filters.
+ private val parsedRow = new GenericInternalRow(parsedSchema.length)
+ // Pre-allocated Seq to avoid the overhead of the seq builder.
+ private val requiredRow = Seq(new GenericInternalRow(requiredSchema.length))
Review comment:
So, per https://github.com/apache/spark/pull/26973/files#r366151938, it can
be `Option`, I believe.
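A minimal sketch of the suggestion (the `SketchParser` class, `predicate`, and the plain `Array[Any]` row are illustrative assumptions, not the real `UnivocityParser` API): the "row matched / row filtered out" result can be modeled with `Option` instead of a pre-allocated single-element `Seq`.

```scala
// Hypothetical simplified parser; not Spark's actual code.
final class SketchParser(numFields: Int, predicate: Array[Any] => Boolean) {
  // Placeholder row allocated once and reused across records,
  // mirroring the pre-allocation in the patch under review.
  private val row = new Array[Any](numFields)

  // Some(row) when the record passes the pushed-down filters, None otherwise.
  def parse(tokens: Array[Any]): Option[Array[Any]] = {
    System.arraycopy(tokens, 0, row, 0, numFields)
    if (predicate(row)) Some(row) else None
  }
}
```

The trade-off: `Option` makes the "no row" case explicit in the return type, but each matching record allocates a fresh `Some` wrapper, whereas the pre-allocated `Seq(row)` in the patch avoids any per-record allocation.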