Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/13701#discussion_r73774422
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala ---
@@ -199,6 +209,19 @@ private[sql] case class FileSourceScanExec(
          options = relation.options,
          hadoopConf =
            relation.sparkSession.sessionState.newHadoopConfWithOptions(relation.options))
+      (file: PartitionedFile) => {
+        val iter = func(file)
+        // Only for test purpose.
+        // Once the vectorized Parquet reader is initialized in the above method, we can read its
+        // variable numRowGroups.
+        if (fileFormat != null) {
--- End diff ---
hmm, VectorizedParquetRecordReader is not exposed outside of ParquetFileFormat. It is
wrapped in the anonymous function the format returns, which takes a PartitionedFile and
reads data from it. In order to pass it through to VectorizedParquetRecordReader, we might
need to change the current FileFormat API. Is it worth doing that for this?
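
To make the shape of the problem concrete, here is a minimal, self-contained Scala sketch. PartitionedFile, ToyVectorizedReader and ToyParquetFormat below are hypothetical stand-ins, not the actual Spark classes: the point is only that when a format hands back a `PartitionedFile => Iterator[...]` closure, the reader it creates internally (and therefore a field like `numRowGroups`) never escapes the format's scope, so surfacing it would mean changing what the builder returns.

```scala
// Hypothetical stand-in types; the real Spark classes and signatures differ.
case class PartitionedFile(path: String)

// Stand-in for a vectorized reader that knows numRowGroups once initialized.
class ToyVectorizedReader(file: PartitionedFile) {
  val numRowGroups: Int = 42 // would come from the Parquet footer in reality
  def rows: Iterator[String] = Iterator(s"row from ${file.path}")
}

object ToyParquetFormat {
  // Mirrors the contract in spirit: the caller only receives a closure that
  // produces an iterator, so the reader instance stays hidden in this scope.
  def buildReader(): PartitionedFile => Iterator[String] =
    (file: PartitionedFile) => {
      val reader = new ToyVectorizedReader(file) // created per file, never returned
      reader.rows                                // only the iterator escapes
    }
}

object Demo extends App {
  val func = ToyParquetFormat.buildReader()
  val iter = func(PartitionedFile("part-00000.parquet"))
  iter.foreach(println)
  // reader.numRowGroups is unreachable here; exposing it would require changing
  // the return type (i.e. the FileFormat-style API surface), which is the trade-off
  // discussed above.
}
```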