srowen commented on a change in pull request #28484:
URL: https://github.com/apache/spark/pull/28484#discussion_r422679366
##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -144,13 +113,16 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
     String sparkRequestedSchemaString =
         configuration.get(ParquetReadSupport$.MODULE$.SPARK_ROW_REQUESTED_SCHEMA());
     this.sparkSchema = StructType$.MODULE$.fromString(sparkRequestedSchemaString);
-    this.reader = new ParquetFileReader(
-        configuration, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
+    this.reader = new ParquetFileReader(HadoopInputFile.fromPath(file, configuration),
+        HadoopReadOptions.builder(configuration).build());
Review comment:
I think the point is: don't change this until Parquet 1.11 is required; this should be part of that change. Are there any changes here that definitely work with the version of Parquet that Spark currently uses?
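
For reference, the `+` lines in the diff switch from the deprecated multi-argument ParquetFileReader constructor to the InputFile/ParquetReadOptions-based one. A minimal, self-contained sketch of that construction (the class name and `open` helper are illustrative, not from the PR):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.HadoopReadOptions;
    import org.apache.parquet.ParquetReadOptions;
    import org.apache.parquet.hadoop.ParquetFileReader;
    import org.apache.parquet.hadoop.util.HadoopInputFile;

    public class ParquetReaderSketch {
      // Illustrative helper (not part of the PR): builds the reader via the
      // InputFile/ParquetReadOptions constructor that the diff switches to,
      // replacing the deprecated (Configuration, FileMetaData, Path, blocks,
      // columns) form removed in the `-` lines above.
      static ParquetFileReader open(Path file, Configuration configuration) throws IOException {
        ParquetReadOptions options = HadoopReadOptions.builder(configuration).build();
        return new ParquetFileReader(HadoopInputFile.fromPath(file, configuration), options);
      }
    }

Whether this newer constructor is available in the Parquet version Spark currently builds against is exactly the question the comment above raises.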