sunchao commented on a change in pull request #29542:
URL: https://github.com/apache/spark/pull/29542#discussion_r574167026
##########
File path:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -199,12 +151,21 @@ public void initialize(InputSplit inputSplit,
TaskAttemptContext taskAttemptCont
*/
protected void initialize(String path, List<String> columns) throws
IOException {
Configuration config = new Configuration();
- config.set("spark.sql.parquet.binaryAsString", "false");
- config.set("spark.sql.parquet.int96AsTimestamp", "false");
+ config.setBoolean(SQLConf.PARQUET_BINARY_AS_STRING().key(), false);
+ config.setBoolean(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key(), false);
this.file = new Path(path);
long length =
this.file.getFileSystem(config).getFileStatus(this.file).getLen();
- ParquetMetadata footer = readFooter(config, file, range(0, length));
+ ParquetReadOptions options = HadoopReadOptions
+ .builder(config)
+ .withRange(0, length)
+ .build();
+
+ ParquetMetadata footer;
+ try (ParquetFileReader reader = ParquetFileReader
Review comment:
@HyukjinKwon @srowen I've run the `FilterPushdownBenchmark` again with
this PR and I don't see much difference before and after. I've put the result
[here](https://gist.github.com/sunchao/75cc9e966108bce4353818ce1b59d200) in
case you are curious.
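
For context, the quoted diff is replacing the deprecated static `readFooter(...)` call with the `ParquetReadOptions`-based API. A minimal sketch of that pattern follows (assuming parquet-hadoop 1.11+ on the classpath; the class name `FooterReadSketch` is illustrative and not from this PR, and the quoted hunk is truncated so the exact PR code may differ):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class FooterReadSketch {
  // Read a Parquet footer without the deprecated
  // ParquetFileReader.readFooter(Configuration, Path, MetadataFilter).
  public static ParquetMetadata readFooter(Configuration conf, Path file, long length)
      throws IOException {
    // withRange(0, length) restricts the footer to row groups whose
    // starting offset falls in [0, length), mirroring range(0, length)
    // in the old API.
    ParquetReadOptions options = HadoopReadOptions
        .builder(conf)
        .withRange(0, length)
        .build();
    // try-with-resources closes the underlying input stream once the
    // footer has been read.
    try (ParquetFileReader reader =
             ParquetFileReader.open(HadoopInputFile.fromPath(file, conf), options)) {
      return reader.getFooter();
    }
  }
}
```

One design note: the try-with-resources block in the diff exists because `ParquetFileReader.open(...)` holds an open stream, whereas the old static `readFooter` opened and closed the file internally.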
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]