srowen commented on a change in pull request #29542:
URL: https://github.com/apache/spark/pull/29542#discussion_r574561500
##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -199,12 +151,21 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
*/
protected void initialize(String path, List<String> columns) throws IOException {
Configuration config = new Configuration();
- config.set("spark.sql.parquet.binaryAsString", "false");
- config.set("spark.sql.parquet.int96AsTimestamp", "false");
+ config.setBoolean(SQLConf.PARQUET_BINARY_AS_STRING().key(), false);
+ config.setBoolean(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key(), false);
this.file = new Path(path);
long length = this.file.getFileSystem(config).getFileStatus(this.file).getLen();
- ParquetMetadata footer = readFooter(config, file, range(0, length));
+ ParquetReadOptions options = HadoopReadOptions
+ .builder(config)
+ .withRange(0, length)
+ .build();
+
+ ParquetMetadata footer;
+ try (ParquetFileReader reader = ParquetFileReader
Review comment:
I'm just eyeballing the diff, and most cases are about the same, but
there seem to be a number of cases where this brings a 10-20% improvement, like
the InSet -> InFilter tests. Seems worthwhile to commit if tests pass, and they
seem to.
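For readers following the diff, the pattern it adopts (opening the footer via `ParquetFileReader` with `ParquetReadOptions` instead of the deprecated static `readFooter`) can be sketched roughly as below. This is a hedged illustration against the parquet-hadoop API, not the exact code in the PR; the class name `FooterReadSketch` and the standalone helper method are hypothetical.

```java
// Sketch of the non-deprecated footer-reading path (assumes parquet-hadoop
// and hadoop-common on the classpath; not the exact code in this PR).
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public final class FooterReadSketch {

  /** Reads the Parquet footer for the whole file range using the newer reader API. */
  static ParquetMetadata readFooter(Configuration config, Path file) throws IOException {
    long length = file.getFileSystem(config).getFileStatus(file).getLen();

    // Same [0, length) range the old static readFooter(config, file, range(...)) call used.
    ParquetReadOptions options = HadoopReadOptions
        .builder(config)
        .withRange(0, length)
        .build();

    // try-with-resources closes the reader once the footer has been captured.
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(file, config), options)) {
      return reader.getFooter();
    }
  }
}
```

The try-with-resources block is the main behavioral difference from the old static call: the reader owns the stream, so the footer must be copied out before the block exits.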
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]