sunchao commented on a change in pull request #29542:
URL: https://github.com/apache/spark/pull/29542#discussion_r568255088
##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -92,67 +88,23 @@
public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
throws IOException, InterruptedException {
Configuration configuration = taskAttemptContext.getConfiguration();
- ParquetInputSplit split = (ParquetInputSplit)inputSplit;
+ FileSplit split = (FileSplit) inputSplit;
this.file = split.getPath();
- long[] rowGroupOffsets = split.getRowGroupOffsets();
-
- ParquetMetadata footer;
- List<BlockMetaData> blocks;
- // if task.side.metadata is set, rowGroupOffsets is null
- if (rowGroupOffsets == null) {
- // then we need to apply the predicate push down filter
- footer = readFooter(configuration, file, range(split.getStart(), split.getEnd()));
- MessageType fileSchema = footer.getFileMetaData().getSchema();
- FilterCompat.Filter filter = getFilter(configuration);
- blocks = filterRowGroups(filter, footer.getBlocks(), fileSchema);
- } else {
Review comment:
Sorry for the late response @HyukjinKwon.
Hmm, how likely is it that we will keep this code even if we move to parquet-mr?
The `ParquetInputSplit` this code depends on is already deprecated and will be
removed in 2.0. Also, with `rowGroupOffsets` it requires the client (as opposed
to the tasks) to parse the Parquet row group metadata, which could get very
expensive (and which is the motivation for parquet-mr switching to `FileSplit`).
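For illustration, here is a minimal sketch (not the exact code in this PR) of
the `FileSplit`-based path: each task opens the file itself and reads the footer
restricted to its own byte range, so nothing outside the task has to parse row
group metadata up front. `readFooterForSplit` is a hypothetical helper name:

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.hadoop.util.HadoopInputFile;

class TaskSideFooterRead {
  // Hypothetical helper, for illustration only.
  static void readFooterForSplit(FileSplit split, Configuration conf) throws IOException {
    // Restrict the footer read to this task's byte range.
    ParquetReadOptions options = HadoopReadOptions
        .builder(conf)
        .withRange(split.getStart(), split.getStart() + split.getLength())
        .build();
    try (ParquetFileReader reader = ParquetFileReader.open(
        HadoopInputFile.fromPath(split.getPath(), conf), options)) {
      ParquetMetadata footer = reader.getFooter();
      // parquet-mr keeps only the row groups whose midpoints fall in the range.
      List<BlockMetaData> blocks = footer.getBlocks();
      System.out.println("Row groups in this split: " + blocks.size());
    }
  }
}
```

Since parquet-mr prunes row groups by range internally, the explicit
`rowGroupOffsets` handling in the old code can go away.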
##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -199,12 +151,21 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
*/
protected void initialize(String path, List<String> columns) throws IOException {
Configuration config = new Configuration();
- config.set("spark.sql.parquet.binaryAsString", "false");
- config.set("spark.sql.parquet.int96AsTimestamp", "false");
+ config.setBoolean(SQLConf.PARQUET_BINARY_AS_STRING().key(), false);
+ config.setBoolean(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key(), false);
this.file = new Path(path);
long length = this.file.getFileSystem(config).getFileStatus(this.file).getLen();
- ParquetMetadata footer = readFooter(config, file, range(0, length));
+ ParquetReadOptions options = HadoopReadOptions
+ .builder(config)
+ .withRange(0, length)
+ .build();
+
+ ParquetMetadata footer;
+ try (ParquetFileReader reader = ParquetFileReader
Review comment:
Sure, I can do that. I ran `FilterPushdownBenchmark` before (see the comments
above) and didn't see a regression. I'll rerun it with the new Parquet version.
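For reference, a hedged sketch of how the truncated `try`-with-resources above
plausibly completes: `ParquetFileReader.open(InputFile, ParquetReadOptions)`
replaces the removed `readFooter(...)` call. Opening via `HadoopInputFile` is
my assumption here; the actual code in the PR may differ:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.HadoopReadOptions;
import org.apache.parquet.ParquetReadOptions;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;
import org.apache.parquet.hadoop.util.HadoopInputFile;

class FooterReadSketch {
  // Assumed completion of the hunk above, not the PR's verbatim code.
  static ParquetMetadata readFooter(Path file, Configuration config) throws IOException {
    long length = file.getFileSystem(config).getFileStatus(file).getLen();
    ParquetReadOptions options = HadoopReadOptions
        .builder(config)
        .withRange(0, length)
        .build();
    try (ParquetFileReader reader = ParquetFileReader.open(
        HadoopInputFile.fromPath(file, config), options)) {
      // Closing the reader releases the input stream; the footer stays usable.
      return reader.getFooter();
    }
  }
}
```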