sunchao commented on a change in pull request #34659:
URL: https://github.com/apache/spark/pull/34659#discussion_r779210735
##########
File path: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
##########
@@ -103,9 +104,14 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
     fileReader.setRequestedSchema(requestedSchema);
     String sparkRequestedSchemaString =
       configuration.get(ParquetReadSupport$.MODULE$.SPARK_ROW_REQUESTED_SCHEMA());
-    this.sparkSchema = StructType$.MODULE$.fromString(sparkRequestedSchemaString);
+    StructType sparkRequestedSchema = StructType$.MODULE$.fromString(sparkRequestedSchemaString);
+    ParquetToSparkSchemaConverter converter = new ParquetToSparkSchemaConverter(configuration);
+    this.parquetColumn = converter.convertParquetColumn(requestedSchema,
Review comment:
Thanks, yes I'm aware of the other one: we should not need to allocate
the vectors for definition & repetition levels when the schema is flat. I'm
hoping to address this separately with another PR though - don't want to make
this one too bloated :)
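
   To illustrate the idea: a minimal, hypothetical sketch of the flat-schema check (the class and method names are illustrative, not the actual Spark code). A top-level column with no repetition (max repetition level 0) and at most one level of optionality (max definition level <= 1) can represent nullability with a null bitmap alone, so the definition/repetition level vectors need not be allocated:

   ```java
   // Hypothetical sketch: decide whether def/rep level vectors can be
   // skipped for a flat (non-nested) Parquet column.
   public class FlatSchemaCheck {
     // Level vectors are only required when the column is repeated
     // (maxRepetitionLevel > 0) or nested inside optional groups
     // (maxDefinitionLevel > 1); a top-level primitive needs neither.
     public static boolean needsLevelVectors(int maxDefinitionLevel, int maxRepetitionLevel) {
       return maxRepetitionLevel > 0 || maxDefinitionLevel > 1;
     }

     public static void main(String[] args) {
       // required INT32 at top level: no vectors needed
       System.out.println(needsLevelVectors(0, 0)); // false
       // optional INT32 at top level: nullability fits in a null bitmap
       System.out.println(needsLevelVectors(1, 0)); // false
       // optional field inside an optional group: def levels needed
       System.out.println(needsLevelVectors(2, 0)); // true
       // repeated element (e.g. array<int>): rep levels needed
       System.out.println(needsLevelVectors(3, 1)); // true
     }
   }
   ```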
BTW @bersprockets : how can I reproduce the 10-15% performance penalty? I
was using the above code snippet and got almost the same numbers on my machine
with the latest fix.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]