GitHub user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14278#discussion_r71671818
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -136,7 +137,9 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
     ReadSupport.ReadContext readContext = readSupport.init(new InitContext(
         taskAttemptContext.getConfiguration(), toSetMultiMap(fileMetadata), fileSchema));
     this.requestedSchema = readContext.getRequestedSchema();
-    this.sparkSchema = new ParquetSchemaConverter(configuration).convert(requestedSchema);
+    String sparkRequestedSchemaString =
+        configuration.get(ParquetReadSupport$.MODULE$.SPARK_ROW_REQUESTED_SCHEMA());
+    this.sparkSchema = StructType$.MODULE$.fromString(sparkRequestedSchemaString);
--- End diff --
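
[Editor's note] The substance of the change is in the two `+` lines: instead of deriving the Catalyst schema from the Parquet footer via `ParquetSchemaConverter`, the reader now deserializes the Catalyst schema that was stored in the Hadoop configuration. Below is a minimal sketch of that serialize/deserialize round trip, assuming the value stored under `SPARK_ROW_REQUESTED_SCHEMA` is the JSON form produced by `StructType.json()` (which `StructType.fromString` accepts); the class name `SchemaRoundTrip` is made up for illustration.

```java
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataType$;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

// Hypothetical driver class, not part of Spark.
public class SchemaRoundTrip {
  public static void main(String[] args) {
    // The Catalyst schema the scan is expected to produce.
    StructType requested = new StructType(new StructField[]{
        new StructField("id", DataTypes.LongType, false, Metadata.empty()),
        new StructField("name", DataTypes.StringType, true, Metadata.empty())
    });

    // ParquetReadSupport serializes the requested schema into the Hadoop
    // configuration as a JSON string; StructType.json() yields that form.
    String serialized = requested.json();

    // The record reader side parses it back, much like the new lines in
    // the diff do via StructType.fromString.
    DataType restored = DataType$.MODULE$.fromJson(serialized);
    System.out.println(restored.equals(requested));  // prints: true
  }
}
```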
Actually it's safer when the Parquet requested schema conforms to the
actual physical file being read. Normally we shouldn't care about logical
types (those with annotations) at the level of the Parquet record reader;
it's the upper-level engine's responsibility to convert physical types like
`int32` into logical types like `INT_8` and `INT_16`. The vectorized reader
has to mix the two levels because it needs to construct value vectors of
the proper types at this level.
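
[Editor's note] To make the physical/logical split concrete, here is a short sketch against the parquet-mr schema API: all three fields below are physically `int32` columns, and the parenthesized `(INT_8)`/`(INT_16)` annotations are only hints for the engine above the record reader to narrow the decoded values to byte/short. The class name `LogicalTypeSketch` is made up for illustration.

```java
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

// Hypothetical driver class, only for illustration.
public class LogicalTypeSketch {
  public static void main(String[] args) {
    // All three fields are physically int32; the annotations are
    // logical-type hints layered on top of the same physical type.
    MessageType schema = MessageTypeParser.parseMessageType(
        "message spark_schema {\n" +
        "  required int32 plain_int;\n" +
        "  required int32 tiny_int (INT_8);\n" +
        "  required int32 small_int (INT_16);\n" +
        "}");

    // At this layer only the physical type is visible: every column
    // below reports INT32 regardless of its annotation.
    schema.getColumns().forEach(c ->
        System.out.println(c.getPath()[0] + " -> " + c.getType()));
  }
}
```

Every line prints `INT32`, which is exactly why the plain record reader can stay annotation-agnostic while the vectorized reader, which must allocate byte/short column vectors up front, cannot.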