Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16578#discussion_r150261547
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadSupport.scala ---
@@ -63,9 +74,22 @@ private[parquet] class ParquetReadSupport extends ReadSupport[UnsafeRow] with Lo
         StructType.fromString(schemaString)
       }
-    val parquetRequestedSchema =
+    val clippedParquetSchema =
       ParquetReadSupport.clipParquetSchema(context.getFileSchema, catalystRequestedSchema)
+    val parquetRequestedSchema = if (parquetMrCompatibility) {
+      // Parquet-mr will throw an exception if we try to read a superset of the file's schema.
+      // Therefore, we intersect our clipped schema with the underlying file's schema
--- End diff --
This is interesting because, if we don't do nested pruning, reading with a superset Parquet schema like:
```
message spark_schema {
optional group name {
optional binary first (UTF8);
optional binary middle (UTF8);
optional binary last (UTF8);
}
optional binary address (UTF8);
}
```
won't cause any failure.
Once we perform nested pruning, the required Parquet schema becomes:
```
message spark_schema {
optional group name {
optional binary middle (UTF8);
}
optional binary address (UTF8);
}
```
Then if we don't remove the "group name" from the required schema, the
failure happens.
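For reference, a minimal sketch of the kind of intersection the diff describes, assuming parquet-mr's `GroupType` API (`containsField`, `getType`, `withNewFields`); the helper name `intersectParquetGroups` and the re-wrapping into a `MessageType` are illustrative, not necessarily what the PR implements:
```scala
import scala.collection.JavaConverters._

import org.apache.parquet.schema.{GroupType, MessageType}

// Hypothetical helper: keep only the fields of `requested` that also exist in
// `file`, recursing into nested groups. A group whose intersection ends up
// empty is dropped entirely rather than kept as an empty group.
def intersectParquetGroups(requested: GroupType, file: GroupType): Option[GroupType] = {
  val fields = requested.getFields.asScala
    .filter(field => file.containsField(field.getName))
    .flatMap {
      case group: GroupType =>
        val counterpart = file.getType(group.getName)
        // A group on one side and a primitive on the other cannot intersect.
        if (counterpart.isPrimitive) None
        else intersectParquetGroups(group, counterpart.asGroupType)
      case primitive => Some(primitive)
    }
  if (fields.nonEmpty) Some(requested.withNewFields(fields.asJava)) else None
}

// At the top level the result has to be a MessageType again; an empty
// intersection would need some fallback such as an empty message type.
// `clippedParquetSchema` and `context` are the names from the diff above.
def prunedSchema(clippedParquetSchema: MessageType, fileSchema: MessageType): Option[MessageType] =
  intersectParquetGroups(clippedParquetSchema, fileSchema)
    .map(g => new MessageType(g.getName, g.getFields))
```
Dropping a group whose intersection is empty is what would remove the `name` group from the required schema when the underlying file doesn't contain it.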
---