Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/16578#discussion_r148725914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadSupport.scala
---
@@ -63,9 +74,22 @@ private[parquet] class ParquetReadSupport extends ReadSupport[UnsafeRow] with Lo
StructType.fromString(schemaString)
}
- val parquetRequestedSchema =
+ val clippedParquetSchema =
ParquetReadSupport.clipParquetSchema(context.getFileSchema, catalystRequestedSchema)
+ val parquetRequestedSchema = if (parquetMrCompatibility) {
+ // Parquet-mr will throw an exception if we try to read a superset of the file's schema.
+ // Therefore, we intersect our clipped schema with the underlying file's schema
--- End diff --
We can request a read of a superset of a file's fields in the case of a
partitioned table whose partitions contain only a subset of the table's fields. See my
related comment and example in `ParquetRowConverter.scala`.
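
For illustration, here is a minimal, self-contained sketch of that scenario (the path, object name, column names, and data below are hypothetical, not taken from this PR):

```scala
import org.apache.spark.sql.SparkSession

object SupersetReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("superset-read").getOrCreate()
    import spark.implicits._

    val path = "/tmp/superset_read_example"

    // Partition p=1 is written with data column `a` only.
    Seq((1, 1)).toDF("a", "p").write.partitionBy("p").parquet(path)

    // Partition p=2 is written later, after column `b` was added to the table.
    Seq((2, 2, 2)).toDF("a", "b", "p").write.mode("append").partitionBy("p").parquet(path)

    // The merged table schema is (a, b, p). Scanning partition p=1 therefore asks its
    // Parquet files for column `b`, a superset of what those files actually contain.
    spark.read.option("mergeSchema", "true").parquet(path).show()

    spark.stop()
  }
}
```

Requesting `b` from the files under `p=1` is exactly the kind of superset read that intersecting the clipped schema with the file's schema would suppress.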
---