cashmand commented on code in PR #49234:
URL: https://github.com/apache/spark/pull/49234#discussion_r1896772605
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetWriteSupport.scala:
##########
@@ -95,7 +100,19 @@ class ParquetWriteSupport extends WriteSupport[InternalRow]
   with Logging {
   override def init(configuration: Configuration): WriteContext = {
     val schemaString = configuration.get(ParquetWriteSupport.SPARK_ROW_SCHEMA)
+    val shreddedSchemaString =
+      configuration.get(ParquetWriteSupport.SPARK_VARIANT_SHREDDING_SCHEMA)
     this.schema = StructType.fromString(schemaString)
+    // If shreddedSchemaString is provided, we use that everywhere in the writer, except for
+    // setting the spark schema in the Parquet metadata. If it isn't provided, it means that there
+    // are no shredded Variant columns, so it is identical to this.schema.
+    this.shreddedSchema = if (shreddedSchemaString == null) {
+      this.schema
+    } else {
+      val v = StructType.fromString(shreddedSchemaString)
+      // A bit awkwardly, the schema string doesn't include metadata to identify which struct
Review Comment:
Oh, thanks. I had looked at the `json` method in StructType, and forgot that
the metadata is in StructField, not StructType, so I missed that it is writing
the metadata there. Let me clean this up.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]