cashmand commented on code in PR #49234:
URL: https://github.com/apache/spark/pull/49234#discussion_r1890821529


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetWriteSupport.scala:
##########
@@ -95,7 +100,19 @@ class ParquetWriteSupport extends WriteSupport[InternalRow] with Logging {
 
   override def init(configuration: Configuration): WriteContext = {
     val schemaString = configuration.get(ParquetWriteSupport.SPARK_ROW_SCHEMA)
+    val shreddedSchemaString = configuration.get(ParquetWriteSupport.SPARK_VARIANT_SHREDDING_SCHEMA)
     this.schema = StructType.fromString(schemaString)
+    // If shreddedSchemaString is provided, we use it everywhere in the writer, except for
+    // setting the Spark schema in the Parquet metadata. If it isn't provided, it means that
+    // there are no shredded Variant columns, so it is identical to this.schema.
+    this.shreddedSchema = if (shreddedSchemaString == null) {
+      this.schema
+    } else {
+      val v = StructType.fromString(shreddedSchemaString)
+      // A bit awkwardly, the schema string doesn't include metadata to identify which struct

Review Comment:
   I'm open to suggestions, but from reading the code, it didn't look like the metadata could be passed through the DDL representation, and I couldn't think of an alternative way to identify the shredding schema for each Variant field. Technically, for this PR alone I could have gotten away with constructing the shredded schema directly from the SQLConf in this code, but in the future I think we'll want to derive it from table properties and/or data buffered in the task, and that logic wouldn't make sense here.
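   The limitation referenced above can be sketched as follows (a minimal illustration, assuming Spark is on the classpath; the `"isShredded"` metadata key is hypothetical, standing in for whatever marker would identify a shredded Variant field). A `StructType`'s DDL form carries only field names, types, nullability, and comments, so a round trip through `toDDL`/`fromDDL` drops any custom field metadata:

```scala
import org.apache.spark.sql.types._

// Attach custom metadata to a field, as a shredding marker might be.
// "isShredded" is a hypothetical key used only for illustration.
val meta = new MetadataBuilder().putBoolean("isShredded", true).build()
val schema = StructType(Seq(StructField("v", StringType, nullable = true, meta)))

// The metadata is present on the original schema...
assert(schema.head.metadata.getBoolean("isShredded"))

// ...but the DDL string (roughly "v STRING") has nowhere to encode it,
// so parsing the DDL back yields a field with empty metadata.
val roundTripped = StructType.fromDDL(schema.toDDL)
assert(roundTripped.head.metadata == Metadata.empty)
```

   (The JSON serialization of `StructType` does preserve field metadata, which is why a schema passed as DDL specifically cannot carry the shredding information.)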



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
