cashmand commented on code in PR #49234:
URL: https://github.com/apache/spark/pull/49234#discussion_r1894360846
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -4644,6 +4644,22 @@ object SQLConf {
       .booleanConf
       .createWithDefault(false)
+  val VARIANT_WRITE_SHREDDING_ENABLED =
+    buildConf("spark.sql.variant.writeShredding.enabled")
+      .internal()
+      .doc("When true, the Parquet writer is allowed to write shredded variant.")
Review Comment:
The intent of this conf is to serve as a global kill switch for the write
shredding feature; a write-specific conf would then control how we actually
shred, e.g. determine the shredding by sampling data in the task, or specify
a shredding schema for each Variant column.
Right now, the conf only has an effect if
`VARIANT_FORCE_SHREDDING_SCHEMA_FOR_TEST` is also set to a non-empty value. I
could remove it if we don't think a global kill switch like this has any
value. @cloud-fan, any opinion?
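
A minimal sketch of the gating described above, assuming
`VARIANT_FORCE_SHREDDING_SCHEMA_FOR_TEST` is a string conf holding a
DDL-formatted schema; the helper name and the parsing step are illustrative,
not the PR's actual code:

```scala
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types.StructType

// Hypothetical helper, not the PR's code. Returns the shredding schema to use
// for a write, or None when shredding should be skipped.
def shreddingSchemaForWrite(conf: SQLConf): Option[StructType] = {
  // Global kill switch: when disabled, never shred, regardless of other confs.
  if (!conf.getConf(SQLConf.VARIANT_WRITE_SHREDDING_ENABLED)) {
    None
  } else {
    // Today the only source of a shredding schema is the test conf; a future
    // write-specific conf (e.g. sampling-based) would plug in here instead.
    // Assumes the test conf is a string conf holding a DDL-formatted schema.
    val forced = conf.getConf(SQLConf.VARIANT_FORCE_SHREDDING_SCHEMA_FOR_TEST)
    if (forced.isEmpty) None else Some(StructType.fromDDL(forced))
  }
}
```

Keeping the kill switch as the outer check would let a single conf disable
shredding everywhere, however the per-write schema ends up being chosen.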