szehon-ho commented on code in PR #53149:
URL: https://github.com/apache/spark/pull/53149#discussion_r2557499167
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -6686,15 +6686,27 @@ object SQLConf {
.booleanConf
.createWithDefault(true)
-  val MERGE_INTO_SOURCE_NESTED_TYPE_COERCION_ENABLED =
-    buildConf("spark.sql.merge.source.nested.type.coercion.enabled")
+  val MERGE_INTO_NESTED_TYPE_COERCION_ENABLED =
+    buildConf("spark.sql.merge.nested.type.coercion.enabled")
       .internal()
       .doc("If enabled, allow MERGE INTO to coerce source nested types if they have less " +
         "nested fields than the target table's nested types.")
       .version("4.1.0")
       .booleanConf
       .createWithDefault(true)
+  val MERGE_INTO_NESTED_TYPE_UPDATE_BY_FIELD =
+    buildConf("spark.sql.merge.nested.type.assign.by.field")
+      .internal()
+      .doc("If enabled and spark.sql.merge.source.nested.type.coercion.enabled is true, " +
+        "allow MERGE INTO with UPDATE SET * action to set nested structs field by field. " +
+        "In updated rows, target structs will preserve the original value for fields missing " +
+        "in the source struct. If disabled, the entire target struct will be replaced, " +
+        "and fields missing in the source struct will be null.")
Review Comment:
That sounds good. Yeah, if Spark usually treats struct fields as columns, then it
makes sense. MERGE INTO update assigning a struct value smaller than the target
struct is a new feature in 4.1, so let me change it before we release.
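
A minimal sketch of the behavior the doc string describes, for illustration only
(not code from the PR): the table names, schemas, and `USING iceberg` clause are
assumptions, and the target table's format must support MERGE INTO row-level
operations.

```scala
import org.apache.spark.sql.SparkSession

// Assumed: a SparkSession wired to a catalog whose tables support MERGE INTO
// row-level operations (e.g. Iceberg); plain file-source tables reject MERGE.
val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Target struct has fields (a, b); the source struct only has field a,
// which is the case the nested type coercion conf is about.
spark.sql("CREATE TABLE target (id INT, s STRUCT<a: INT, b: INT>) USING iceberg")
spark.sql("CREATE TABLE source (id INT, s STRUCT<a: INT>) USING iceberg")
spark.sql("INSERT INTO target VALUES (1, named_struct('a', 1, 'b', 2))")
spark.sql("INSERT INTO source VALUES (1, named_struct('a', 10))")

spark.sql(
  """MERGE INTO target t
    |USING source s
    |ON t.id = s.id
    |WHEN MATCHED THEN UPDATE SET *""".stripMargin)

// Per the doc string: with spark.sql.merge.nested.type.assign.by.field = true,
// the update is applied field by field, so the matched row preserves b and
// becomes (1, {a: 10, b: 2}); with it set to false, the whole struct is
// replaced and the missing field is nulled: (1, {a: 10, b: NULL}).
spark.table("target").show(truncate = false)
```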