dtenedor commented on code in PR #37430:
URL: https://github.com/apache/spark/pull/37430#discussion_r944812583
##########
sql/core/src/test/scala/org/apache/spark/sql/sources/InsertSuite.scala:
##########
@@ -863,25 +863,17 @@ class InsertSuite extends DataSourceTest with SharedSparkSession {
}
test("Allow user to insert specified columns into insertable view") {
-    withSQLConf(SQLConf.USE_NULLS_FOR_MISSING_DEFAULT_COLUMN_VALUES.key -> "true") {
- sql("INSERT OVERWRITE TABLE jsonTable SELECT a FROM jt")
- checkAnswer(
- sql("SELECT a, b FROM jsonTable"),
- (1 to 10).map(i => Row(i, null))
- )
-
- sql("INSERT OVERWRITE TABLE jsonTable(a) SELECT a FROM jt")
- checkAnswer(
- sql("SELECT a, b FROM jsonTable"),
- (1 to 10).map(i => Row(i, null))
- )
+ sql("INSERT OVERWRITE TABLE jsonTable(a) SELECT a FROM jt")
+ checkAnswer(
+ sql("SELECT a, b FROM jsonTable"),
+ (1 to 10).map(i => Row(i, null))
Review Comment:
As discussed offline, this is a change but not a breaking one: an
`insert into foo (a) values (1)` command that fails in Spark 3.3 now
succeeds, so it turns an error case into a successful case. This is part of
the intentional behavior change of the column DEFAULT project.
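
As a hedged illustration of the semantics under discussion (a standalone
sketch, not Spark's actual implementation; all names here are made up), the
new behavior can be modeled as: when an INSERT names only a subset of the
target columns, the unnamed columns are filled with NULL, shown here as
`None`:

```scala
// Minimal sketch of column-subset INSERT semantics: unnamed columns
// default to NULL (modeled as None). Illustrative only, not Spark code.
object InsertSketch {
  // Target table schema: column names in declaration order.
  val tableColumns: Seq[String] = Seq("a", "b")

  // Given the column list named in the INSERT statement and the supplied
  // values, produce a full-width row where unnamed columns become None.
  def padRow(insertColumns: Seq[String], values: Seq[Any]): Seq[Option[Any]] = {
    val provided = insertColumns.zip(values).toMap
    tableColumns.map(c => provided.get(c))
  }

  def main(args: Array[String]): Unit = {
    // Analogous to: INSERT INTO t(a) VALUES (1)  ->  row (1, NULL)
    println(padRow(Seq("a"), Seq(1)))
  }
}
```

Under these assumed semantics, the test's expected answer of
`Row(i, null)` for each inserted `a` value follows directly: column `b` was
never named, so it is padded with NULL.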
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]