HyukjinKwon commented on code in PR #38443:
URL: https://github.com/apache/spark/pull/38443#discussion_r1009014562


##########
sql/core/src/test/scala/org/apache/spark/sql/execution/command/v2/AlterTableAddPartitionSuite.scala:
##########
@@ -129,7 +129,9 @@ class AlterTableAddPartitionSuite
     withNamespaceAndTable("ns", "tbl") { t =>
       sql(s"CREATE TABLE $t (c int) $defaultUsing PARTITIONED BY (p int)")
 
-      withSQLConf(SQLConf.SKIP_TYPE_VALIDATION_ON_ALTER_PARTITION.key -> "true") {
+      withSQLConf(
+          SQLConf.SKIP_TYPE_VALIDATION_ON_ALTER_PARTITION.key -> "true",
+          SQLConf.ANSI_ENABLED.key -> "false") {

Review Comment:
   @cloud-fan @gengliangwang @ulysses-you, this is a temporary fix to recover the test case.
   
   The root cause here seems to be that V2 does not respect `spark.sql.storeAssignmentPolicy`:
   
   https://github.com/apache/spark/blob/1dd0ca23f64acfc7a3dc697e19627a1b74012a2d/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolvePartitionSpec.scala#L81
   
   It uses a plain cast instead, which is controlled by the ANSI flag.
   
   V1, on the other hand, seems to use the cast from `PartitioningUtils.castPartitionSpec`, which is independent of `spark.sql.storeAssignmentPolicy`. I think we should use the same cast in both paths, but I made this fix first so the test passes, in case there is more to discuss here.
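   The behavioral difference can be sketched in plain Scala (a hypothetical illustration with made-up names, not Spark's actual cast code): under ANSI semantics an invalid partition value makes the cast fail, whereas the legacy cast silently produces null.
   
   ```scala
   // Hypothetical sketch, NOT Spark internals: models how a partition value
   // like "aaa" for an int partition column behaves under the two cast modes.
   object CastPolicySketch {
     // ANSI-style cast: invalid input raises an error.
     def ansiCastToInt(s: String): Int =
       s.toIntOption.getOrElse(
         throw new NumberFormatException(s"invalid input for type int: '$s'"))
   
     // Legacy (non-ANSI) cast: invalid input becomes null (modeled as None).
     def legacyCastToInt(s: String): Option[Int] = s.toIntOption
   
     def main(args: Array[String]): Unit = {
       println(legacyCastToInt("aaa")) // None: legacy cast tolerates bad input
       println(legacyCastToInt("42"))  // Some(42)
       println(ansiCastToInt("42"))    // 42
       // ansiCastToInt("aaa") would throw, matching ANSI_ENABLED=true behavior
     }
   }
   ```
   
   This is why forcing `ANSI_ENABLED` to `false` in the test makes the out-of-range partition value pass again: the plain cast falls back to the tolerant behavior.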
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
