aokolnychyi commented on a change in pull request #31835:
URL: https://github.com/apache/spark/pull/31835#discussion_r594757442



##########
File path: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/AnalysisSuite.scala
##########
@@ -671,6 +671,19 @@ class AnalysisSuite extends AnalysisTest with Matchers {
       Project(Seq(UnresolvedAttribute("temp0.a"), UnresolvedAttribute("temp1.a")), join))
   }
 
+  test("SPARK-34741: Avoid ambiguous reference in MergeIntoTable") {
+    val cond = 'a > 1
+    assertAnalysisError(

Review comment:
       I am not sure I got it. Spark resolves the keys in UPDATE assignments using only the target table, and the values using both the target and the source table. If we encounter `UPDATE SET a = a`, it seems reasonable to get an exception, as the value is indeed ambiguous.
   
   But how does the dedup step help us here? The test below in `ReplaceNullWithFalseInPredicateSuite` covers a case like `UPDATE SET a = source.a`, which is not ambiguous. Will that test succeed even without the dedup logic added in this PR?
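   To make the two cases concrete, here is a sketch with hypothetical `target` and `source` tables that both have columns `id` and `a` (these table and column names are illustrative, not taken from the PR):

   ```sql
   -- Ambiguous: the assignment value `a` is unqualified, and both the
   -- target and the source expose a column `a`, so resolving the value
   -- against target + source should raise an ambiguous-reference error.
   MERGE INTO target
   USING source
   ON target.id = source.id
   WHEN MATCHED THEN UPDATE SET a = a;

   -- Not ambiguous: the value is qualified with the source table, as in
   -- the `ReplaceNullWithFalseInPredicateSuite` case mentioned above.
   MERGE INTO target
   USING source
   ON target.id = source.id
   WHEN MATCHED THEN UPDATE SET a = source.a;
   ```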




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
