mihailom-db commented on code in PR #48585:
URL: https://github.com/apache/spark/pull/48585#discussion_r1812149237


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala:
##########
@@ -335,12 +343,30 @@ abstract class TypeCoercionBase {
      // Return the result after the widen data types have been found for all the children
       if (attrIndex >= children.head.output.length) return castedTypes.toSeq
 
-      // For the attrIndex-th attribute, find the widest type
-      val widenTypeOpt = findWiderCommonType(children.map(_.output(attrIndex).dataType))
-      castedTypes.enqueue(widenTypeOpt)
+      val outputType = findOutputType(children, attrIndex)
+      castedTypes.enqueue(outputType)
       getWidestTypes(children, attrIndex + 1, castedTypes)
     }
 
+    /** Given children of the operator, determines the output type of the
+     * `attrIndex`-th attribute.
+     */
+    private def findOutputType(children: Seq[LogicalPlan], attrIndex: Int): Option[DataType] = {

Review Comment:
   I find this weird. So far we have always special-cased expressions in CollationTypeCasts, and as @cloud-fan noted, the shared understanding is that collations are only touched in that rule. This change means we do the same work twice. I would say it is better to move the widening of collations into CollationTypeCasts by special-casing the set operations there (Except, Union, Intersect); it is only three operators, and it keeps the code in a cleaner state (see the sketch below). What do you guys think? @stefankandic @vladanvasi-db 
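
   For concreteness, a minimal sketch (not the actual implementation) of what that special-casing could look like inside CollationTypeCasts. `findWiderCollation`, `widenCollations`, and the widening policy here are hypothetical stand-ins for whatever helpers the rule would really use; the point is only that the per-attribute collation decision stays confined to the one rule that already owns collation handling:

   ```scala
   import org.apache.spark.sql.catalyst.expressions.{Alias, Cast}
   import org.apache.spark.sql.catalyst.plans.logical.{Except, Intersect, LogicalPlan, Project, Union}
   import org.apache.spark.sql.catalyst.rules.Rule
   import org.apache.spark.sql.types.{DataType, StringType}

   // Sketch only: special-case the three set operations so that collation
   // widening lives in CollationTypeCasts rather than in TypeCoercion.
   object CollationWideningSketch extends Rule[LogicalPlan] {

     override def apply(plan: LogicalPlan): LogicalPlan = plan resolveOperators {
       case s @ (_: Except | _: Intersect | _: Union) if s.childrenResolved =>
         s.withNewChildren(widenCollations(s.children))
     }

     // For each attribute position, pick a common string type across all
     // children and insert casts where a child's output disagrees with it.
     private def widenCollations(children: Seq[LogicalPlan]): Seq[LogicalPlan] = {
       val targetTypes: Seq[Option[DataType]] =
         children.head.output.indices.map { i =>
           findWiderCollation(children.map(_.output(i).dataType))
         }
       children.map { child =>
         val projectList = child.output.zip(targetTypes).map {
           case (attr, Some(st: StringType)) if attr.dataType != st =>
             Alias(Cast(attr, st), attr.name)()
           case (attr, _) => attr
         }
         if (projectList == child.output) child else Project(projectList, child)
       }
     }

     // Hypothetical placeholder policy: when every child attribute is a string
     // type, take the first child's type; the real rule would apply its own
     // collation-precedence logic here.
     private def findWiderCollation(types: Seq[DataType]): Option[DataType] =
       if (types.forall(_.isInstanceOf[StringType])) types.headOption else None
   }
   ```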



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

