Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10577#discussion_r49544284
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala ---
@@ -200,41 +200,60 @@ object HiveTypeCoercion {
     */
   object WidenSetOperationTypes extends Rule[LogicalPlan] {
-    private[this] def widenOutputTypes(
-        planName: String,
-        left: LogicalPlan,
-        right: LogicalPlan): (LogicalPlan, LogicalPlan) = {
-      require(left.output.length == right.output.length)
-
-      val castedTypes = left.output.zip(right.output).map {
-        case (lhs, rhs) if lhs.dataType != rhs.dataType =>
-          findWiderTypeForTwo(lhs.dataType, rhs.dataType)
-        case other => None
+    private def widenOutputTypes(children: Seq[LogicalPlan]): Seq[LogicalPlan] = {
+      require(children.forall(_.output.length == children.head.output.length))
+
+      // Get a sequence of data types, each of which is the widest type of this specific attribute
+      // in all the children
+      val castedTypes: Seq[Option[DataType]] = {
--- End diff --
Instead of doing it in batch, can we iterate over the children multiple times and handle one column each time? We can stop iterating as soon as we find a column that is not type-coercible across all the children. Also, we should not touch the original plans at all if any column is not type-coercible.