Github user mgaido91 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21403#discussion_r190217120
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -45,6 +46,10 @@ object RewritePredicateSubquery extends Rule[LogicalPlan] with PredicateHelper {
  private def getValueExpression(e: Expression): Seq[Expression] = {
    e match {
      case cns : CreateNamedStruct => cns.valExprs
+     case Literal(struct: InternalRow, dt: StructType) if dt.isInstanceOf[StructType] =>
+       dt.zipWithIndex.map { case (field, idx) => Literal(struct.get(idx, field.dataType)) }
+     case a @ AttributeReference(_, dt: StructType, _, _) =>
--- End diff ---
I see. Then I think the example reported in the JIRA should be considered an invalid query, since the number of elements in the outer value differs from the number of columns returned by the subquery. So we should throw an AnalysisException in that case. Do you agree with this approach?
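
To illustrate the check being proposed, here is a minimal, self-contained sketch (not the actual Spark code; `AritySketch`, `checkInSubqueryArity`, and the simplified `AnalysisException` stand-in are hypothetical names used only for this example). It compares the number of value expressions on the outer side of an IN subquery with the number of columns the subquery produces, and fails analysis when they differ:

```scala
// Hypothetical sketch of the arity check proposed above. Spark's real
// Expression and AnalysisException types are replaced with simple stand-ins
// so the example compiles on its own.
object AritySketch {
  // Simplified stand-in for org.apache.spark.sql.AnalysisException.
  final case class AnalysisException(message: String) extends Exception(message)

  // outerValues: the flattened value expressions on the outer side of the IN.
  // subqueryOutput: the output columns of the subquery plan.
  def checkInSubqueryArity(outerValues: Seq[String], subqueryOutput: Seq[String]): Unit = {
    if (outerValues.length != subqueryOutput.length) {
      throw AnalysisException(
        s"IN subquery arity mismatch: outer side has ${outerValues.length} " +
        s"value(s) but the subquery returns ${subqueryOutput.length} column(s)")
    }
  }
}
```

With this shape, a query like the one in the JIRA (one outer value against a two-column subquery) would surface an AnalysisException at analysis time instead of producing a rewritten plan with mismatched arities.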
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]