mgaido91 commented on a change in pull request #24442: [SPARK-27547][SQL] fix
DataFrame self-join problems
URL: https://github.com/apache/spark/pull/24442#discussion_r278034421
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##########
@@ -876,7 +885,30 @@ class Dataset[T] private[sql](
* @since 2.0.0
*/
def join(right: Dataset[_]): DataFrame = withPlan {
- Join(logicalPlan, right.logicalPlan, joinType = Inner, None, JoinHint.NONE)
+ val (joinLeft, joinRight) = prepareJoinPlan(this, right)
+ Join(joinLeft, joinRight, joinType = Inner, None, JoinHint.NONE)
+ }
+
+ // Called by `Dataset#join`, to attach the Dataset id to the logical plan, so that we
+ // can resolve column reference correctly later. See `ResolveDatasetColumnReference`.
+ private def createPlanWithDatasetId(): LogicalPlan = {
+ if (!sparkSession.sessionState.conf.getConf(SQLConf.RESOLVE_DATASET_COLUMN_REFERENCE)) {
+ return logicalPlan
+ }
+
+ // The alias should start with `SubqueryAlias.HIDDEN_ALIAS_PREFIX`, so that `SubqueryAlias` can
+ // recognize it and keep the output qualifiers unchanged.
+ SubqueryAlias(s"${SubqueryAlias.HIDDEN_ALIAS_PREFIX}${Dataset.ID_PREFIX}_$id", logicalPlan)
Review comment:
I think that could be an option. For the moment we could add it only to the child/children of a join, since we only need it there. But I see there is no guarantee that the plan(s) are not replaced or removed during the analysis/optimization phase, so it may not be doable after all.
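
To make that option concrete, here is a rough sketch (not part of this PR) of what tagging only the two join children could look like. `JoinTaggingSketch`, `tagWithDatasetId`, and the two constant values are hypothetical stand-ins; only `SubqueryAlias`, `Join`, `Inner`, and `JoinHint.NONE` come from the actual code above.

```scala
import org.apache.spark.sql.catalyst.plans.Inner
import org.apache.spark.sql.catalyst.plans.logical.{Join, JoinHint, LogicalPlan, SubqueryAlias}

object JoinTaggingSketch {
  // Stand-ins for the constants this PR introduces; the real values would live
  // in `SubqueryAlias.HIDDEN_ALIAS_PREFIX` and `Dataset.ID_PREFIX`.
  private val HiddenAliasPrefix = "__hidden_"
  private val IdPrefix = "dataset_id"

  // Wrap a single plan in the hidden, id-carrying alias, as in the diff above.
  private def tagWithDatasetId(plan: LogicalPlan, datasetId: Long): LogicalPlan =
    SubqueryAlias(s"$HiddenAliasPrefix${IdPrefix}_$datasetId", plan)

  // Build the join with only its two children tagged, instead of tagging every
  // Dataset's plan unconditionally.
  def joinWithTaggedChildren(
      left: LogicalPlan, leftId: Long,
      right: LogicalPlan, rightId: Long): Join = {
    Join(
      tagWithDatasetId(left, leftId),
      tagWithDatasetId(right, rightId),
      joinType = Inner,
      condition = None,
      hint = JoinHint.NONE)
  }
}
```

Even with this narrower tagging, the caveat stands: an analyzer or optimizer rule could still replace or strip the alias nodes before the column references are resolved.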