Github user ioana-delaney commented on a diff in the pull request:
https://github.com/apache/spark/pull/15363#discussion_r106074890
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CostBasedJoinReorder.scala ---
@@ -51,6 +51,11 @@ case class CostBasedJoinReorder(conf: CatalystConf) extends Rule[LogicalPlan] wi
   def reorder(plan: LogicalPlan, output: AttributeSet): LogicalPlan = {
     val (items, conditions) = extractInnerJoins(plan)
+    // Find the star schema joins. Currently, it returns the star join with the largest
+    // fact table. In the future, it can return more than one star join (e.g. F1-D1-D2
+    // and F2-D3-D4).
+    val starJoinPlans = StarSchemaDetection(conf).findStarJoins(items, conditions.toSeq)
--- End diff ---
@wzhfy I've looked into moving the star reordering to the end of the
optimization phase. Star reordering uses the existing
```ReorderJoin.createOrderedJoin``` method to construct the final plan once a
star join is discovered. That method only handles specific types of plans and
doesn't recognize the plan layout in the last phase of the Optimizer. Writing
a new join reordering method for this purpose would not make much sense,
since star joins are meant to be used by the existing planning strategies.
I suggest keeping the current logic; as a next step, I can look into integrating
the star plans with your new DP planning. Once that's tested, we can probably
remove the star schema call from the ```ReorderJoin``` planning rule. Please let me
know what you think.
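
For readers following the discussion, here is a minimal, self-contained Scala sketch of
the idea described above. It does not use Spark's actual ```StarSchemaDetection```,
```ReorderJoin```, or ```LogicalPlan``` classes; the ```Table``` and ```JoinCond``` types
and the helper names are made up for illustration. It only shows the ordering principle:
detect a star join (largest table as fact plus the dimensions joined directly to it),
place those items at the front, and let a left-deep ordering handle the remaining items.

```scala
// Hypothetical stand-ins for join items and join conditions (not Spark classes).
case class Table(name: String, rowCount: Long)
case class JoinCond(left: String, right: String)

object StarJoinSketch {

  // Simplified stand-in for star join detection: pick the largest table as the
  // fact table and collect the tables that join directly to it as dimensions.
  def findStarJoin(items: Seq[Table], conditions: Seq[JoinCond]): Seq[Table] = {
    val fact = items.maxBy(_.rowCount)
    val dims = items.filter { t =>
      t != fact && conditions.exists(c =>
        (c.left == fact.name && c.right == t.name) ||
        (c.right == fact.name && c.left == t.name))
    }
    // Only treat it as a star join if there are at least two dimension tables.
    if (dims.size >= 2) fact +: dims else Seq.empty
  }

  // Build a left-deep join order: star join items first, remaining items after,
  // mirroring how a detected star plan would seed the ordered join construction.
  def orderItems(items: Seq[Table], conditions: Seq[JoinCond]): Seq[Table] = {
    val star = findStarJoin(items, conditions)
    star ++ items.filterNot(star.contains)
  }

  def main(args: Array[String]): Unit = {
    val items = Seq(
      Table("F1", 1000000L), Table("D1", 100L), Table("D2", 50L), Table("T1", 500L))
    val conditions = Seq(
      JoinCond("F1", "D1"), JoinCond("F1", "D2"), JoinCond("T1", "D1"))
    // Prints: F1 JOIN D1 JOIN D2 JOIN T1
    println(orderItems(items, conditions).map(_.name).mkString(" JOIN "))
  }
}
```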