mgaido91 commented on a change in pull request #23531: [SPARK-24497][SQL] Support recursive SQL query
URL: https://github.com/apache/spark/pull/23531#discussion_r247546282
 
 

 ##########
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
 ##########
 @@ -289,13 +294,13 @@ abstract class TreeNode[BaseType <: TreeNode[BaseType]] extends Product {
   /**
    * Returns a copy of this node where `f` has been applied to all the nodes children.
    */
-  def mapChildren(f: BaseType => BaseType): BaseType = {
-    if (children.nonEmpty) {
+  def mapChildren(f: BaseType => BaseType, forceCopy: Boolean = false): BaseType = {
 
 Review comment:
   ok, now I see the issue here, but I don't think this is the best way to address it. I suspect this is the same reason why you are adding some exclusions in the prepare method (please correct me if I am wrong). I'd rather introduce a `clean` method in `SparkPlan` which by default does nothing, but which in the broadcast case cleans the broadcast and hence forces recomputation at the next step. What do you think? cc @cloud-fan @dongjoon-hyun @gatorsmile for their opinion on this too
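   To make the suggestion concrete, here is a minimal sketch of such a hook (the class and field names below are hypothetical stand-ins for illustration, not existing Spark APIs):

```scala
// Minimal sketch of the suggested `clean` hook; `PlanNode`, `BroadcastStageExec`
// and `cachedBroadcast` are hypothetical stand-ins, not actual Spark classes.
abstract class PlanNode {
  /** Called before a plan fragment is re-executed; does nothing by default. */
  def clean(): Unit = {}
}

class BroadcastStageExec extends PlanNode {
  // Cached broadcast result from the previous execution, if any.
  private var cachedBroadcast: Option[AnyRef] = None

  /** Drop the cached result so the next execution recomputes the broadcast. */
  override def clean(): Unit = {
    cachedBroadcast = None
  }
}
```

   The recursive driver would then call `clean()` on the relevant subtree before each iteration, instead of forcing a copy of the whole tree via `mapChildren(f, forceCopy = true)`.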

