imback82 commented on pull request #28123:
URL: https://github.com/apache/spark/pull/28123#issuecomment-631869859


   > 1. It's not possible to have an accurate cost model to guide this optimization. Do we have a heuristic? like coalescing 10000 buckets to 2 is very likely to cause regression as parallelism is reduced too much.
   
   A heuristic for setting the default value of `spark.sql.bucketing.coalesceBucketsInJoin.maxNumBucketsDiff` is discussed [here](https://github.com/apache/spark/pull/28123/files#r411809034). That config should help prevent the regression caused by reduced parallelism.
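   
   For reference, here is a minimal sketch of the scenario this rule targets (spark-shell, where `spark` is the active `SparkSession`). The config name comes from the current revision of this PR; the value `4` and the bucket counts are illustrative only, not the proposed defaults:
   
   ```scala
   // Sketch only: config name taken from this PR's current revision; the value 4
   // is an illustrative limit, not the proposed default.
   spark.conf.set("spark.sql.bucketing.coalesceBucketsInJoin.maxNumBucketsDiff", "4")
   
   // Two tables bucketed on the join key with different numbers of buckets.
   spark.range(0, 100000).write.bucketBy(8, "id").saveAsTable("t1")
   spark.range(0, 100000).write.bucketBy(4, "id").saveAsTable("t2")
   
   // If the bucket-count difference (8 - 4 = 4) is within the configured limit,
   // the rule can read t1 as 4 coalesced buckets so the sort-merge join can avoid
   // shuffling either side. Coalescing 10000 buckets down to 2 would exceed the
   // limit and be rejected, which is how the config guards against losing too
   // much parallelism.
   spark.table("t1").join(spark.table("t2"), "id").explain()
   ```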
   
   > 2. Are we going to support joins with more than 2 tables? e.g. "100 buckets table" join "50 buckets table" join "10 buckets table".
   
   Good idea. Shall I do a follow-up PR to improve the current rule, which currently doesn't support nested joins?
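   
   For concreteness, the multi-table shape from the question would look roughly like this (table names are hypothetical; today the rule only considers the two sides of a single join node):
   
   ```scala
   // Hypothetical tables bucketed on the same key with 100, 50, and 10 buckets.
   val a = spark.table("t100")  // created with bucketBy(100, "key")
   val b = spark.table("t50")   // created with bucketBy(50, "key")
   val c = spark.table("t10")   // created with bucketBy(10, "key")
   
   // A follow-up would need to pick a common coalesced bucket count across the
   // whole join chain; the current rule does not look into nested joins.
   a.join(b, "key").join(c, "key")
   ```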
   
   

