HeartSaVioR edited a comment on pull request #35574:
URL: https://github.com/apache/spark/pull/35574#issuecomment-1047383695


   Another rough idea in parallel: consider the physical nodes in the same 
stage. Unless the stage contains the source node, the stage must have been 
preceded by a shuffle, driven by the required child distribution of its first 
physical node (the required child distributions of the remaining nodes are 
then satisfied by that output partitioning). If all nodes in the stage require 
`ClusteredDistribution`s but with different grouping keys, it might be a 
sensible heuristic to pick the distribution with the largest number of 
grouping keys to represent the required child distribution of the whole 
stage, since hashing on more keys tends to produce less skew. We are going to 
introduce a shuffle either way, so let's make the unavoidable shuffle 
(heuristically) as effective as possible.
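   For illustration only, here is a minimal sketch of that selection rule. 
The function name and data shape are hypothetical (Spark's planner is Scala 
and its real `Distribution` API differs); this just shows "pick the candidate 
with the most grouping keys":

   ```python
   # Hypothetical sketch, not Spark's actual API: each node in a stage
   # contributes a tuple of grouping keys from its ClusteredDistribution
   # requirement; the heuristic picks the widest key set to drive the
   # single shuffle in front of the stage.

   def pick_stage_distribution(required_keys):
       """required_keys: list of grouping-key tuples, one per node.
       Returns the tuple with the most keys, on the assumption that
       hash-partitioning on more keys spreads rows more evenly."""
       return max(required_keys, key=len)

   # Three nodes requiring clustering by (a), (a, b), and (a, b, c):
   print(pick_stage_distribution([("a",), ("a", "b"), ("a", "b", "c")]))
   ```

   The intuition is that hashing on `(a, b, c)` produces many more distinct 
hash values than hashing on `a` alone, so a hot value of `a` is spread across 
partitions rather than landing in one.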
   
   This only works from the second stage onward: if the required child 
distribution is already satisfied by the source node, there is no opportunity 
to inject a shuffle. The rough idea above could be extended to force a 
shuffle when the number of partitions on the source node is too low; after 
that, the heuristic would take effect.
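   A minimal sketch of that escape hatch, with an assumed threshold and 
hypothetical names (a real implementation would presumably reuse an existing 
config such as `spark.sql.shuffle.partitions`):

   ```python
   # Hypothetical sketch, not Spark's actual API: force a shuffle on the
   # first stage when the source exposes too few partitions, even though
   # its partitioning already satisfies the required child distribution.

   MIN_PARTITIONS = 200  # assumed threshold for illustration

   def should_force_shuffle(source_partitions: int,
                            min_partitions: int = MIN_PARTITIONS) -> bool:
       """True when the source's partition count is low enough that an
       extra shuffle (and thus the grouping-key heuristic) is worthwhile."""
       return source_partitions < min_partitions

   # e.g. a source backed by a 5-partition Kafka topic would be reshuffled:
   print(should_force_shuffle(5))    # True
   print(should_force_shuffle(400))  # False
   ```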


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


