yikf commented on pull request #34916: URL: https://github.com/apache/spark/pull/34916#issuecomment-995586845
> I'm not sure if this is a good idea. Yes, you save one shuffle, but you also need to collect data to the driver side and distribute it later to run table insertion, which I don't think is cheaper than a shuffle, and is probably more fragile (causes driver OOM)

1. In the limit scenario the data volume is small, so driver OOM is unlikely to occur. Also, `CollectLimitExec`'s outputPartitioning is `SinglePartition`, so this is similar to collecting the limit on the driver.
2. If the insert runs on the same node as the collect's shuffle-read node, the distribution cost is eliminated entirely; if it runs on a different node, the data still has to be transferred, but that transfer should still cost less than the shuffle operator.
