viirya commented on pull request #30770: URL: https://github.com/apache/spark/pull/30770#issuecomment-747148582
> The problem is, this is completely relying on luck - this doesn't give any help on the physical plan. Again, the problem exists even without the PR, but then shouldn't we fix the root cause instead of extending the possibility of luck? At least Spark should be able to know there are other executors still keeping the state, and take that into account while planning.

BTW, I think it is entirely relying on luck today. We can make Spark SS reuse previous state stores reliably; I'm working on it.
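To make the idea of planning around existing state stores concrete, here is a minimal, hypothetical Scala sketch (not code from this PR): an RDD that reports preferred locations for each partition based on where that partition's state store is already loaded, so the scheduler can try to place the task there instead of relying on luck. The class name and the `storeLocations` map are illustration-only assumptions; in practice the driver-side state store coordinator is what tracks which executor holds each store.

```scala
import scala.reflect.ClassTag

import org.apache.spark.{Partition, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical sketch: prefer scheduling each partition on the executor that
// already holds that partition's state store.
class StateLocalityAwareRDD[T: ClassTag](
    prev: RDD[T],
    // Hypothetical map: partition index -> preferred locations, e.g. hostnames
    // or "executor_<host>_<executorId>" strings, as reported to the driver.
    storeLocations: Map[Int, Seq[String]])
  extends RDD[T](prev) {

  override protected def getPartitions: Array[Partition] = prev.partitions

  override def compute(split: Partition, context: TaskContext): Iterator[T] =
    prev.iterator(split, context)

  // These are soft preferences: the scheduler still falls back to other
  // executors if the preferred one is busy or lost.
  override protected def getPreferredLocations(split: Partition): Seq[String] =
    storeLocations.getOrElse(split.index, Nil)
}
```

The design point is that locality becomes an input to physical planning rather than an accident of where tasks happen to land.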
