hzyangkai commented on pull request #30103: URL: https://github.com/apache/spark/pull/30103#issuecomment-726258088
> I haven't reviewed yet, but test failure is in executor monitor tests so likely related to this change, please take a look.

Thanks for your reply. I have some thoughts on the test failure. I think the semantics of `spark.dynamicAllocation.shuffleTracking.timeout` are unreasonable. According to the Spark documentation, this option "can be used to control when to time out executors even when they are storing shuffle data", which means a downstream task can fail once the execution time exceeds this configuration. I think the following behavior would be more reasonable: if none of the shuffle data held by an executor is referenced by an active job, the executor can be removed at that point.

Looking forward to your reply.
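For context, here is a minimal sketch (assuming a plain Spark 3.x application; the app name and timeout value are illustrative only) of the configuration under discussion, using the real `spark.dynamicAllocation.shuffleTracking.*` settings:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: enable dynamic allocation with shuffle tracking.
// With a finite shuffleTracking.timeout, an executor holding shuffle data is
// timed out after this period even if that shuffle data may still be needed
// by an active job -- the behavior questioned in this comment.
val spark = SparkSession.builder()
  .appName("shuffle-tracking-example") // illustrative name
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .config("spark.dynamicAllocation.shuffleTracking.timeout", "60s") // illustrative value
  .getOrCreate()
```

Under the behavior proposed above, the executor would instead become eligible for removal only once none of its shuffle data is referenced by an active job, rather than after a fixed timeout.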