hanyucui commented on a change in pull request #31256:
URL: https://github.com/apache/spark/pull/31256#discussion_r560705071



##########
File path: 
core/src/main/scala/org/apache/spark/scheduler/dynalloc/ExecutorMonitor.scala
##########
@@ -546,7 +546,7 @@ private[spark] class ExecutorMonitor(
           } else {
             Long.MaxValue
           }
-          math.min(_cacheTimeout, _shuffleTimeout)
+          math.max(_cacheTimeout, _shuffleTimeout)

Review comment:
       @dongjoon-hyun Yeah, you are right. This is not a solution, only an
illustration. I wish there were a way to prevent this from surprising admins.

   In my case, I enable dynamic allocation on K8s. I set
`spark.dynamicAllocation.shuffleTracking.timeout` to a small value so that
executors can be killed sooner, but I set
`spark.dynamicAllocation.cachedExecutorIdleTimeout` to a larger value, since
some users would like to keep their cached data around for longer. I expected
executors holding cached data to be killed only when the larger timeout is hit,
not the smaller one, but the current behavior is the opposite: the smaller of
the two timeouts wins. I understand different users have different
requirements and the current behavior might make sense for them. No action is
needed on your side -- I will think it through first.
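   To make the effect concrete, here is a minimal sketch of the timeout
combination described above. The names (`currentDeadline`, `proposedDeadline`)
and the simplified signatures are hypothetical, not the actual Spark code; the
only thing taken from the diff is the `math.min` vs `math.max` choice when an
executor holds both cached blocks and shuffle data:

```scala
// Hypothetical sketch (not Spark's actual code): how an executor's
// idle-timeout deadline is combined when it holds both cached blocks
// and tracked shuffle data.
object TimeoutSketch {
  // Current behavior in ExecutorMonitor: the smaller timeout wins,
  // so a small shuffleTracking.timeout kills the executor even when
  // cachedExecutorIdleTimeout is much larger.
  def currentDeadline(cacheTimeout: Long, shuffleTimeout: Long): Long =
    math.min(cacheTimeout, shuffleTimeout)

  // Behavior the diff illustrates: the larger timeout wins, so cached
  // data keeps the executor alive for the full cache timeout.
  def proposedDeadline(cacheTimeout: Long, shuffleTimeout: Long): Long =
    math.max(cacheTimeout, shuffleTimeout)

  def main(args: Array[String]): Unit = {
    val cacheTimeout = 3600L  // e.g. cachedExecutorIdleTimeout = 1h
    val shuffleTimeout = 60L  // e.g. shuffleTracking.timeout = 60s
    println(currentDeadline(cacheTimeout, shuffleTimeout))   // 60
    println(proposedDeadline(cacheTimeout, shuffleTimeout))  // 3600
  }
}
```

   With these example settings the current code makes the executor eligible
for removal after 60 seconds, which is the surprise described above.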




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
