srowen commented on issue #23697: [SPARK-26758][core] Idle Executors are not getting killed after spark.dynamicAllocation.executorIdleTimeout value
URL: https://github.com/apache/spark/pull/23697#issuecomment-458978631
 
 
   I get the idea, but I'll note that the current logic seems to be intentional, according to the comments. The comment would have to be updated too.
   
   So the scenario here is that no stages have been submitted, but the first scheduled check at 60s won't kill the idle executors; the second one at 120s will, right? Yeah, that's a minor thing, but worth trying to fix.
   
   This change has some other effects, like killing idle executors before deciding whether more are needed. Maybe that causes it to kill an executor and then recreate one to match a minimum, which isn't great; but, at the same time, the current code checks whether it needs to add executors and then kills expired ones right after, which also seems odd.
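
   To make the ordering concrete, here's a rough sketch of the two orderings being discussed (names loosely follow `ExecutorAllocationManager`, but this is simplified, hypothetical code, not the actual source):

```scala
// Simplified, hypothetical sketch of the two schedule() orderings.
// Names loosely follow ExecutorAllocationManager; this is NOT the real Spark code.
object ScheduleOrderingSketch {
  private val minExecutors = 2
  private var targetExecutors = 2
  private var idleExecutors = Set("exec-1", "exec-2")

  // Stand-in for updating the executor target: never let it drop below the minimum.
  private def syncTarget(): Unit = {
    targetExecutors = math.max(minExecutors, idleExecutors.size)
  }

  // Stand-in for killing executors whose idle timeout has expired.
  private def removeExpired(): Unit = {
    idleExecutors = Set.empty[String]
  }

  // Current ordering: decide whether more executors are needed, then kill expired ones.
  def scheduleCurrent(): Unit = { syncTarget(); removeExpired() }

  // Proposed ordering: kill expired executors first, then reconcile the target,
  // which may immediately re-request executors to satisfy the minimum.
  def scheduleProposed(): Unit = { removeExpired(); syncTarget() }
}
```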
   
   Another possible fix is simply to set `initializing` to `false` in the _second_ call to `schedule()` (the first one that happens after the initial call at time 0), but maybe that's also kind of complex to reason about.
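
   What I mean is something roughly like the following (again a simplified, hypothetical sketch, not the real `ExecutorAllocationManager` code):

```scala
// Hypothetical sketch of the alternative fix: clear the `initializing` flag on
// the second call to schedule(), i.e. the first scheduled run after time 0.
// This is NOT the actual ExecutorAllocationManager implementation.
class InitializingFlagSketch {
  private var initializing = true
  private var scheduleCalls = 0

  def schedule(): Unit = {
    scheduleCalls += 1
    // After the initial call at time 0, stop treating the manager as
    // "initializing" so idle executors become eligible for removal.
    if (scheduleCalls > 1) {
      initializing = false
    }
    // ... the rest of the scheduling logic would run here ...
  }
}
```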
   
   (By the way, you could also delete `totalRunningTasks` while you're here, as it's not used; not important though.)
