holdenk commented on a change in pull request #29211:
URL: https://github.com/apache/spark/pull/29211#discussion_r460425746
##########
File path:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
##########
@@ -277,12 +282,52 @@ private[spark] class CoarseGrainedExecutorBackend(
if (executor != null) {
executor.decommission()
}
- logInfo("Done decommissioning self.")
+    // Shutdown the executor once all tasks are gone & any configured migrations completed.
+    // Detecting migrations completion doesn't need to be perfect and we want to minimize the
+    // overhead for executors that are not in decommissioning state as overall that will be
+    // more of the executors. For example, this will not catch a block which is already in
+    // the process of being put from a remote executor before migration starts. This trade-off
+    // is viewed as acceptable to minimize introduction of any new locking structures in critical
+    // code paths.
+
+ val shutdownThread = new Thread("wait-for-blocks-to-migrate") {
+ var lastTaskRunningTime = System.nanoTime()
+ val sleep_time = 1000 // 1s
+
+ while (true) {
Review comment:
Yeah, good catch. As written, this will block inside the handler for the
decommissioning message, and we want that message to return promptly. Let me
think about how we can make sure we're not blocking in the decommission
message handler.
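
One way to avoid blocking the message handler is to put the polling loop in
the thread's `run()` method (in the quoted diff, the `while (true)` loop sits
in the anonymous subclass's constructor body, so it executes when the thread
object is created, before `start()` is ever called) and run it as a daemon
thread. The sketch below is a hypothetical, simplified illustration of that
fix; `migrationsComplete` and the `decommissioned` flag stand in for the real
executor/block-manager checks and are not part of the actual Spark code.

```scala
// Hypothetical sketch: run the wait-for-migration loop on a separate daemon
// thread so that creating/starting it returns immediately and the
// decommission message handler is never blocked.
object DecommissionSketch {
  // Stand-in for "all tasks finished and block migrations completed".
  @volatile var decommissioned: Boolean = false

  // In the real backend this would query the executor and block manager
  // (hypothetical placeholder here).
  def migrationsComplete(): Boolean = decommissioned

  def startShutdownWatcher(sleepMillis: Long = 1000L): Thread = {
    val watcher = new Thread("wait-for-blocks-to-migrate") {
      // The loop lives in run(), not the constructor body, so building the
      // Thread object does not execute it; it only runs after start().
      override def run(): Unit = {
        while (!migrationsComplete()) {
          Thread.sleep(sleepMillis)
        }
        // Real code would trigger executor shutdown here.
      }
    }
    watcher.setDaemon(true) // don't keep the JVM alive on its own
    watcher.start()
    watcher
  }
}
```

With this shape, the decommission handler just calls
`startShutdownWatcher()` and returns; the watcher thread exits on its own
once the (placeholder) completion check turns true.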