attilapiros commented on a change in pull request #29211:
URL: https://github.com/apache/spark/pull/29211#discussion_r463783281
##########
File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala
##########
@@ -327,4 +354,28 @@ private[storage] class BlockManagerDecommissioner(
}
logInfo("Stopped storage decommissioner")
}
+
+ /*
+   * Returns the last migration time and a boolean for if all blocks have been migrated.
+   * If there are any tasks running since that time the boolean may be incorrect.
+ */
+ private[storage] def lastMigrationInfo(): (Long, Boolean) = {
+ if (stopped || (stoppedRDD && stoppedShuffle)) {
+ (System.nanoTime(), true)
+ } else {
+ // Chose the min of the running times.
+ val lastMigrationTime = if (
+ conf.get(config.STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED) &&
+ conf.get(config.STORAGE_DECOMMISSION_RDD_BLOCKS_ENABLED)) {
+ Math.min(lastRDDMigrationTime, lastShuffleMigrationTime)
Review comment:
Just a little misunderstanding remained:
> So currently the block manager doesn't know if there are any running tasks, and I'm not sure it's better to expose that into the block manager.
I do not think that is needed: I would implement the simplified solution within the shutdown thread.
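Just to illustrate what I mean (a rough sketch only; `Decom` and `runningTaskCount` are made-up stand-ins, not the real Spark APIs):

```scala
// Minimal sketch, assuming the shutdown thread already has access to the
// decommissioner and to a running-task counter; the trait is a stand-in.
object ShutdownSketch {
  trait Decom { def lastMigrationInfo(): (Long, Boolean) }

  def awaitSafeToExit(decom: Decom, runningTaskCount: () => Int): Unit = {
    var safe = false
    while (!safe) {
      val (_, allBlocksMigrated) = decom.lastMigrationInfo()
      // The block manager never needs to know about tasks: the shutdown
      // thread itself checks that nothing is running before trusting the flag.
      safe = allBlocksMigrated && runningTaskCount() == 0
      if (!safe) Thread.sleep(1000L)
    }
  }
}
```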
I would also advise improving robustness through more testable and more easily provable code (which is also easier to reason about), to make it more supportable/extendable.
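For example, pulling the exit decision into a small pure function (the `ExitDecision` helper below is hypothetical) would let us unit-test it with plain values, without threads or sleeps:

```scala
// Hypothetical helper: the exit decision as a pure function of its inputs.
object ExitDecision {
  def safeToExit(
      allBlocksMigrated: Boolean,
      lastMigrationTime: Long,
      lastTaskStartTime: Option[Long]): Boolean = {
    // Trust the "all migrated" flag only if no task started at or after the
    // time the flag refers to.
    allBlocksMigrated && lastTaskStartTime.forall(_ < lastMigrationTime)
  }
}

// A unit test can then assert directly on values, e.g.:
//   safeToExit(allBlocksMigrated = true, lastMigrationTime = 100L, Some(150L)) == false
```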
I am sorry, but regarding this code part your reviewer(s) (at least I myself) failed to follow it at first :( and that is an indication we should do something. Yes, we can try to improve the documentation first.
@agrawaldevesh: what is your opinion/idea?