holdenk commented on a change in pull request #29211:
URL: https://github.com/apache/spark/pull/29211#discussion_r463965880
##########
File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala
##########
@@ -327,4 +354,28 @@ private[storage] class BlockManagerDecommissioner(
    }
    logInfo("Stopped storage decommissioner")
  }
+
+  /*
+   * Returns the last migration time and a boolean indicating whether all
+   * blocks have been migrated. If any tasks have been running since that
+   * time, the boolean may be incorrect.
+   */
+  private[storage] def lastMigrationInfo(): (Long, Boolean) = {
+    if (stopped || (stoppedRDD && stoppedShuffle)) {
+      (System.nanoTime(), true)
+    } else {
+      // Choose the min of the running times.
+      val lastMigrationTime = if (
+          conf.get(config.STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED) &&
+          conf.get(config.STORAGE_DECOMMISSION_RDD_BLOCKS_ENABLED)) {
+        Math.min(lastRDDMigrationTime, lastShuffleMigrationTime)
Review comment:
So the approach in that PR re-opens the race condition we're preventing
here, and I would really rather not do that.
That said, I'd also like us to make progress here. If y'all are not
comfortable with any of the ways to avoid the race, we can temporarily
accept it, file a JIRA, and revisit it later as a stand-alone item.
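To make the timestamp guard concrete, here is a minimal, self-contained
sketch of the pattern. Everything except the `lastMigrationInfo()` contract
(e.g. `Decommissioner`, `lastTaskFinishTime`, `safeToExit`) is illustrative
and not the PR's actual API:

```scala
// Illustrative sketch only: `Decommissioner` stands in for
// BlockManagerDecommissioner, keeping the same (Long, Boolean) contract.
class Decommissioner {
  @volatile var lastRDDMigrationTime: Long = 0L
  @volatile var lastShuffleMigrationTime: Long = 0L

  def lastMigrationInfo(): (Long, Boolean) = {
    // Report the *older* of the two times so we never claim progress past
    // the slower of the RDD / shuffle migration loops.
    val t = math.min(lastRDDMigrationTime, lastShuffleMigrationTime)
    (t, true) // the boolean may be stale while tasks are still running
  }
}

object RaceGuardSketch extends App {
  val d = new Decommissioner
  d.lastRDDMigrationTime = System.nanoTime()
  d.lastShuffleMigrationTime = System.nanoTime()

  // Hypothetical caller-side guard: trust `allMigrated` only if migration
  // finished *after* the last task did, so a task racing with this check
  // cannot leave unmigrated blocks behind.
  val lastTaskFinishTime: Long = 0L // illustrative stand-in
  val (lastMigrationTime, allMigrated) = d.lastMigrationInfo()
  val safeToExit = allMigrated && lastMigrationTime > lastTaskFinishTime
  println(s"safeToExit = $safeToExit")
}
```

The point of returning the timestamp rather than just the boolean is exactly
this comparison: it lets the caller decide whether the answer is fresh
enough, which is the property the other PR's approach gives up.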