wankunde commented on a change in pull request #34629:
URL: https://github.com/apache/spark/pull/34629#discussion_r767441206
##########
File path: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
##########
@@ -620,11 +620,13 @@ private[spark] class BlockManager(
* Note that this method must be called without any BlockInfo locks held.
*/
def reregister(): Unit = {
- // TODO: We might need to rate limit re-registering.
- logInfo(s"BlockManager $blockManagerId re-registering with master")
- master.registerBlockManager(blockManagerId,
diskBlockManager.localDirsString, maxOnHeapMemory,
- maxOffHeapMemory, storageEndpoint)
- reportAllBlocks()
+ if (!SparkEnv.get.isStopped) {
Review comment:
@Ngone51 Thanks for your review.
Yes, this PR cannot fix the issue above, but I still think the
`!SparkEnv.get.isStopped` check is helpful: I have seen several
executors re-register while they are being shut down by the driver.
I fully agree that the issue should be fixed in `HeartbeatReceiver`,
and this PR can be closed.
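The guard under discussion can be sketched in isolation as below. This is a minimal, self-contained illustration of the pattern (skip re-registration once the environment is stopped); `Env`, `Master`, and `Manager` are hypothetical stand-ins, not the real Spark `SparkEnv`/`BlockManager` classes.

```scala
// Sketch of the "skip re-registration when stopped" guard, with
// hypothetical stand-in classes instead of Spark internals.
object ReregisterSketch {
  class Env { @volatile var isStopped: Boolean = false }

  class Master { var registrations = 0 }

  class Manager(env: Env, master: Master) {
    def reregister(): Unit =
      // The added check: a shutting-down component does not re-register.
      if (!env.isStopped) master.registrations += 1
  }

  /** Runs the scenario; returns the final registration count. */
  def run(): Int = {
    val env = new Env
    val master = new Master
    val mgr = new Manager(env, master)
    mgr.reregister()      // env running: registration goes through
    env.isStopped = true
    mgr.reregister()      // env stopped: the guard skips it
    master.registrations
  }

  def main(args: Array[String]): Unit = println(run())
}
```

Without the guard, the second `reregister()` would still reach the master even though the component is shutting down, which mirrors the executor behavior described above.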
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]