Github user cenyuhai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11546#discussion_r55668498
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -227,6 +228,17 @@ private[spark] class Executor(
logError(errMsg)
}
}
+
+ if (releasedLocks.nonEmpty) {
+ val errMsg =
+ s"${releasedLocks.size} block locks were not released by TID
= $taskId:\n" +
--- End diff ---
In my production environment, when the storage memory is full, there is a
high probability of deadlock. This is a temporary patch, because JoshRosen
added a read/write lock for blocks in
https://github.com/apache/spark/pull/10705 for Spark 2.0.
Two threads removing the same block can deadlock, as sketched below: in
function 'dropFromMemory', the BlockManager first locks the MemoryManager and
then waits to lock the BlockInfo, while an executor task locks the BlockInfo
and then waits to lock the MemoryManager when calling 'memstore.remove(block)'
in function 'removeBlock' or function 'removeOldBlocks'.
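Here is a minimal sketch of that lock-ordering inversion, using stand-in
monitors 'memoryManagerLock' and 'blockInfoLock' instead of the real
MemoryManager and BlockInfo objects (names are illustrative, not from the
patch):

```scala
// Stand-ins for the two monitors involved; not the real Spark objects.
object DeadlockSketch {
  private val memoryManagerLock = new Object
  private val blockInfoLock = new Object

  // Thread 1: BlockManager.dropFromMemory
  // holds the MemoryManager lock, then waits for the BlockInfo lock.
  def dropFromMemory(): Unit = memoryManagerLock.synchronized {
    blockInfoLock.synchronized {
      // evict the block from memory
    }
  }

  // Thread 2: executor task in removeBlock / removeOldBlocks
  // holds the BlockInfo lock, then waits for the MemoryManager lock
  // inside memstore.remove(block).
  def removeBlock(): Unit = blockInfoLock.synchronized {
    memoryManagerLock.synchronized {
      // memstore.remove(block)
    }
  }

  def main(args: Array[String]): Unit = {
    new Thread(new Runnable { def run(): Unit = dropFromMemory() }).start()
    new Thread(new Runnable { def run(): Unit = removeBlock() }).start()
    // With unlucky timing each thread ends up holding one lock and waiting
    // forever for the other: a classic lock-ordering deadlock.
  }
}
```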
So this patch just uses a ConcurrentHashMap to record the locks held by each
task; in case of failure, all remaining locks are released after the task
completes (see the sketch below).
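A rough sketch of that bookkeeping, with hypothetical names
('TaskBlockLockTracker', 'recordLock', 'releaseAllForTask') that are
illustrative rather than the actual code in the patch:

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.collection.JavaConverters._

// Hypothetical tracker: remembers which block locks each task (by TID)
// still holds so the executor can release the leftovers when the task ends,
// even if it failed.
class TaskBlockLockTracker {
  // taskId -> set of block IDs whose locks the task currently holds
  private val locksByTask = new ConcurrentHashMap[Long, java.util.Set[String]]()

  // Record a lock acquisition by the given task.
  def recordLock(taskId: Long, blockId: String): Unit = {
    locksByTask.putIfAbsent(taskId, ConcurrentHashMap.newKeySet[String]())
    locksByTask.get(taskId).add(blockId)
  }

  // Record a normal lock release by the given task.
  def recordUnlock(taskId: Long, blockId: String): Unit = {
    val held = locksByTask.get(taskId)
    if (held != null) held.remove(blockId)
  }

  // Called after the task completes (success or failure): drop the entry
  // and return whatever was still held, so the caller can release those
  // locks and log a warning like the releasedLocks check in the diff above.
  def releaseAllForTask(taskId: Long): Seq[String] = {
    val held = locksByTask.remove(taskId)
    if (held == null) Seq.empty else held.asScala.toSeq
  }
}
```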