Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/791#discussion_r21156020
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -837,11 +837,11 @@ private[spark] class BlockManager(
* Drop a block from memory, possibly putting it on disk if applicable. Called when the memory
* store reaches its limit and needs to free up space.
*
- * Return the block status if the given block has been updated, else None.
+ * Return the block status and dropped memory size if the given block has been updated, else None.
*/
def dropFromMemory(
blockId: BlockId,
- data: Either[Array[Any], ByteBuffer]): Option[BlockStatus] = {
+ data: Either[Array[Any], ByteBuffer]): Option[(BlockStatus, Long)] = {
logInfo(s"Dropping block $blockId from memory")
val info = blockInfo.get(blockId).orNull
--- End diff --
Hi, as of now this method is called from two places: 1) doPut, and 2) doGetLocal
when putting data read from disk back into memory. Both call sites already hold
the lock via blockInfo.synchronized, so is it still necessary to take
info.synchronized here again?
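To make the return-type change in the diff concrete, here is a minimal, self-contained sketch. This is not the actual BlockManager code: `BlockStatus`, the `String` block id, and the size estimate are simplified stand-ins, shown only to illustrate why the result becomes `Option[(BlockStatus, Long)]` (the caller gets the freed memory size alongside the new status, without re-querying the store).

```scala
import java.nio.ByteBuffer

// Simplified stand-in for Spark's BlockStatus (memory and disk footprint).
case class BlockStatus(memSize: Long, diskSize: Long)

// Hypothetical, simplified dropFromMemory: returns the updated status
// together with the number of bytes freed from memory, or None if the
// block was not updated.
def dropFromMemory(
    blockId: String,
    data: Either[Array[Any], ByteBuffer]): Option[(BlockStatus, Long)] = {
  // Estimate how much memory the block occupied (rough, for illustration).
  val droppedMemorySize = data match {
    case Left(values) => values.length.toLong * 8L
    case Right(bytes) => bytes.limit().toLong
  }
  // Suppose the block was spilled to disk: memory goes to 0, disk grows.
  val status = BlockStatus(memSize = 0L, diskSize = droppedMemorySize)
  Some((status, droppedMemorySize))
}
```

With the old signature (`Option[BlockStatus]`) the freed size had to be recomputed by the caller; returning the pair lets the memory store account for it directly.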
---