Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/2134#discussion_r16832952
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -291,54 +376,71 @@ private[spark] class MemoryStore(blockManager: BlockManager, maxMemory: Long)
    * an Array if deserialized is true or a ByteBuffer otherwise. Its (possibly estimated) size
    * must also be passed by the caller.
    *
-   * Synchronize on `accountingLock` to ensure that all the put requests and their associated
-   * block dropping are done by only one thread at a time. Otherwise while one thread is
-   * dropping blocks to free memory for one block, another thread may use up the freed space
-   * for another block.
-   *
+   * In order to drop old blocks in parallel, we will first mark the blocks that can be dropped
+   * when there is not enough memory.
+   *
    * Return whether put was successful, along with the blocks dropped in the process.
    */
-  private def tryToPut(
-      blockId: BlockId,
-      value: Any,
-      size: Long,
-      deserialized: Boolean): ResultWithDroppedBlocks = {
-    /* TODO: It's possible to optimize the locking by locking entries only when selecting blocks
-     * to be dropped. Once the to-be-dropped blocks have been selected, and the lock on entries
-     * has been released, it must be ensured that those to-be-dropped blocks are not double
-     * counted for freeing up more space for another block that needs to be put. Only then the
-     * actual dropping of blocks (and writing to disk if necessary) can proceed in parallel. */
+  private def tryToPut(
+      blockId: BlockId,
--- End diff --
same here.
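
For context on the hunk above: the removed comment describes the old scheme, where `accountingLock` serializes every put and its associated block dropping, while the added comment and the TODO describe the new plan of marking victim blocks while the lock is held and then dropping them in parallel outside it. Below is a minimal, self-contained Scala sketch of that mark-then-drop pattern; every name in it (MarkThenDropSketch, Entry, reserve, drop, droppable) is a hypothetical illustration, not the PR's actual code.

import scala.collection.mutable

// Hypothetical sketch of the mark-then-drop idea (not the PR's actual code):
// victim blocks are selected and marked while a lock is held, but the slow
// dropping itself happens outside the lock, so several puts can evict in
// parallel without double-counting the same victim.
class MarkThenDropSketch(maxMemory: Long) {
  private case class Entry(size: Long, var droppable: Boolean = false)

  // Insertion-ordered map stands in for MemoryStore's LRU `entries` map.
  private val entries = mutable.LinkedHashMap.empty[String, Entry]
  private var currentMemory = 0L

  // Reserve `size` bytes for `blockId`, marking victims to evict if needed.
  // Returns the victims this caller must drop, or None if it cannot fit.
  def reserve(blockId: String, size: Long): Option[Seq[String]] = entries.synchronized {
    if (size > maxMemory) return None
    val victims = mutable.ArrayBuffer.empty[String]
    var freeable = maxMemory - currentMemory
    val it = entries.iterator
    while (freeable < size && it.hasNext) {
      val (id, e) = it.next()
      if (!e.droppable) {     // skip victims already claimed by another put
        e.droppable = true    // mark under the lock, so nobody counts it twice
        victims += id
        freeable += e.size
      }
    }
    if (freeable >= size) {
      currentMemory += size   // account for the new block up front; the caller
      Some(victims.toSeq)     // inserts its entry once the value is stored
    } else {
      victims.foreach(id => entries(id).droppable = false)  // roll marks back
      None
    }
  }

  // Drop one marked victim. Any spill to disk would happen here, outside the
  // lock; only the final bookkeeping re-acquires it.
  def drop(blockId: String): Unit = {
    // ... write the block to disk here if its storage level allows ...
    entries.synchronized {
      entries.remove(blockId).foreach(e => currentMemory -= e.size)
    }
  }
}

A caller that gets Some(victims) back from reserve would then call drop on each id without holding any lock; the droppable flag is what prevents two concurrent puts from counting the same victim's bytes toward both of their reservations.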