GitHub user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54646125
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
@@ -648,8 +647,38 @@ private[spark] class BlockManager(
   }
 
   /**
-   * @return true if the block was stored or false if the block was already stored or an
-   *         error occurred.
+   * Retrieve the given block if it exists, otherwise call the provided `makeIterator` method
+   * to compute the block, persist it, and return its values.
+   *
+   * @return either a BlockResult if the block was successfully cached, or an iterator if the block
+   *         could not be cached.
+   */
+  def getOrElseUpdate(
+      blockId: BlockId,
+      level: StorageLevel,
+      makeIterator: () => Iterator[Any]): Either[BlockResult, Iterator[Any]] = {
+    // Initially we hold no locks on this block.
+    doPut(blockId, IteratorValues(makeIterator), level, downgradeToReadLock = true) match {
--- End diff --
I think we can simplify this by moving the lock-acquisition logic outside of `doPut`. If we fail to acquire a write lock for the new block, we used to return `None` from `doPut` anyway, so it would make sense to call `doPut` only once we've acquired the write lock. Then `doPut` doesn't need the complicated return type.

(If you don't want to change all the *other* call sites of `doPut`, we can just add an overloaded `doPut` method that takes a `blockInfo` and assumes the write lock is already held.)
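For concreteness, here is a self-contained toy sketch of that control flow, with a coarse lock and simplified types standing in for `BlockInfoManager` and the real `doPut` (every name and signature below is an illustrative stand-in, not Spark's actual API):

```scala
import scala.collection.mutable

class ToyBlockManager {
  private case class BlockInfo(var values: Option[Seq[Any]] = None)
  private val infos = mutable.Map.empty[String, BlockInfo]
  private val lock = new Object

  // Registers `info` as the writer of a new block. Returns false if the
  // block already exists, in which case the caller should read it instead.
  private def lockNewBlockForWriting(blockId: String, info: BlockInfo): Boolean =
    lock.synchronized {
      if (infos.contains(blockId)) false
      else { infos(blockId) = info; true }
    }

  // Overload that assumes the caller already holds the write lock on
  // `info`, so it needs no Option/Either result to report lock failures.
  private def doPut(info: BlockInfo, makeIterator: () => Iterator[Any]): Seq[Any] = {
    val values = makeIterator().toSeq
    lock.synchronized { info.values = Some(values) }
    values
  }

  def getOrElseUpdate(blockId: String, makeIterator: () => Iterator[Any]): Seq[Any] = {
    val newInfo = BlockInfo()
    if (lockNewBlockForWriting(blockId, newInfo)) {
      // We hold the write lock, so call the simple doPut overload.
      doPut(newInfo, makeIterator)
    } else {
      // Another task stored (or is storing) the block; a real implementation
      // would block on a read lock here, the toy just reads what is there.
      lock.synchronized { infos(blockId).values.getOrElse(Seq.empty) }
    }
  }
}
```

With this shape, only `getOrElseUpdate` has to care about the "block already exists" case, and `doPut` keeps a plain return type.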