GitHub user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/860#discussion_r13170390
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
@@ -329,8 +329,26 @@ private[spark] class BlockManager(
    * never deletes (recent) items.
    */
   def getLocalFromDisk(blockId: BlockId, serializer: Serializer): Option[Iterator[Any]] = {
-    diskStore.getValues(blockId, serializer).orElse(
-      sys.error("Block " + blockId + " not found on disk, though it should be"))
+
+    // A reducer may need to read many local shuffle blocks and wraps them into
+    // iterators at the beginning. The wrapping costs some memory (compression
+    // instance initialization, etc.). The reducer reads shuffle blocks one by one,
+    // so we can do the wrapping lazily to save memory.
+    class LazyProxyIterator(f: => Iterator[Any]) extends Iterator[Any] {
+
+      lazy val proxy = f
+
+      override def hasNext: Boolean = proxy.hasNext
+
+      override def next(): Any = proxy.next()
+    }
+
+    if (diskStore.contains(blockId)) {
+      Some(new LazyProxyIterator(diskStore.getValues(blockId, serializer).get))
--- End diff --
Doesn't this introduce a race condition, because you're calling `contains`
before `getValues`? If the block is removed in that window, `getValues` returns
None and the `.get` throws. It would be better to change
BlockManager.dataDeserialize to use the lazy iterator.
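
For illustration, here is a minimal, self-contained sketch of that suggestion
(the object name, signature, and serialization format are invented for the
example; this is not Spark's actual `dataDeserialize` API). The idea is that
the deserializing iterator itself defers all stream setup until first use, so
`getValues` stays the single, race-free lookup and `getLocalFromDisk` can keep
its original `orElse` form with no separate `contains` check:

    import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

    object LazyDeserializeSketch {

      // Hypothetical stand-in for dataDeserialize: the costly setup (in the
      // real code, compression codec and serializer instance creation) is
      // deferred until the iterator is first touched, using the same
      // lazy-proxy idea as the PR, but inside the deserialization path itself.
      def dataDeserialize(bytes: Array[Byte]): Iterator[Any] = new Iterator[Any] {
        private lazy val elems: Iterator[Any] = {  // built on first access only
          val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
          val n = in.readInt()
          (0 until n).iterator.map(_ => in.readObject())
        }
        override def hasNext: Boolean = elems.hasNext
        override def next(): Any = elems.next()
      }

      def main(args: Array[String]): Unit = {
        // Write two objects with a count header, then read them back lazily.
        val buf = new ByteArrayOutputStream()
        val out = new ObjectOutputStream(buf)
        out.writeInt(2); out.writeObject("a"); out.writeObject("b"); out.close()

        val it = dataDeserialize(buf.toByteArray)  // cheap: no stream opened yet
        println(it.toList)                         // stream built here; prints List(a, b)
      }
    }

The design point is to separate the two costs: the lookup stays cheap, atomic,
and done exactly once, while only the expensive wrapping is deferred, instead
of splitting the lookup into a check-then-get pair.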