otterc commented on a change in pull request #32007:
URL: https://github.com/apache/spark/pull/32007#discussion_r611987404
##########
File path: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
##########
@@ -728,6 +728,24 @@ private[spark] class BlockManager(
}
}
+  /**
+   * Get the local merged shuffle block data for the given block ID as multiple chunks.
+   * A merged shuffle file is divided into multiple chunks according to the index file.
+   * Instead of reading the entire file as a single block, we split it into smaller chunks
+   * which will be memory efficient when performing certain operations.
+   */
+  override def getMergedBlockData(blockId: ShuffleBlockId): Seq[ManagedBuffer] = {
+    shuffleManager.shuffleBlockResolver.getMergedBlockData(blockId)
+  }
+
+  /**
+   * Get the local merged shuffle block metadata for the given block ID.
+   */
+  def getMergedBlockMeta(blockId: ShuffleBlockId): MergedBlockMeta = {
+    shuffleManager.shuffleBlockResolver.getMergedBlockMeta(blockId)
+  }
+
+
Review comment:
@zhouyejoe This is missing a change where `hostLocalDirManager` needs to be initialized at line 505 when push-based shuffle is enabled, like this:
```
hostLocalDirManager = {
  // PART OF SPARK-33350
  if ((conf.get(config.SHUFFLE_HOST_LOCAL_DISK_READING_ENABLED) &&
      !conf.get(config.SHUFFLE_USE_OLD_FETCH_PROTOCOL))
      || Utils.isPushBasedShuffleEnabled(conf)) {
    Some(new HostLocalDirManager(
      futureExecutionContext,
      conf.get(config.STORAGE_LOCAL_DISK_BY_EXECUTORS_CACHE_SIZE),
      blockStoreClient))
  } else {
    None
  }
}
```
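
For readers less familiar with the merged-block layout that `getMergedBlockData` exposes, here is a rough standalone sketch of the idea the new doc comment describes: an index file of offsets delimits the chunks of a merged data file, and each chunk is served as its own buffer. The `MergedBlockChunks` object, its helper names, and the plain-long-offsets index format are illustrative assumptions here, not the actual shuffle block resolver code.
```
import java.io.{DataInputStream, FileInputStream, RandomAccessFile}
import java.nio.ByteBuffer
import java.nio.channels.FileChannel

object MergedBlockChunks {
  // Read the index file: assumed to be a sequence of long offsets into the
  // merged data file. Adjacent offsets delimit one chunk.
  def readChunkOffsets(indexPath: String): List[Long] = {
    val in = new DataInputStream(new FileInputStream(indexPath))
    try {
      Iterator.continually(scala.util.Try(in.readLong()).toOption)
        .takeWhile(_.isDefined)
        .flatten
        .toList
    } finally {
      in.close()
    }
  }

  // Map each [offset(i), offset(i + 1)) range of the merged data file as its own
  // read-only buffer, so the merged block is consumed chunk by chunk instead of
  // as one large allocation.
  def sliceMergedFile(dataPath: String, offsets: List[Long]): List[ByteBuffer] = {
    val file = new RandomAccessFile(dataPath, "r")
    try {
      offsets.sliding(2).collect { case List(start, end) =>
        file.getChannel.map(FileChannel.MapMode.READ_ONLY, start, end - start)
      }.toList
    } finally {
      file.close()
    }
  }
}
```
Serving the file per chunk like this keeps each buffer bounded by the chunk size, which is what the doc comment means by the split being memory efficient for certain operations.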