vanzin commented on a change in pull request #23688: [SPARK-25035][Core] Avoiding memory mapping at disk-stored blocks replication
URL: https://github.com/apache/spark/pull/23688#discussion_r254902642
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
 ##########
 @@ -221,6 +221,175 @@ private[spark] class BlockManager(
     new BlockManager.RemoteBlockDownloadFileManager(this)
  private val maxRemoteBlockToMem = conf.get(config.MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM)
 
+  private abstract class BlockStoreUpdater[T](
+      blockSize: Long,
+      blockId: BlockId,
+      level: StorageLevel,
+      classTag: ClassTag[T],
+      tellMaster: Boolean,
+      keepReadLock: Boolean) {
+
+    protected def byteBuffer: ChunkedByteBuffer
 
 Review comment:
   I see what you're doing, but it looks a little weird. The semantics here are that this method should always return the *same* buffer, like a `val` (at least to avoid creating unnecessary buffers).
   
   So it needs at least a comment explaining that. But it also makes me think the abstraction here is a little funny and could be solved some other way... I can't really suggest anything off the top of my head, though.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
