Github user mallman commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16499#discussion_r103509038

    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -1018,7 +1025,9 @@ private[spark] class BlockManager(
           try {
             replicate(blockId, bytesToReplicate, level, remoteClassTag)
           } finally {
    -        bytesToReplicate.dispose()
    +        if (!level.useOffHeap) {
    --- End diff --

Hi guys. Sorry for the delay on the update.

I started down the path I proposed, but it resulted in too many awkward changes to method signatures downstream. I don't think it's a viable way forward.

As another option, we could dispose the buffer if and only if it has a non-null `fd` field. Since that field is private, we would have to read it by reflection (a rough sketch is at the end of this comment). I'd also include a unit test validating that the field exists as expected, to guard against internal changes in future versions of Java.

On a broader level, I wonder whether callers of the `ChunkedByteBuffer.dispose` method understand that it will also dispose of non-memory-mapped direct buffers. The documentation of that method suggests it is only supposed to dispose of memory-mapped files in the strict sense (those actually memory-mapped against a file descriptor by the OS). If other methods are inadvertently calling it on non-memory-mapped direct buffers, that suggests to me we should push the fix into that method (or, really, into `StorageUtils.dispose()`). What do you think of that?
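To make the `fd` idea concrete, here is a rough sketch of the reflection check I have in mind. The helper name is hypothetical, and the placement of the private `fd` field on `java.nio.MappedByteBuffer` (null for `ByteBuffer.allocateDirect`, non-null for buffers returned by `FileChannel.map`) is based on my reading of the JDK 8 sources; it is exactly the kind of internal detail the proposed unit test would need to pin down.

```scala
import java.nio.{ByteBuffer, MappedByteBuffer}

object MappedBufferCheck {
  // Hypothetical helper: returns true only for buffers that appear to be
  // genuinely memory-mapped, i.e. carry a non-null FileDescriptor in the
  // private `fd` field. Direct buffers from ByteBuffer.allocateDirect are
  // expected to leave `fd` null and so would not be disposed.
  def hasFileDescriptor(buffer: ByteBuffer): Boolean = buffer match {
    case mapped: MappedByteBuffer =>
      try {
        val fdField = classOf[MappedByteBuffer].getDeclaredField("fd")
        fdField.setAccessible(true)
        fdField.get(mapped) != null
      } catch {
        // If the field is renamed or removed in a future JDK, err on the
        // side of not disposing; the unit test mentioned above should
        // flag the change before we get here.
        case _: NoSuchFieldException => false
      }
    case _ => false
  }
}
```

Applied to each chunk of the `ChunkedByteBuffer`, a check like this could replace the `!level.useOffHeap` condition in the diff above, whether we do it at this call site or push it down into `StorageUtils.dispose()`.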