Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/6423#discussion_r31362756
--- Diff: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala ---
@@ -298,11 +294,9 @@ final class ShuffleBlockFetcherIterator(
       // not exist, SPARK-4085). In that case, we should propagate the right exception so
       // the scheduler gets a FetchFailedException.
       Try(buf.createInputStream()).map { is0 =>
-        val is = blockManager.wrapForCompression(blockId, is0)
-        val iter = serializerInstance.deserializeStream(is).asKeyValueIterator
-        CompletionIterator[Any, Iterator[Any]](iter, {
-          // Once the iterator is exhausted, release the buffer and set currentResult to null
-          // so we don't release it again in cleanup.
+        // Once the single-element (is0) iterator is exhausted, release the buffer so that we
+        // don't release it again in cleanup.
+        CompletionIterator[InputStream, Iterator[InputStream]](Iterator(is0), {
--- End diff ---
Just to explore options, what if we returned `buf` (which is a
`ManagedBuffer`) instead of returning an iterator from it? This would push the
cleanup obligations to the caller, who might be in a better position to handle
them.
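
To make that concrete, here is a rough sketch of the caller-managed variant (`consume`, `CallerManagedCleanupSketch`, and the `(BlockId, Try[ManagedBuffer])` element type are illustrative assumptions, not code from this PR):

```scala
import java.io.InputStream

import scala.util.Try

import org.apache.spark.network.buffer.ManagedBuffer
import org.apache.spark.storage.BlockId

object CallerManagedCleanupSketch {
  // Hypothetical: next() hands back the raw ManagedBuffer, so the
  // cleanup obligation (release exactly once) moves to the caller.
  type BufferFetchIterator = Iterator[(BlockId, Try[ManagedBuffer])]

  def consume(it: BufferFetchIterator)(process: InputStream => Unit): Unit = {
    it.foreach { case (_, bufTry) =>
      // A Failure would still be surfaced as a FetchFailedException by
      // whoever owns the iterator; this sketch only handles success.
      bufTry.foreach { buf =>
        val in = buf.createInputStream()
        try {
          process(in) // e.g. wrap for compression, deserialize, drain records
        } finally {
          in.close()
          buf.release() // caller releases; no CompletionIterator needed
        }
      }
    }
  }
}
```

The trade-off is that every call site would have to pair `createInputStream()` with `release()` in a `finally`, whereas the CompletionIterator approach keeps that pairing in one place inside the iterator.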