advancedxy commented on a change in pull request #23638: [SPARK-26713][CORE] 
Interrupt pipe IO threads in PipedRDD when task is finished
URL: https://github.com/apache/spark/pull/23638#discussion_r250917585
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/storage/ShuffleBlockFetcherIterator.scala
 ##########
 @@ -372,7 +372,7 @@ final class ShuffleBlockFetcherIterator(
     logDebug("Got local blocks in " + Utils.getUsedTimeMs(startTime))
   }
 
 -  override def hasNext: Boolean = numBlocksProcessed < numBlocksToFetch
 +  override def hasNext: Boolean = !isZombie && (numBlocksProcessed < numBlocksToFetch)
 
 Review comment:
   > (you wrote stderr writer but I think it is typo) 
   
   Sorry for the typo.
   > If so, it stops consuming ShuffleBlockFetcherIterator. Isn't it enough to solve that?
   
   Yes, for the ShuffledRDD + PipedRDD case, the cleanup logic in PipedRDD is enough to solve the potential leak.
   However, a ShuffledRDD can be transformed by arbitrary operations, so there may be other cases in which a `ShuffleBlockFetcherIterator` has already been cleaned up but is still being consumed. That is why this change makes `ShuffleBlockFetcherIterator` defensive: once the iterator is marked a zombie by cleanup, `hasNext` returns false and any remaining consumer stops instead of touching released resources.
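  The defensive pattern discussed above can be sketched in isolation. This is a minimal illustration, not Spark's actual code; the class name `DefensiveIterator` and the `cleanup()` method are hypothetical stand-ins for `ShuffleBlockFetcherIterator` and its task-completion cleanup:

  ```scala
  // Hypothetical sketch of the zombie-flag pattern: cleanup() flips a
  // volatile flag, and hasNext short-circuits on it, so a consumer that
  // still holds the iterator after cleanup simply sees it as exhausted.
  class DefensiveIterator[T](underlying: Iterator[T]) extends Iterator[T] {
    @volatile private var isZombie = false

    /** Called by cleanup logic (e.g. a task-completion listener). */
    def cleanup(): Unit = { isZombie = true }

    override def hasNext: Boolean = !isZombie && underlying.hasNext

    override def next(): T = {
      if (!hasNext) {
        throw new NoSuchElementException("iterator cleaned up or exhausted")
      }
      underlying.next()
    }
  }
  ```

  The `@volatile` annotation matters here because, as in the PipedRDD scenario, cleanup may run on a different thread than the one consuming the iterator.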

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
