advancedxy commented on a change in pull request #23638: [SPARK-26713][CORE]
Release pipe IO threads in PipedRDD when task is finished
URL: https://github.com/apache/spark/pull/23638#discussion_r250659066
##########
File path: core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala
##########
@@ -156,7 +157,34 @@ private[spark] class PipedRDD[T: ClassTag](
out.close()
}
}
- }.start()
+ }
+ stdinWriterThread.start()
+
+ def cleanUpIOThreads(): Unit = {
+ if (proc.isAlive) {
+ proc.destroy()
+ }
+ if (stdinWriterThread.isAlive) {
+ stdinWriterThread.stop()
+ }
+
+ if (stderrReaderThread.isAlive) {
+ stderrReaderThread.stop()
+ }
+ }
+
+  // Stops the stdin writer and stderr reader threads when the corresponding task
+  // finishes, as a safety belt. Otherwise, these threads could outlive the task's
+  // lifetime. For example:
+  //   val pipeRDD = sc.range(1, 100).pipe(Seq("cat"))
+  //   val abnormalRDD = pipeRDD.mapPartitions(_ => Iterator.empty)
+  // the iterator generated by PipedRDD is never consumed. If the parent RDD's
+  // iterator is time-consuming to generate (ShuffledRDD's shuffle operation, for
+  // example), the outlived stdin writer thread will consume significant memory and
+  // CPU time. There is also a race condition for ShuffledRDD + PipedRDD when the
+  // subprocess command fails: the task is marked as failed and the
+  // ShuffleBlockFetcherIterator is cleaned up at task completion, which may hang
+  // the ShuffleBlockFetcherIterator.next() call. The failed task's stdin writer
+  // never exits and leaks significant memory held in the ShuffleReader.
+ context.addTaskCompletionListener[Unit](_ => cleanUpIOThreads())
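The pattern above registers a task-completion callback that tears down helper IO threads so they cannot outlive the task. A minimal, self-contained sketch of the same idea, using a hypothetical `ToyTaskContext` stand-in for Spark's `TaskContext` (the real class and its listener signature live in Spark; everything here is illustrative only), and a cooperative stop flag plus `interrupt()` instead of the deprecated `Thread.stop()`:

```scala
import java.util.concurrent.atomic.AtomicBoolean

// Hypothetical stand-in for TaskContext.addTaskCompletionListener:
// collects callbacks and fires them when the "task" completes.
class ToyTaskContext {
  private var listeners = List.empty[() => Unit]
  def addTaskCompletionListener(f: () => Unit): Unit = listeners ::= f
  def markTaskCompleted(): Unit = listeners.foreach(_.apply())
}

object CleanupSketch {
  def main(args: Array[String]): Unit = {
    val ctx = new ToyTaskContext
    val keepRunning = new AtomicBoolean(true)

    // Stand-in for the stdin writer thread: loops until flagged or interrupted.
    val writerThread = new Thread {
      override def run(): Unit = {
        while (keepRunning.get()) {
          try Thread.sleep(10)
          catch { case _: InterruptedException => return }
        }
      }
    }
    writerThread.start()

    // The safety belt: stop lingering IO helpers when the task finishes.
    ctx.addTaskCompletionListener { () =>
      if (writerThread.isAlive) {
        keepRunning.set(false)     // cooperative shutdown flag
        writerThread.interrupt()   // wake the thread if it is blocked
      }
    }

    ctx.markTaskCompleted()
    writerThread.join(1000)
    assert(!writerThread.isAlive, "writer thread should have exited")
    println("writer thread cleaned up")
  }
}
```

The cooperative flag plus interrupt avoids `Thread.stop()`, which is deprecated because it releases monitors mid-operation and can leave shared state inconsistent.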
Review comment:
Yes, will fix that.