[ https://issues.apache.org/jira/browse/SPARK-35027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372425#comment-17372425 ]
Jack Hu edited comment on SPARK-35027 at 7/1/21, 6:39 AM:
----------------------------------------------------------
Of course, the "stop" in FileAppender does nothing but set a flag.
The exception will be thrown in
"[appendStreamToFile|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala#L59]",
but the cloure in finally only closes the output stream (to file), but leave
the "inputStream" open., which is the pipe's output stream.
> Close the inputStream in FileAppender when writing the logs fails
> -------------------------------------------------------------------
>
> Key: SPARK-35027
> URL: https://issues.apache.org/jira/browse/SPARK-35027
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 3.1.1
> Reporter: Jack Hu
> Priority: Major
>
> In Spark cluster mode, the ExecutorRunner uses FileAppender to redirect the
> stdout/stderr of executors to files. When writing fails for some reason
> (e.g. the disk is full), the FileAppender only closes the output stream to
> the file but leaves the pipe's stdout/stderr open, so subsequent write
> operations on the executor side may hang.
> Do we need to close the inputStream in FileAppender?
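>
> A possible shape for the fix (a sketch only, mirroring the simplified loop
> above rather than the actual patch) would be to close the inputStream as
> well once appending has failed, e.g. by replacing appendStreamToFile in the
> sketch with:
> {code:scala}
> protected def appendStreamToFile(): Unit = {
>   var outputStream: OutputStream = null
>   try {
>     outputStream = new FileOutputStream(file, true)
>     val buf = new Array[Byte](bufferSize)
>     var n = 0
>     while (!markedForStop && n != -1) {
>       n = inputStream.read(buf)
>       if (n > 0) outputStream.write(buf, 0, n)
>     }
>   } catch {
>     case e: java.io.IOException =>
>       // Suggested change (sketch): when writing fails, close the pipe's
>       // read side too, so the executor's stdout/stderr writes fail fast
>       // instead of hanging once the pipe buffer fills up.
>       inputStream.close()
>       throw e
>   } finally {
>     if (outputStream != null) outputStream.close()
>   }
> }
> {code}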