[ https://issues.apache.org/jira/browse/SPARK-22255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16200759#comment-16200759 ]

Sean Owen commented on SPARK-22255:
-----------------------------------

The purpose of the class is to transfer all the bytes from an InputStream to a 
file/stream. It does so until the input stream closes, and to do that it has to 
wait, at times, for input. I don't see how you could otherwise achieve this 
except perhaps by polling, which is just another bad way of blocking.
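The transfer loop described here can be sketched in plain Java. This is an illustrative sketch, not Spark's actual FileAppender code; the class and method names are made up:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class CopyLoop {
    // Transfer every byte from in to out. read() blocks until input
    // arrives; the loop ends only when the stream closes (read returns -1).
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) { // blocks here while waiting for input
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }
}
```

The blocking read is what makes the loop correct: there is no way to learn that the producer has finished except by reading until end-of-stream.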

An IOException happens if the input is closed while the thread is blocked in a 
read, yes, but that's normal. You'll see the exception is swallowed as expected 
if stop() has been called on the appender, because it expects the input to close.
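The shutdown handling described above can be sketched as follows. The appender thread sits in a blocking read; stop() sets a flag and closes the input, which makes the blocked read throw an IOException, and because the flag is set the exception is swallowed as part of normal shutdown. All names here (SimpleAppender, markedForStop) are illustrative, not Spark's actual API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.ByteArrayOutputStream;

public class SimpleAppender {
    private volatile boolean markedForStop = false;
    private final InputStream in;
    private final OutputStream out;

    SimpleAppender(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    // Copy bytes until the input closes; normally runs on its own thread.
    void appendAll() {
        byte[] buf = new byte[8192];
        try {
            int n;
            while ((n = in.read(buf)) != -1) { // blocks waiting for input
                out.write(buf, 0, n);
            }
        } catch (IOException e) {
            // If stop() was called, the input was closed deliberately and the
            // exception is an expected part of shutdown; otherwise propagate.
            if (!markedForStop) {
                throw new RuntimeException(e);
            }
        }
    }

    void stop() throws IOException {
        markedForStop = true;
        in.close(); // unblocks a pending read with an IOException
    }
}
```

The volatile flag must be set before closing the stream so the reading thread observes it when the read fails.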

> SPARK-22255 FileAppender InputStream Read timeout and blocking state
> --------------------------------------------------------------------
>
>                 Key: SPARK-22255
>                 URL: https://issues.apache.org/jira/browse/SPARK-22255
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: Mariusz Galus
>            Priority: Minor
>
> The FileAppender logic blocks when reading from the InputStream. This can be 
> avoided with an InputStream.available() check prior to reading. If this is 
> done, a variable holding the estimated available byte count needs to be 
> instantiated for use in two conditionals: the conditional for reading from 
> the InputStream and the conditional for appending to the file.
> See: 
> https://github.com/Galus/spark/pull/1/commits/8ee5133c40e3f627ed0ebfb3aa63d5749b5bfdcb
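The available()-based variant the report proposes can be sketched as below. Note that available() == 0 cannot distinguish "no data yet" from end-of-stream, so the loop must eventually either issue a blocking read or sleep and retry, which is the polling objection raised in the comment above. This is a hedged sketch, not the linked patch; all names are illustrative:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class PollingCopy {
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        while (true) {
            // Estimate of bytes readable without blocking; 0 means either
            // "nothing buffered yet" or "stream is finished" -- we can't tell.
            int estimated = in.available();
            if (estimated > 0) {
                int n = in.read(buf, 0, Math.min(estimated, buf.length));
                if (n == -1) break;
                out.write(buf, 0, n);
                total += n;
            } else {
                // To detect end-of-stream we still have to read, and this
                // single-byte probe can block just like the original loop
                // (a real poller would sleep and retry here instead).
                int probe = in.read();
                if (probe == -1) break;
                out.write(probe);
                total += 1;
            }
        }
        return total;
    }
}
```

Also note that the javadoc for available() describes its return value as an estimate, so it is a hint, not a guarantee, of how much a subsequent read can consume without blocking.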



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
