[ https://issues.apache.org/jira/browse/SPARK-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566222#comment-14566222 ]

Tathagata Das commented on SPARK-7942:
--------------------------------------

That is a very good idea. In fact, please update the JIRA title to describe that 
feature: if receivers were started and all of them have since shut down, then 
stop the StreamingContext and throw an error so that ssc.awaitTermination 
exits. This would be a good feature to add. 
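Until such a feature lands in Spark itself, the behavior described above can be 
approximated at the application level with a StreamingListener. The sketch below 
assumes Spark Streaming's public listener API (onReceiverStarted / 
onReceiverStopped); the class name and the stop-in-a-background-thread choice are 
mine, not part of any proposed patch:

```scala
import java.util.concurrent.atomic.AtomicInteger

import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.scheduler.{StreamingListener,
  StreamingListenerReceiverStarted, StreamingListenerReceiverStopped}

// Hypothetical user-level workaround: stop the StreamingContext once every
// receiver that was started has stopped, so ssc.awaitTermination() returns.
class StopWhenAllReceiversDown(ssc: StreamingContext) extends StreamingListener {
  private val activeReceivers = new AtomicInteger(0)

  override def onReceiverStarted(
      started: StreamingListenerReceiverStarted): Unit = {
    activeReceivers.incrementAndGet()
  }

  override def onReceiverStopped(
      stopped: StreamingListenerReceiverStopped): Unit = {
    if (activeReceivers.decrementAndGet() == 0) {
      // Stop from a separate thread: calling ssc.stop() on the listener-bus
      // thread itself risks deadlocking the shutdown.
      new Thread(new Runnable {
        override def run(): Unit =
          ssc.stop(stopSparkContext = false, stopGracefully = false)
      }).start()
    }
  }
}

// Registration, before ssc.start():
//   ssc.addStreamingListener(new StopWhenAllReceiversDown(ssc))
```

This only mirrors the proposed behavior (context shutdown); making 
ssc.awaitTermination also surface an error would still require a change inside 
Spark.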

> Receiver's life cycle is inconsistent with streaming job.
> ---------------------------------------------------------
>
>                 Key: SPARK-7942
>                 URL: https://issues.apache.org/jira/browse/SPARK-7942
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.4.0
>            Reporter: SaintBacchus
>
> Streaming treats the receiver as a common Spark job, so if an error occurs in 
> the receiver's logic (after 4 retries by default), streaming will no longer 
> receive any data, yet the streaming job keeps running. 
> A typical scenario: we set `spark.streaming.receiver.writeAheadLog.enable` to 
> true in order to use the `ReliableKafkaReceiver`, but do not set the 
> checkpoint dir. The receiver is then soon shut down, while the streaming 
> application stays alive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
