[ 
https://issues.apache.org/jira/browse/SPARK-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980595#comment-13980595
 ] 

Tathagata Das edited comment on SPARK-1340 at 4/25/14 1:34 AM:
---------------------------------------------------------------

I haven't explicitly tested this, but this should be fixed after the whole 
refactoring of the receiver API done in https://github.com/apache/spark/pull/300

To elaborate further, the new refactored receiver ensures that the task that 
launches the receiver does not complete until the receiver is explicitly shut 
down. So if the receiver fails with an exception, it should get relaunched. 
Well, ideally. This still needs to be tested.
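The supervision pattern described above can be sketched as follows. This is a minimal, self-contained illustration of the idea, not Spark's actual receiver API: `ReceiverSupervisorSketch`, `runUntilShutdown`, and the `restarts` counter are all hypothetical names invented for this example. The point is simply that the launching task blocks in a loop until an explicit shutdown, and relaunches the receiver body whenever it throws.

```scala
// Hypothetical sketch of the relaunch-until-shutdown pattern
// (not Spark's real receiver API).
object ReceiverSupervisorSketch {
  @volatile private var stopped = false
  var restarts = 0 // counts how many times the body was relaunched

  // Runs `receiverBody` repeatedly; this call only returns once
  // shutdown() has been invoked. An exception in the body does not
  // end the task -- the body is simply launched again.
  def runUntilShutdown(receiverBody: () => Unit): Unit = {
    while (!stopped) {
      try receiverBody()
      catch {
        case _: Exception =>
          restarts += 1 // receiver failed; relaunch on next iteration
      }
    }
  }

  def shutdown(): Unit = { stopped = true }
}
```

For example, a body that fails twice before calling `shutdown()` would be launched three times in total, with two relaunches recorded.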



was (Author: tdas):
I haven't explicitly tested this, but this should be fixed after the whole 
refactoring of the receiver API done in https://github.com/apache/spark/pull/300


> Some Spark Streaming receivers are not restarted when worker fails
> ------------------------------------------------------------------
>
>                 Key: SPARK-1340
>                 URL: https://issues.apache.org/jira/browse/SPARK-1340
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 0.9.0
>            Reporter: Tathagata Das
>            Assignee: Tathagata Das
>            Priority: Critical
>
> For some streams like the Kafka stream, the receivers do not get restarted 
> if the worker running the receiver fails. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
