[ https://issues.apache.org/jira/browse/SPARK-20251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962318#comment-15962318 ]

Nan Zhu edited comment on SPARK-20251 at 4/10/17 12:16 AM:
-----------------------------------------------------------

more details here: it is expected that the compute() method for the next batch 
is executed before the app shuts down; however, the app should eventually shut 
down, since we have signalled the awaiting condition set in 
awaitTermination()....

however, this "eventual shutdown" did not happen... (this issue did not 
happen consistently)
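For context, awaitTermination() blocks on a lock/condition pair that the stop path signals. A minimal sketch of that waiter pattern (illustrative names only, not the actual Spark ContextWaiter internals):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the wait/signal mechanism behind awaitTermination().
// Class and method names here are hypothetical, for illustration.
class StopWaiter {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition stoppedCond = lock.newCondition();
    private boolean stopped = false;

    // Called by the thread that stops the streaming context.
    void notifyStop() {
        lock.lock();
        try {
            stopped = true;
            stoppedCond.signalAll(); // wake every thread blocked in waitForStop()
        } finally {
            lock.unlock();
        }
    }

    // Called by awaitTermination(); blocks until notifyStop() has run.
    void waitForStop() throws InterruptedException {
        lock.lock();
        try {
            // Loop guards against spurious wakeups; if the signal arrived
            // before we started waiting, `stopped` is already true and we
            // return immediately instead of blocking forever.
            while (!stopped) {
                stoppedCond.await();
            }
        } finally {
            lock.unlock();
        }
    }
}
```

If the flag check and the wait were not done under the same lock as the signal, a signal delivered just before the waiter blocks would be lost, and the "eventual shutdown" would never occur, which is the kind of hang being described here.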


was (Author: codingcat):
more details here: by "be proceeding", I mean it is expected that the compute() 
method for the next batch is executed before the app shuts down; however, the 
app should eventually shut down, since we have signalled the awaiting 
condition set in awaitTermination()....

however, this "eventual shutdown" did not happen... (this issue did not 
happen consistently)

> Spark streaming skips batches in a case of failure
> --------------------------------------------------
>
>                 Key: SPARK-20251
>                 URL: https://issues.apache.org/jira/browse/SPARK-20251
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Roman Studenikin
>
> We are experiencing strange behaviour in a Spark streaming application. 
> Sometimes it just skips a batch in the case of a job failure and starts 
> working on the next one.
> We expect it to attempt to reprocess the batch, not to skip it. Is this a 
> bug, or are we missing important configuration params?
> Screenshots from spark UI:
> http://pasteboard.co/1oRW0GDUX.png
> http://pasteboard.co/1oSjdFpbc.png



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
