Github user sardetushar commented on the issue:

    https://github.com/apache/spark/pull/5008
  
    Hi, I managed to solve this issue and can now recover all the data. But as
discussed in this thread, [Data Duplicate issue](https://www.mail-archive.com/user@spark.apache.org/msg52687.html),
    how do I avoid duplicate messages with Spark Streaming using checkpoints
after a restart following a failure?
    For example, if I published 500 records and Spark had processed 300 when I
killed the driver and restarted it, the output directory shows a count of 800
processed messages. This means Spark is processing the same 300 records
again.
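
    For anyone hitting the same problem, below is a minimal sketch of the
idempotent-output pattern from the Spark Streaming fault-tolerance docs:
key each batch's output path by its batch time and skip the write if that
path already exists, so a batch replayed from the checkpoint after a driver
restart is not written a second time. The checkpoint directory, output path,
host, and port are placeholders.

    ```scala
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object CheckpointedApp {
      val checkpointDir = "hdfs:///tmp/checkpoint"  // placeholder path

      def createContext(): StreamingContext = {
        val conf = new SparkConf().setAppName("CheckpointedApp")
        val ssc = new StreamingContext(conf, Seconds(10))
        ssc.checkpoint(checkpointDir)

        // Placeholder source; substitute your receiver/Kafka stream.
        val lines = ssc.socketTextStream("localhost", 9999)

        lines.foreachRDD { (rdd, time) =>
          // Deterministic, batch-time-keyed output path.
          val out = s"hdfs:///tmp/output/batch-${time.milliseconds}"
          val fs = FileSystem.get(rdd.sparkContext.hadoopConfiguration)
          // If this batch was already written before the crash, skip it
          // instead of producing a duplicate copy of its records.
          if (!fs.exists(new Path(out))) {
            rdd.saveAsTextFile(out)
          }
        }
        ssc
      }

      def main(args: Array[String]): Unit = {
        // getOrCreate rebuilds the context (and replays pending batches)
        // from the checkpoint after a failure; createContext runs only on
        // a clean first start.
        val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext())
        ssc.start()
        ssc.awaitTermination()
      }
    }
    ```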


