GitHub user tdas opened a pull request:

    https://github.com/apache/spark/pull/2882

    [SPARK-4026][Streaming] synchronously write received data to HDFS and 
recover on driver failure

    As part of the effort to avoid data loss on Spark Streaming driver failure, 
we want to implement a write-ahead log that writes received data to HDFS. 
This allows the received data to persist across driver failures, so that when 
the streaming driver is restarted, it can find and reprocess all the data that 
was received but not yet processed.
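    
    The core idea is a writer that length-prefixes each received block, appends 
it to a log file on a Hadoop-compatible file system, and flushes before 
returning, so the block survives a driver crash. The sketch below is only an 
illustration of that idea, not the code in this patch; the class and method 
names are hypothetical.
    
    import java.nio.ByteBuffer
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    
    // Hypothetical sketch: synchronously append received blocks to a log
    // file on HDFS so they can be re-read after a driver restart.
    class SimpleWriteAheadLogWriter(logPath: String, conf: Configuration) {
      private val path = new Path(logPath)
      private val fs = FileSystem.get(path.toUri, conf)
      private val stream = fs.create(path, false)
    
      // Write one record and return its (offset, length) so a recovery
      // step can locate and re-read it later.
      def write(record: ByteBuffer): (Long, Int) = synchronized {
        val offset = stream.getPos
        val bytes = new Array[Byte](record.remaining())
        record.get(bytes)
        stream.writeInt(bytes.length)   // length-prefix each record
        stream.write(bytes)
        stream.hflush()                 // force data out before acknowledging
        (offset, bytes.length)
      }
    
      def close(): Unit = synchronized { stream.close() }
    }
    
    On recovery, a reader would scan the log, reading each length prefix and 
payload in turn, and hand the blocks back to the streaming engine for 
reprocessing.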
    
    This was primarily implemented by @harishreedharan. This is still a WIP, as 
he is going to improve the unit tests by using an HDFS mini-cluster.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/tdas/spark driver-ha-wal

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/2882.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2882
    
----
commit 172358de10a61f296e52fa347c2e40aa87490ecf
Author: Tathagata Das <[email protected]>
Date:   2014-10-21T02:52:55Z

    Pulled WriteAheadLog-related stuff from tdas/spark/tree/driver-ha-working

commit 5182ffb3053a143f221f1e56ed21e2461b4d9e4f
Author: Hari Shreedharan <[email protected]>
Date:   2014-10-21T19:59:38Z

    Added documentation

----

