GitHub user Myasuka opened a pull request:
https://github.com/apache/spark/pull/18452
Support WAL recovery on Windows
## What changes were proposed in this pull request?
When the driver fails over, it reads the WAL from HDFS by calling
WriteAheadLogBackedBlockRDD.getBlockFromWriteAheadLog(). That method requires a
dummy local path as a parameter, but on Windows that path contains a drive
letter and colon, which is not a valid Hadoop path. This patch removes the
potential drive letter and colon.
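A minimal sketch of the idea (not the actual diff from this PR): strip a
leading Windows drive letter and colon from the dummy local path before
handing it to Hadoop, since the PR notes such a path "is not valid for
hadoop". The helper name and regex below are illustrative assumptions.

```scala
object StripDrivePrefix {
  // Illustrative helper (not from the PR): removes a leading drive letter
  // and colon such as "C:" and normalizes backslashes, so the result no
  // longer contains a colon that Hadoop path parsing would reject.
  def stripDriveLetter(path: String): String =
    path.replaceFirst("^[A-Za-z]:", "").replace('\\', '/')

  def main(args: Array[String]): Unit = {
    println(stripDriveLetter("C:\\Users\\spark\\tmp\\dummy")) // /Users/spark/tmp/dummy
    println(stripDriveLetter("/tmp/dummy"))                   // unchanged: /tmp/dummy
  }
}
```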
An earlier thread on the spark-user mailing list discussed [this
bug](https://www.mail-archive.com/[email protected]/msg55030.html).
## How was this patch tested?
Without this fix, WAL recovery does not take effect after a driver failover on
YARN. With this fix applied on a Windows YARN cluster, the WAL recovery
mechanism works as expected.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/Myasuka/spark patch-1
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/18452.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #18452
----
commit 5b48e534928a1dc4d126aa2e46fb8d979a471b98
Author: Yun Tang <[email protected]>
Date: 2017-06-28T09:17:44Z
Support WAL recovery on Windows
When the driver fails over, it reads the WAL from HDFS by calling
WriteAheadLogBackedBlockRDD.getBlockFromWriteAheadLog(). That method
requires a dummy local path, but on Windows that path contains a colon,
which is not valid for Hadoop.
----