GitHub user mikhaildubkov opened a pull request:
https://github.com/apache/spark/pull/12678
[SPARK-14908] [YARN] Provide support HDFS-located resources for spark…
The main goal of these changes is to support HDFS-located
resources in "spark.executor.extraClassPath" when running on Hadoop/YARN
deployments.
This can be helpful when you want to use a custom SparkSerializer
implementation (as in our project's case).
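For illustration, the configuration would look something like the fragment below; the host, port, and jar paths are hypothetical examples, and the comma-separated form follows the splitting behavior described in the steps further down:

```
# spark-defaults.conf (paths below are hypothetical examples)
spark.executor.extraClassPath  hdfs://namenode:8020/libs/custom-serializer.jar,hdfs://namenode:8020/libs/other-dep.jar
```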
How it works with these changes:
1. The value of "spark.executor.extraClassPath" is split by comma
2. Iterate over all paths and filter those starting with "hdfs://"
3. Generate a link for each path and add a LocalResource to the executor
launch context's local resources
4. Add the generated links to the executor CLASSPATH
5. The NodeManager downloads the specified local resources to the
application cache
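Steps 1 and 2 above can be sketched roughly as follows. This is a minimal illustration in Scala, not the patch's actual code; the object and method names are invented for the example:

```scala
// Sketch of steps 1-2: split the configured extraClassPath value by comma
// and keep only HDFS-located entries. Names here are illustrative only.
object ExtraClassPathFilter {
  def hdfsEntries(extraClassPath: String): Seq[String] =
    extraClassPath
      .split(",")                       // step 1: split by comma
      .map(_.trim)
      .filter(_.startsWith("hdfs://"))  // step 2: keep hdfs:// paths
      .toSeq

  def main(args: Array[String]): Unit = {
    val cp = "hdfs://namenode:8020/libs/serializer.jar,/opt/spark/local-lib.jar"
    println(hdfsEntries(cp).mkString(";"))
  }
}
```

Each surviving entry would then be registered as a LocalResource (steps 3-4), after which YARN's NodeManager localizes it into the application cache (step 5).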
After that, you no longer need to deploy the extra resources to each Hadoop
node manually; it happens automatically.
The changes are fully backward compatible and do not break any existing
"spark.executor.extraClassPath" usage.
This patch was tested manually on our 4-node Hadoop cluster.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mikhaildubkov/spark master
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/12678.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #12678
----
commit a4f1c10a3f0f10b9f18ca61e599f50a1e17ba8bd
Author: Mikhail Dubkov <[email protected]>
Date: 2016-04-26T00:23:42Z
[SPARK-14908] [YARN] Provide support HDFS-located resources for
spark.executor.extraClasspath on YARN
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]