Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/10699#discussion_r49344162
--- Diff: docs/running-on-yarn.md ---
@@ -260,10 +260,10 @@ If you need a reference to the proper location to put log files in the YARN so t
   <td>(none)</td>
   <td>
     A comma-separated list of secure HDFS namenodes your Spark application is going to access. For
-    example, <code>spark.yarn.access.namenodes=hdfs://nn1.com:8032,hdfs://nn2.com:8032</code>.
-    The Spark application must have access to the namenodes listed and Kerberos must
-    be properly configured to be able to access them (either in the same realm or in
-    a trusted realm). Spark acquires security tokens for each of the namenodes so that
+    example, <code>spark.yarn.access.namenodes=hdfs://nn1.com:8032,hdfs://nn2.com:8032,
+    webhdfs://nn3.com:50070</code>. The Spark application must have access to the namenodes listed
--- End diff --
It's fine to mix schemes; it really just depends on what your access pattern
is and where the cluster is located (regionally). Generally we recommend
webhdfs for reading only, and only when needed (cross-colo access or a version
incompatibility). If your Spark application is reading or writing from two
different clusters it's fine to mix: one might be in the same colo and the
other cross-colo.
I was just hoping to point out that it can be used with webhdfs as well and
leave it up to the user to know what webhdfs is and when to use it based on
their deployment. If you think it would be clearer to break it out into a
separate sentence, I'm fine with that.
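For concreteness, here is a minimal Scala sketch of a mixed-scheme setting. The hostnames and ports are taken straight from the doc example; the app name is made up. The same value can also be passed at submit time with --conf spark.yarn.access.namenodes=...

    import org.apache.spark.SparkConf

    // Sketch only: nn1/nn2/nn3 come from the doc example above and the app
    // name is hypothetical. Listing both hdfs:// and webhdfs:// URIs means
    // Spark will request delegation tokens from each listed namenode.
    val conf = new SparkConf()
      .setAppName("cross-cluster-example")
      .set("spark.yarn.access.namenodes",
        "hdfs://nn1.com:8032,hdfs://nn2.com:8032,webhdfs://nn3.com:50070")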