Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/16432#discussion_r94972291
--- Diff: docs/running-on-yarn.md ---
@@ -479,12 +479,12 @@ Hadoop services issue *hadoop tokens* to grant access
to the services and data.
Clients must first acquire tokens for the services they will access and pass them along with their application as it is launched in the YARN cluster.
-For a Spark application to interact with HDFS, HBase and Hive, it must acquire the relevant tokens
+For a Spark application to interact with Hadoop filesystem, HBase and Hive, it must acquire the relevant tokens
using the Kerberos credentials of the user launching the application, that is, the principal whose identity will become that of the launched Spark application.
This is normally done at launch time: in a secure cluster Spark will automatically obtain a
-token for the cluster's HDFS filesystem, and potentially for HBase and Hive.
+token for the cluster's Hadoop filesystem, and potentially for HBase and Hive.
--- End diff --
Perhaps say "the default Hadoop filesystem".
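For context, the launch-time token acquisition the doc text describes happens transparently when submitting a kerberized application. A minimal sketch of such a submission follows; the principal, keytab path, class, and jar names are placeholders invented for illustration, not values from this PR or the Spark docs:

```shell
# Sketch: launching a Spark application on a secure YARN cluster.
# All names and paths below are hypothetical placeholders.
# With --principal/--keytab, Spark logs in via Kerberos and then
# automatically obtains a delegation token for the default Hadoop
# filesystem (and, where configured, for Hive and HBase) at launch time.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --principal alice@EXAMPLE.COM \
  --keytab /path/to/alice.keytab \
  --class org.example.TokenDemo \
  token-demo.jar
```

Without a keytab, a ticket obtained via `kinit` in the client's credential cache serves the same purpose for the initial token acquisition; the keytab additionally lets long-running applications re-acquire tokens as they expire.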