Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/11033#discussion_r57308003
--- Diff: docs/running-on-yarn.md ---
@@ -452,3 +452,104 @@ If you need a reference to the proper location to put log files in the YARN so t
- In `cluster` mode, the local directories used by the Spark executors and the Spark driver will be the local directories configured for YARN (Hadoop YARN config `yarn.nodemanager.local-dirs`). If the user specifies `spark.local.dir`, it will be ignored. In `client` mode, the Spark executors will use the local directories configured for YARN while the Spark driver will use those defined in `spark.local.dir`. This is because the Spark driver does not run on the YARN cluster in `client` mode, only the Spark executors do.
- The `--files` and `--archives` options support specifying file names with the `#` symbol, similar to Hadoop. For example, you can specify `--files localtest.txt#appSees.txt`: this uploads the file you have locally named `localtest.txt` into HDFS, but links it under the name `appSees.txt`, so your application should use the name `appSees.txt` to reference it when running on YARN (see the sketch after this list).
- The `--jars` option allows the `SparkContext.addJar` function to work if you are using it with local files and running in `cluster` mode. It does not need to be used if you are using it with HDFS, HTTP, HTTPS, or FTP files.
+
+# Running in a Secure Cluster
+
+As covered in [security](security.html), Kerberos is used in a secure Hadoop cluster to authenticate principals associated with services and clients. This allows clients to make requests of these authenticated services, and allows the services to grant rights to the authenticated principals.
+
+Hadoop services issue *hadoop tokens* to grant access to the services and data, tokens which the client must supply over Hadoop IPC and REST/Web APIs as proof of access rights.
+For Spark applications launched in a YARN cluster to interact with HDFS, HBase and Hive, the application must acquire the relevant tokens using the Kerberos credentials of the user launching the application, that is, the principal whose identity will become that of the launched Spark application.
+
+This is normally done at launch time: in a secure cluster Spark will automatically obtain a token for the cluster's HDFS filesystem, and potentially for HBase and Hive.
+
+An HBase token will be obtained if HBase is on the classpath, the HBase configuration declares the application is secure (i.e. `hbase.security.authentication==kerberos`), and `spark.yarn.security.tokens.hbase.enabled` is not set to `false`.
+
+Similarly, a Hive token will be obtained if Hive is on the classpath, its configuration includes a URI of the metadata store in `hive.metastore.uris`, and `spark.yarn.security.tokens.hive.enabled` is not set to `false`.
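For example, an application that needs neither HBase nor Hive tokens can switch off their acquisition with the properties above. A minimal sketch, where the application jar and class (`my-app.jar`, `org.example.MyApp`) are placeholders:

```
# Illustrative sketch: disable HBase and Hive token acquisition at launch time.
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.security.tokens.hbase.enabled=false \
  --conf spark.yarn.security.tokens.hive.enabled=false \
  --class org.example.MyApp \
  my-app.jar
```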
+
+If an application needs to interact with other secure HDFS clusters, then the tokens needed to access these clusters must be explicitly requested at launch time. This is done by listing them in the `spark.yarn.access.namenodes` property.
+
+```
+spark.yarn.access.namenodes hdfs://ireland.example.org:8020/,hdfs://frankfurt.example.org:8020/
+```
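The property can equally be supplied on the command line at launch time; a sketch, again with placeholder jar and class names:

```
# Illustrative sketch: request tokens for two additional HDFS clusters.
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.access.namenodes=hdfs://ireland.example.org:8020/,hdfs://frankfurt.example.org:8020/ \
  --class org.example.MyApp \
  my-app.jar
```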
+
+Hadoop tokens expire. They can be renewed "for a while".
--- End diff ---
...I'll leave out the renew part.