Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/7463#discussion_r35219156
--- Diff: docs/running-on-yarn.md ---
@@ -68,9 +68,9 @@ In YARN terminology, executors and application masters run inside "containers".
yarn logs -applicationId <app ID>
-will print out the contents of all log files from all containers from the
-given application. You can also view the container log files directly in HDFS
-using the HDFS shell or API. The directory where they are located can be found
-by looking at your YARN configs (`yarn.nodemanager.remote-app-log-dir` and
-`yarn.nodemanager.remote-app-log-dir-suffix`).
+will print out the contents of all log files from all containers from the
+given application. You can also view the container log files directly in HDFS
+using the HDFS shell or API. The directory where they are located can be found
+by looking at your YARN configs (`yarn.nodemanager.remote-app-log-dir` and
+`yarn.nodemanager.remote-app-log-dir-suffix`). The logs are also available on
+the Spark Web UI under the Executors Tab. You need have both the Spark history
+server and the MapReduce history server running and configure
+`yarn.log.server.url` in `yarn-site.xml` properly. The log URL on the Spark
+history server UI will redirect you to the MapReduce history server to show the
+aggregated logs.
--- End diff ---
can you change "You need have" to "You need to have"?
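
(For reference, a minimal sketch of the `yarn-site.xml` entries the new paragraph assumes. This is illustrative only, not part of the PR: `history-server-host` is a placeholder, and the other values shown are the usual Hadoop defaults.)

    <!-- Sketch of yarn-site.xml entries assumed by the doc text above. -->
    <!-- Enable aggregation of container logs into HDFS. -->
    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>
    <!-- HDFS directory where aggregated logs are written (Hadoop default shown). -->
    <property>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/tmp/logs</value>
    </property>
    <!-- URL that log links redirect to once an application has finished; -->
    <!-- "history-server-host" is a placeholder for the MapReduce history server host. -->
    <property>
      <name>yarn.log.server.url</name>
      <value>http://history-server-host:19888/jobhistory/logs</value>
    </property>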