Github user attilapiros commented on a diff in the pull request:
https://github.com/apache/spark/pull/21635#discussion_r202035811
--- Diff: docs/monitoring.md ---
@@ -435,6 +435,7 @@ set of sinks to which metrics are reported. The following instances are currently supported:
* `executor`: A Spark executor.
* `driver`: The Spark driver process (the process in which your
SparkContext is created).
* `shuffleService`: The Spark shuffle service.
+* `yarn`: Spark resource allocations on YARN.
--- End diff ---
Sure, we can do that. After this many changes I would like to test it on a cluster again.
I will come back with the results soon.
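For context, the instances listed in the diff (including the new `yarn` instance) are wired to sinks via `conf/metrics.properties`. A minimal sketch, assuming the standard `ConsoleSink` shipped with Spark and illustrative polling settings:

```properties
# Report metrics from all instances (driver, executor, shuffleService, yarn)
# to the console sink. The "*" prefix applies to every instance; replace it
# with a specific instance name (e.g. "yarn") to scope the sink.
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink

# Polling period for the console sink (illustrative values).
*.sink.console.period=10
*.sink.console.unit=seconds
```

A per-instance line such as `yarn.sink.console.period=10` would override the wildcard setting for the `yarn` instance only.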