GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/18935
[SPARK-9104][CORE] Expose Netty memory metrics in Spark
## What changes were proposed in this pull request?
This PR exposes Netty memory usage for Spark's `TransportClientFactory` and
`TransportServer`, including detailed metrics for each direct and heap arena
as well as aggregated metrics. The purpose of adding the Netty metrics is to
better understand Netty's memory usage in Spark shuffle, RPC, and other
network communication, and to help guide the configuration of executor
memory sizes.
This PR doesn't expose these metrics to any sink; to leverage this feature,
one still needs to either connect them to the MetricsSystem or collect them
back to the driver for display.
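As a rough illustration of the aggregation described above, the sketch below rolls per-arena used-memory figures up into a single total, the way per-arena Netty pool metrics could be summarized for reporting. The class and method names here are hypothetical, for illustration only, and are not Spark's or Netty's actual API:

```java
import java.util.List;

// Hypothetical sketch: aggregate per-arena used-memory values into one total,
// analogous to summarizing Netty's per-arena pool metrics. Names are
// illustrative, not taken from Spark or Netty.
public class ArenaMetricsAggregator {

    // Each entry models the bytes currently used by one pooled arena
    // (direct or heap).
    public static long totalUsedBytes(List<Long> perArenaUsedBytes) {
        long total = 0L;
        for (long used : perArenaUsedBytes) {
            total += used;
        }
        return total;
    }

    public static void main(String[] args) {
        // e.g. three direct arenas using 1 MiB, 2 MiB, and 512 KiB
        List<Long> arenas = List.of(1L << 20, 2L << 20, 512L * 1024L);
        System.out.println(totalUsedBytes(arenas)); // prints 3670016
    }
}
```

In the real change, each arena would additionally break down into finer-grained figures (e.g. allocations by size class), with the same roll-up applied per allocator.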
## How was this patch tested?
Added unit tests to verify it; also verified manually on a real cluster.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jerryshao/apache-spark SPARK-9104
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/18935.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #18935
----
commit 05c1f4de4f00639d5f1acf1b9c061e4894d8286d
Author: jerryshao <[email protected]>
Date: 2017-08-14T07:16:04Z
Expose Netty memory metrics in Spark
Change-Id: I006c464674180961b12a5a86b208f113b193ff0d
----