[
https://issues.apache.org/jira/browse/SPARK-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Masayoshi TSUZUKI closed SPARK-4634.
------------------------------------
Resolution: Not a Problem
GraphiteSink already has the "prefix" option, and it works fine.
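For reference, the prefix can be set per sink in conf/metrics.properties. A minimal sketch, assuming a Graphite server reachable at graphite.example.com:2003 (the host, port, and the "cluster_1" prefix are placeholder values):
{noformat}
# conf/metrics.properties
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.prefix=cluster_1
{noformat}
With this, metrics appear under cluster_1.app_1.driver.jvm.<somevalue> rather than app_1.* at the top level, which matches the grouping requested below.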
> Enable metrics for each application to be gathered in one node
> --------------------------------------------------------------
>
> Key: SPARK-4634
> URL: https://issues.apache.org/jira/browse/SPARK-4634
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 1.1.0
> Reporter: Masayoshi TSUZUKI
>
> Metrics output is now like this:
> {noformat}
> - app_1.driver.jvm.<somevalue>
> - app_1.driver.jvm.<somevalue>
> - ...
> - app_2.driver.jvm.<somevalue>
> - app_2.driver.jvm.<somevalue>
> - ...
> {noformat}
> In the current Spark, application names appear at the top level,
> but we should be able to gather them under some common top-level
> node.
> For example, consider using Graphite.
> When we use Graphite, each application name is listed as a top-level node.
> Graphite can also collect OS metrics, and those can be grouped under a
> single node, but the current Spark metrics cannot.
> So, with the current Spark, the metrics tree shown in the Graphite
> web UI looks like this:
> {noformat}
> - os
> - os.node1.<somevalue>
> - os.node2.<somevalue>
> - ...
> - app_1
> - app_1.driver.jvm.<somevalue>
> - app_1.driver.jvm.<somevalue>
> - ...
> - app_2
> - ...
> - app_3
> - ...
> {noformat}
> We should be able to add a top-level name before the application name (the
> top-level name could be the cluster name, for instance).
> If we make the name configurable via *.conf, it would also be convenient
> when two different Spark clusters sink metrics to the same Graphite server.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]