Masayoshi TSUZUKI created SPARK-4634:
----------------------------------------

             Summary: Enable metrics for each application to be gathered in one node
                 Key: SPARK-4634
                 URL: https://issues.apache.org/jira/browse/SPARK-4634
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 1.1.0
            Reporter: Masayoshi TSUZUKI


The metrics output currently looks like this:
{noformat}
  - app_1.driver.jvm.<somevalue>
  - app_1.driver.jvm.<somevalue>
  - ...
  - app_2.driver.jvm.<somevalue>
  - app_2.driver.jvm.<somevalue>
  - ...
{noformat}
In the current Spark, application names appear at the top level of the metrics hierarchy,
but we should be able to gather them under a common top-level node.

For example, consider Graphite.
When we use Graphite, the application names are listed as top-level nodes.
Graphite can also collect OS metrics, and those OS metrics can be grouped under a single node,
but the current Spark metrics cannot be.
So, with the current Spark, the tree structure of the metrics shown in the Graphite web UI looks like this:
{noformat}
  - os
    - os.node1.<somevalue>
    - os.node2.<somevalue>
    - ...
  - app_1
    - app_1.driver.jvm.<somevalue>
    - app_1.driver.jvm.<somevalue>
    - ...
  - app_2
    - ...
  - app_3
    - ...
{noformat}
We should be able to add a top-level name before the application name (the top-level name could be the cluster name, for instance), as shown below.
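For illustration, with a cluster name such as cluster_1 as the top-level node (the names here are only examples), the tree would instead look like this:
{noformat}
  - os
    - os.node1.<somevalue>
    - os.node2.<somevalue>
    - ...
  - cluster_1
    - cluster_1.app_1.driver.jvm.<somevalue>
    - cluster_1.app_2.driver.jvm.<somevalue>
    - ...
{noformat}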

If we make the name configurable via *.conf, it would also be convenient when two different Spark clusters sink metrics to the same Graphite server; a configuration sketch follows.
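
A minimal sketch of what such a configuration might look like in conf/metrics.properties. The class, host, and port keys follow the existing GraphiteSink options; the prefix property is hypothetical here, proposed by this issue rather than existing in Spark 1.1.0:
{noformat}
# Existing GraphiteSink settings (as in metrics.properties.template)
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
# Hypothetical new option: a top-level node (e.g. the cluster name)
# prepended to every metric name this cluster sinks
*.sink.graphite.prefix=cluster_1
{noformat}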



