Github user LucaCanali commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22218#discussion_r214549896
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/ExecutorSource.scala ---
    @@ -73,6 +76,28 @@ class ExecutorSource(threadPool: ThreadPoolExecutor, executorId: String) extends
         registerFileSystemStat(scheme, "write_ops", _.getWriteOps(), 0)
       }
     
    +  // Dropwizard metrics gauge measuring the executor's process CPU time.
    +  // This Gauge will try to get and return the JVM Process CPU time or return -1 otherwise.
    +  // The CPU time value is returned in nanoseconds.
    +  // It will use proprietary extensions such as com.sun.management.OperatingSystemMXBean or
    +  // com.ibm.lang.management.OperatingSystemMXBean, if available.
    +  metricRegistry.register(MetricRegistry.name("jvmCpuTime"), new Gauge[Long] {
    --- End diff --
    
    Indeed, this is exposed only through the Dropwizard metrics system and is not used otherwise in the Spark code. Another point worth mentioning is that the executorSource is currently not registered when running in local mode.
    On a related topic (although maybe for a more general discussion than the scope of this PR), I was wondering if it would make sense to introduce a few SparkConf properties to switch on/off certain families of (Dropwizard) metrics in Spark, as the list of available metrics is becoming long in recent versions. A sketch of what such a switch could look like follows below.
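    To illustrate the idea, such a switch could gate the registration of a whole metric family. The property name below is hypothetical, invented purely for this sketch, and `conf` is assumed to be the executor's SparkConf:

        // Hypothetical property name, used only for illustration
        val extendedMetricsEnabled =
          conf.getBoolean("spark.metrics.executorExtendedMetrics.enabled", defaultValue = true)
        if (extendedMetricsEnabled) {
          // register the CPU-time gauge and the other optional executor metrics here
        }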

