rezasafi commented on a change in pull request #23306: [SPARK-26357][Core]
Expose executors' procfs metrics to Metrics system
URL: https://github.com/apache/spark/pull/23306#discussion_r242280461
##########
File path:
core/src/main/scala/org/apache/spark/executor/ProcfsMetricsSource.scala
##########
@@ -19,14 +19,15 @@ package org.apache.spark.executor
import com.codahale.metrics.{Gauge, MetricRegistry}
+import org.apache.spark.SparkEnv
import org.apache.spark.internal.config
import org.apache.spark.metrics.source.Source
-import org.apache.spark.SparkEnv
private[executor] class ProcfsMetricsSource extends Source {
override val sourceName = "procfs"
- override val metricRegistry = new MetricRegistry()
+ // We use numMetrics for tracking to only call computeAllMetrics once per set of metrics
Review comment:
I thought that this can still save us from unnecessary calls, so I kept it.
Why do you think it is hacky? The way the Metrics system is designed, a gauge
just returns a single value. There are some other methods to return a set of
metrics, but to use those we would need to make more changes to the procfs
getter to implement the Dropwizard metric interface for each metric that we
are going to report. I don't think that is necessary, and it would make the
code uglier.
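To illustrate the idea under discussion, here is a minimal, self-contained sketch of how a counter like `numMetrics` can let several single-value Dropwizard-style gauges share one expensive metrics computation. All names here (`ProcfsGaugeSketch`, `computeAllMetrics`, the metric names, and the stub `Gauge` trait standing in for `com.codahale.metrics.Gauge`) are hypothetical and for illustration only; this is not the PR's actual code.

```scala
// Minimal stand-in for com.codahale.metrics.Gauge so the sketch is
// self-contained; the real code would use the Dropwizard class directly.
trait Gauge[T] { def getValue: T }

object ProcfsGaugeSketch {
  // Visible for the usage example: counts how often the expensive
  // computation actually ran.
  var computeCalls = 0

  // Hypothetical stand-in for walking /proc and computing all metrics
  // in one pass.
  private def computeAllMetrics(): Map[String, Long] = {
    computeCalls += 1
    Map("jvmRssBytes" -> 1024L, "pythonRssBytes" -> 2048L)
  }

  private val metricNames = Seq("jvmRssBytes", "pythonRssBytes")
  private var cached: Map[String, Long] = Map.empty
  // Number of gauge reads served from the current snapshot; when it
  // wraps to 0, the next read refreshes the snapshot.
  private var numMetrics = 0

  private def metricValue(name: String): Long = synchronized {
    if (numMetrics == 0) cached = computeAllMetrics() // once per set of reads
    numMetrics = (numMetrics + 1) % metricNames.size  // reset after a full set
    cached(name)
  }

  // Each metric is still exposed as a plain single-value gauge.
  val gauges: Map[String, Gauge[Long]] = metricNames.map { name =>
    name -> new Gauge[Long] { def getValue: Long = metricValue(name) }
  }.toMap
}
```

With this scheme, the metrics sink can poll every gauge in turn, yet `computeAllMetrics` runs only once per polling round instead of once per gauge, which is the saving the counter is meant to provide.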
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.