rezasafi commented on a change in pull request #23306: [SPARK-26357][Core]
Expose executors' procfs metrics to Metrics system
URL: https://github.com/apache/spark/pull/23306#discussion_r242359283
##########
File path:
core/src/main/scala/org/apache/spark/executor/ProcfsMetricsSource.scala
##########
@@ -19,14 +19,15 @@ package org.apache.spark.executor
import com.codahale.metrics.{Gauge, MetricRegistry}
+import org.apache.spark.SparkEnv
import org.apache.spark.internal.config
import org.apache.spark.metrics.source.Source
-import org.apache.spark.SparkEnv
private[executor] class ProcfsMetricsSource extends Source {
override val sourceName = "procfs"
- override val metricRegistry = new MetricRegistry()
+ // We use numMetrics for tracking to only call computeAllMetrics once per set of metrics
Review comment:
BTW, NettyMemoryMetrics implements MetricSet, and each metric there also
implements the Metric interface. As I responded in my earlier comment, if I
go that route I can avoid this here, but the code in ProcfsMetricGetter would
become much uglier, and to be honest I don't want to change that since it took
us 5 months to reach an agreement there. The gain also wouldn't be significant.
The purpose of this code is to reduce the performance impact by removing
unnecessary calls.
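To illustrate the idea being discussed, here is a hedged, self-contained sketch of the "call the expensive collector once per set of gauge reads" pattern. The names (`computeAllMetrics`, the metric keys, `numMetrics`) are illustrative assumptions, not the actual Spark implementation; a tiny counter stands in for the real Dropwizard `Gauge` plumbing.

```scala
// Simplified sketch: cache the result of one expensive procfs-style walk
// so that N gauges reading their values triggers only one recomputation
// per set of N reads. All names here are hypothetical.
object CachedMetricsSketch {
  // Stand-in for the expensive /proc traversal; counts invocations.
  var computeCalls = 0
  def computeAllMetrics(): Map[String, Long] = {
    computeCalls += 1
    Map("jvmRss" -> 100L, "pythonRss" -> 50L, "otherRss" -> 10L)
  }

  val numMetrics = 3                          // gauges registered
  private var reads = 0                       // gauge reads so far
  private var cached: Map[String, Long] = Map.empty

  // Each gauge's getValue would delegate here; the expensive walk
  // runs once per full set of numMetrics reads, not once per read.
  def value(key: String): Long = synchronized {
    if (reads % numMetrics == 0) cached = computeAllMetrics()
    reads += 1
    cached(key)
  }

  def main(args: Array[String]): Unit = {
    Seq("jvmRss", "pythonRss", "otherRss").foreach(value)  // first set
    Seq("jvmRss", "pythonRss", "otherRss").foreach(value)  // second set
    println(computeCalls)  // prints 2: one walk per set, not per gauge
  }
}
```

This is the trade-off the comment describes: a small amount of counter bookkeeping in the source class, in exchange for not restructuring the getter around MetricSet.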
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services