viirya commented on a change in pull request #31451:
URL: https://github.com/apache/spark/pull/31451#discussion_r605101273
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetricInfo.scala
##########
@@ -27,4 +27,5 @@ import org.apache.spark.annotation.DeveloperApi
 class SQLMetricInfo(
     val name: String,
     val accumulatorId: Long,
-    val metricType: String)
+    val metricType: String,
+    val aggregateMethod: (Array[Long], Array[Long]) => String)
Review comment:
For now, the only reason to have a default there is to avoid changing test
code. That doesn't look like a strong reason. As a matter of personal
preference, I can understand that you may prefer to set a default there.
However, from a code-structure perspective, I don't think a `SQLMetricInfo` or
`SQLPlanMetric` should know how to aggregate a metric by default. If we
actually need these classes to initiate metric aggregation in the future, we
can change them then. I think we should not add default values just because of
an unknown future usage.
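To illustrate the point, here is a minimal sketch of the shape the reviewer is arguing for: `aggregateMethod` has no default on the class, so every call site must pass an aggregator explicitly. The `sumAggregate` function and the `"numOutputRows"` metric name below are illustrative assumptions, not code from the PR:

```scala
// Sketch only: mirrors the constructor from the diff, with no default for
// aggregateMethod, so callers must decide how to aggregate.
class SQLMetricInfo(
    val name: String,
    val accumulatorId: Long,
    val metricType: String,
    val aggregateMethod: (Array[Long], Array[Long]) => String)

object Example {
  // Hypothetical aggregator: sums the per-task values; the second array
  // (e.g. per-task maxima) is ignored in this toy example.
  val sumAggregate: (Array[Long], Array[Long]) => String =
    (values, _) => values.sum.toString

  def main(args: Array[String]): Unit = {
    // The call site, not the metric-info class, chooses the aggregation.
    val info = new SQLMetricInfo("numOutputRows", 0L, "sum", sumAggregate)
    println(info.aggregateMethod(Array(1L, 2L, 3L), Array.empty))
  }
}
```

With this shape, omitting the argument is a compile error, which is exactly the pressure on test code that the default was added to avoid.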
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]