cloud-fan commented on pull request #31476:
URL: https://github.com/apache/spark/pull/31476#issuecomment-785882143


   Yes, `SQLMetrics` is where the aggregating logic is defined. Previously it was decided by `metricsType`; now we can probably replace `metricsType` with a general aggregating function, `aggregateMethod: (Array[Long], Array[Long]) => String`. For builtin metrics it's `SQLMetrics.stringValue(metricsType, _, _)`. For v2 metrics we ignore the second parameter. Seems we don't need `V2_CUSTOM` :)
    
   It's the SQL UI component that collects the task metrics and aggregates them, so we should let the SQL UI component know the custom aggregating logic. We can propagate it through `SQLMetricInfo`, `SQLPlanMetric`, etc.
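   To make the idea concrete, here's a minimal Scala sketch of the proposed `aggregateMethod` shape. This is illustrative only: `AggregateSketch`, `builtinStringValue`, and the metric names are hypothetical stand-ins, not the actual Spark `SQLMetrics` API; the real builtin path would partially apply `SQLMetrics.stringValue(metricsType, _, _)` in the same way.

   ```scala
   object AggregateSketch {
     // Proposed signature: (task metric values, max metric values) => display string.
     type AggregateMethod = (Array[Long], Array[Long]) => String

     // Stand-in for SQLMetrics.stringValue: builtin metrics dispatch on metricsType.
     def builtinStringValue(metricsType: String, values: Array[Long], maxValues: Array[Long]): String =
       metricsType match {
         case "sum" => values.sum.toString
         case _     => values.mkString(", ")
       }

     // Builtin metric: partially apply the metricsType, matching the proposal.
     val builtinAgg: AggregateMethod = builtinStringValue("sum", _, _)

     // v2 custom metric: supplies its own logic and ignores the second parameter
     // (here, an illustrative per-task average).
     val v2CustomAgg: AggregateMethod =
       (values, _) => if (values.isEmpty) "0" else (values.sum / values.length).toString

     def main(args: Array[String]): Unit = {
       val taskValues = Array(10L, 20L, 30L)
       println(builtinAgg(taskValues, Array.empty))  // prints "60"
       println(v2CustomAgg(taskValues, Array.empty)) // prints "20"
     }
   }
   ```

   Since both builtin and v2 metrics then reduce to the same function type, the UI side only needs to carry one `AggregateMethod` through `SQLMetricInfo`, `SQLPlanMetric`, etc., with no special `V2_CUSTOM` case.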

