snmvaughan commented on code in PR #46188:
URL: https://github.com/apache/spark/pull/46188#discussion_r1584036371


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/BasicWriteStatsTracker.scala:
##########
@@ -223,6 +278,9 @@ class BasicWriteJobStatsTracker(
 
     val executionId = 
sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)
     SQLMetrics.postDriverMetricUpdates(sparkContext, executionId, 
driverSideMetrics.values.toList)
+
+    SQLPartitionMetrics.postDriverMetricUpdates(sparkContext, executionId,

Review Comment:
   The metrics currently collected through `SQLMetrics` are limited. `SQLMetric` 
extends `AccumulatorV2[Long, Long]`, so I didn't see a clear way for the V1 code 
path to use a more complex `AccumulatorV2` of a map. I wanted to start collecting 
the data now and improve the implementation over time.
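   For context, a map-valued accumulator is possible with `AccumulatorV2[IN, OUT]` in general; the difficulty is that the existing `SQLMetrics` plumbing assumes `AccumulatorV2[Long, Long]`. A rough sketch of what such an accumulator could look like (hypothetical class name `PartitionBytesAccumulator`; keys and semantics are illustrative, not the API this PR adds):
   
   ```scala
   import scala.collection.mutable
   import org.apache.spark.util.AccumulatorV2
   
   // Sketch: accumulates per-partition byte counts as a Map[String, Long].
   // IN is a (partitionKey, bytes) pair; OUT is the merged map.
   class PartitionBytesAccumulator
       extends AccumulatorV2[(String, Long), Map[String, Long]] {
   
     private val map = mutable.Map.empty[String, Long].withDefaultValue(0L)
   
     override def isZero: Boolean = map.isEmpty
   
     override def copy(): PartitionBytesAccumulator = {
       val acc = new PartitionBytesAccumulator
       map.foreach { case (k, v) => acc.map(k) = v }
       acc
     }
   
     override def reset(): Unit = map.clear()
   
     // Called on executors for each (partition, bytes) observation.
     override def add(kv: (String, Long)): Unit = map(kv._1) += kv._2
   
     // Called on the driver to combine per-task accumulators.
     override def merge(
         other: AccumulatorV2[(String, Long), Map[String, Long]]): Unit =
       other.value.foreach { case (k, v) => map(k) += v }
   
     override def value: Map[String, Long] = map.toMap
   }
   ```
   
   Wiring something like this into the existing `SQLMetrics` reporting (SQL tab, `postDriverMetricUpdates`) is the part that has no obvious path today, which is why the simpler collection is a reasonable starting point.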



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

