Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21919#discussion_r208749439
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
 ---
    @@ -58,6 +61,7 @@ case class WriteToDataSourceV2Exec(writer: 
DataSourceWriter, query: SparkPlan) e
         val useCommitCoordinator = writer.useCommitCoordinator
         val rdd = query.execute()
         val messages = new Array[WriterCommitMessage](rdd.partitions.length)
    +    val totalNumRowsAccumulator = new LongAccumulator()
    --- End diff ---
    
    You should call `SparkContext.longAccumulator` to create the accumulator 
rather than `new LongAccumulator()`, so that it gets registered with the 
SparkContext. Or why not use a `SQLMetric`? That way the count would also 
show up in the SQL UI.
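    A minimal sketch of the two alternatives suggested above, assuming a 
`SparkContext` is in scope (the accumulator name `"totalNumRows"` and the 
helper method names are illustrative, not from the PR). 
`SparkContext.longAccumulator` and `SQLMetrics.createMetric` are real Spark 
APIs; `SQLMetrics` lives in Spark's internal 
`org.apache.spark.sql.execution.metric` package:
    
    ```scala
    import org.apache.spark.SparkContext
    import org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}
    import org.apache.spark.util.LongAccumulator
    
    // Option 1: let the SparkContext create AND register the accumulator,
    // instead of instantiating `new LongAccumulator()` directly.
    def makeRowAccumulator(sc: SparkContext): LongAccumulator =
      sc.longAccumulator("totalNumRows")
    
    // Option 2: a SQLMetric is itself an accumulator, and it additionally
    // shows up in the SQL tab of the web UI.
    def makeRowMetric(sc: SparkContext): SQLMetric =
      SQLMetrics.createMetric(sc, "number of output rows")
    ```
    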


---
