HeartSaVioR commented on a change in pull request #26109: [SPARK-29461][SQL]
Measure the number of records being updated for JDBC writer
URL: https://github.com/apache/spark/pull/26109#discussion_r337864313
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -615,7 +615,7 @@ object JdbcUtils extends Logging {
batchSize: Int,
dialect: JdbcDialect,
isolationLevel: Int,
- options: JDBCOptions): Iterator[Byte] = {
+ options: JDBCOptions): Long = {
Review comment:
Yeah, but it looks like SparkHadoopWriter just updates the metric for every output
being written - maybe that's because there is no transaction there. If we take the
transaction into account, it would make sense to update the metric only when the
transaction is committed, but we *might* also want to update the metric when both
`committed` and `supportsTransactions` are false, to reflect dirty outputs in the
metric. WDYT?
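
To make the suggestion concrete, here's a minimal, self-contained sketch; it is
not the actual `JdbcUtils.savePartition` code, and `writePartition`, `commit`,
and `metricUpdater` are hypothetical stand-ins for the connection handling and
the SQL metric:

```scala
// Illustrative sketch only: shows when the records-written metric would be
// reported under the rule discussed above.
object JdbcMetricSketch {

  def writePartition(
      rows: Iterator[String],      // stand-in for the partition's Iterator[Row]
      supportsTransactions: Boolean,
      commit: () => Unit,          // stand-in for conn.commit()
      metricUpdater: Long => Unit  // stand-in for the SQL metric update
  ): Unit = {
    var recordsWritten = 0L
    var committed = false
    try {
      rows.foreach { _ =>
        // the real code would add the row to a JDBC batch and
        // periodically call executeBatch() here
        recordsWritten += 1
      }
      if (supportsTransactions) {
        commit()
      }
      committed = true
    } finally {
      if (committed) {
        // Normal path: the rows are durably written, so report them.
        metricUpdater(recordsWritten)
      } else if (!supportsTransactions) {
        // No transaction to roll back: the "dirty" rows written so far are
        // already visible in the table, so report them as well.
        metricUpdater(recordsWritten)
      }
      // committed == false && supportsTransactions == true: the rollback
      // discards the rows, so nothing is added to the metric.
    }
  }

  def main(args: Array[String]): Unit = {
    var total = 0L
    writePartition(
      Iterator("a", "b", "c"),
      supportsTransactions = true,
      commit = () => (),
      metricUpdater = (n: Long) => total += n)
    println(s"records written: $total") // records written: 3
  }
}
```

The `finally` block is what distinguishes the two failure modes: a failed
transactional write contributes nothing, while a failed non-transactional
write still reports the rows that actually reached the table.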
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]