Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/20490#discussion_r166174605
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala ---
@@ -117,20 +118,43 @@ object DataWritingSparkTask extends Logging {
writeTask: DataWriterFactory[InternalRow],
context: TaskContext,
iter: Iterator[InternalRow]): WriterCommitMessage = {
- val dataWriter = writeTask.createDataWriter(context.partitionId(), context.attemptNumber())
+ val stageId = context.stageId()
+ val partId = context.partitionId()
+ val attemptId = context.attemptNumber()
+ val dataWriter = writeTask.createDataWriter(partId, attemptId)
// write the data and commit this writer.
Utils.tryWithSafeFinallyAndFailureCallbacks(block = {
iter.foreach(dataWriter.write)
- logInfo(s"Writer for partition ${context.partitionId()} is
committing.")
- val msg = dataWriter.commit()
- logInfo(s"Writer for partition ${context.partitionId()} committed.")
+
+ val msg = if (writeTask.useCommitCoordinator) {
+ val coordinator = SparkEnv.get.outputCommitCoordinator
--- End diff --
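(For context, the coordinator branch quoted above presumably continues along these lines; this is a sketch inferred from the visible hunk, not the PR's exact code:)

```scala
// Sketch: gate the commit on the driver-side coordinator so that only one
// task attempt per partition is authorized to commit its output.
val commitAuthorized = coordinator.canCommit(stageId, partId, attemptId)
if (commitAuthorized) {
  logInfo(s"Writer for stage $stageId, task $partId.$attemptId is authorized to commit.")
  dataWriter.commit()
} else {
  val message = s"Stage $stageId, task $partId.$attemptId: driver denied the commit."
  logInfo(message)
  // CommitDeniedException signals a commit-denied failure to the scheduler
  // instead of a plain task error.
  throw new CommitDeniedException(message, stageId, partId, attemptId)
}
```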
I'm not sure why we need this. In the implementation of
`DataWriter.commit`, users can still call
`SparkEnv.get.outputCommitCoordinator` themselves. Users can even use their
own commit coordinator based on ZooKeeper or something similar.
I think the current API is flexible enough to: 1) not use a commit
coordinator, 2) use Spark's built-in commit coordinator, or 3) use a custom
commit coordinator.
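For example, option 2 could be implemented entirely inside `DataWriter.commit`. A rough sketch: `CoordinatedWriter` and `MyCommitMessage` are hypothetical, and this assumes the built-in coordinator is reachable from the writer's code:

```scala
import org.apache.spark.{SparkEnv, TaskContext}
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.sources.v2.writer.{DataWriter, WriterCommitMessage}

case class MyCommitMessage(partitionId: Int) extends WriterCommitMessage

class CoordinatedWriter extends DataWriter[InternalRow] {
  override def write(record: InternalRow): Unit = {
    // stage the record in a temporary location
  }

  override def commit(): WriterCommitMessage = {
    val ctx = TaskContext.get()
    // ask the driver-side coordinator whether this attempt may commit
    val coordinator = SparkEnv.get.outputCommitCoordinator
    if (coordinator.canCommit(ctx.stageId(), ctx.partitionId(), ctx.attemptNumber())) {
      // make the staged output visible, then report back to the driver
      MyCommitMessage(ctx.partitionId())
    } else {
      throw new RuntimeException(
        s"Commit denied for partition ${ctx.partitionId()}, attempt ${ctx.attemptNumber()}")
    }
  }

  override def abort(): Unit = {
    // delete the staged output
  }
}
```

A ZooKeeper-based coordinator (option 3) would slot into the same place, replacing the `canCommit` call.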
---