Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/20490#discussion_r166381800
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
---
@@ -117,20 +118,43 @@ object DataWritingSparkTask extends Logging {
       writeTask: DataWriterFactory[InternalRow],
       context: TaskContext,
       iter: Iterator[InternalRow]): WriterCommitMessage = {
-    val dataWriter = writeTask.createDataWriter(context.partitionId(), context.attemptNumber())
+    val stageId = context.stageId()
+    val partId = context.partitionId()
+    val attemptId = context.attemptNumber()
+    val dataWriter = writeTask.createDataWriter(partId, attemptId)

     // write the data and commit this writer.
     Utils.tryWithSafeFinallyAndFailureCallbacks(block = {
       iter.foreach(dataWriter.write)
-      logInfo(s"Writer for partition ${context.partitionId()} is committing.")
-      val msg = dataWriter.commit()
-      logInfo(s"Writer for partition ${context.partitionId()} committed.")
+
+      val msg = if (writeTask.useCommitCoordinator) {
+        val coordinator = SparkEnv.get.outputCommitCoordinator
--- End diff ---
What do you have in mind to "introduce the concept"?
I'm happy to add more docs. Do you want me to add them to this PR or in a
follow-up? Are you targeting this for 2.3.0?
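For readers following the thread, here is a hedged toy sketch of the commit-coordinator concept the diff relies on: at most one task attempt per partition is authorized to commit, and later (e.g. speculative or retried) attempts are denied. The `ToyCommitCoordinator` class below is purely illustrative and is not Spark's actual `OutputCommitCoordinator`, which also tracks stage state and handles task failures.

```scala
import scala.collection.mutable

object CoordinatorSketch {

  // Illustrative stand-in for Spark's OutputCommitCoordinator (assumption:
  // the real one is driver-side, RPC-backed, and failure-aware).
  class ToyCommitCoordinator {
    // (stageId, partitionId) -> attemptId that won the right to commit
    private val authorized = mutable.Map[(Int, Int), Int]()

    def canCommit(stageId: Int, partId: Int, attemptId: Int): Boolean =
      synchronized {
        authorized.get((stageId, partId)) match {
          // An attempt already won this partition: only it may commit again.
          case Some(winner) => winner == attemptId
          // First attempt to ask wins the commit right.
          case None =>
            authorized((stageId, partId)) = attemptId
            true
        }
      }
  }

  def main(args: Array[String]): Unit = {
    val coordinator = new ToyCommitCoordinator
    // First attempt for stage 0, partition 3 is authorized.
    println(coordinator.canCommit(stageId = 0, partId = 3, attemptId = 0)) // true
    // A speculative second attempt for the same partition is denied.
    println(coordinator.canCommit(stageId = 0, partId = 3, attemptId = 1)) // false
  }
}
```

A denied attempt would then abort its writer instead of committing, which is the branch the `useCommitCoordinator` flag in the diff guards.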
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]