advancedxy commented on a change in pull request #27100: 
[SPARK-29037][CORE][SQL] Fix the issue that Spark gives duplicate results and support concurrent file source write operations to different partitions in the same table.
URL: https://github.com/apache/spark/pull/27100#discussion_r364050676
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
 ##########
 @@ -269,4 +323,214 @@ case class InsertIntoHadoopFsRelationCommand(
       }
     }.toMap
   }
+
+  /**
+   * Check whether the current committer supports several concurrent
+   * InsertIntoHadoopFsRelation operations writing to different partitions of the same
+   * table. If it does, detect conflicts where several operations write to the same
+   * partition of the same table, or write to a non-partitioned table.
+   */
+  private def detectConflict(
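
For context, a minimal self-contained sketch of one way such a conflict check could
work. The marker-file layout, the `stagingDir` location, and the name
`ConcurrentWriteConflict` are illustrative assumptions here, not the PR's actual
implementation:

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Hypothetical sketch: each in-flight insert is assumed to drop a
    // marker file under a shared staging directory, named after the
    // partition path it writes to.
    object ConcurrentWriteConflict {

      /**
       * Returns true if another in-flight insert targets the same
       * partition, or the table root for a non-partitioned table.
       */
      def hasConflict(
          fs: FileSystem,
          stagingDir: Path,
          partitionPath: Option[String]): Boolean = {
        if (!fs.exists(stagingDir)) return false
        val markers = fs.listStatus(stagingDir).map(_.getPath.getName).toSet
        partitionPath match {
          // Non-partitioned table: any concurrent insert conflicts.
          case None => markers.nonEmpty
          // Partitioned table: conflict only on the same partition path;
          // '/' is assumed to be escaped in marker file names.
          case Some(p) => markers.contains(p.replace('/', '%'))
        }
      }
    }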
 
 Review comment:
   I kind of feel that this is more suitable inside FileCommitProtocol. Let's wait for others' opinions.
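
For illustration, the placement suggested above might look like an extension point on
FileCommitProtocol itself; the method names below are hypothetical and do not exist in
Spark's current API:

    import org.apache.spark.internal.io.FileCommitProtocol

    // Hypothetical hook, sketched only to illustrate the suggestion.
    abstract class ConcurrentAwareCommitProtocol extends FileCommitProtocol {
      /** Whether this committer tolerates concurrent inserts into one table. */
      def supportsConcurrentWrites: Boolean = false

      /** Fail fast if another in-flight insert targets an overlapping
       *  set of partitions; a no-op by default. */
      def detectConflict(partitions: Set[String]): Unit = {}
    }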
