turboFei commented on a change in pull request #25863: 
[SPARK-28945][SPARK-29037][CORE][SQL] Fix the issue that Spark gives duplicate results, and support concurrent file source write operations writing to different partitions in the same table.
URL: https://github.com/apache/spark/pull/25863#discussion_r328889484
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
 ##########
 @@ -263,4 +280,107 @@ case class InsertIntoHadoopFsRelationCommand(
       }
     }.toMap
   }
+
+  /**
+   * Detect a conflict when several InsertIntoHadoopFsRelation operations concurrently
+   * write to the same partition of the same table, or to the same non-partitioned table.
+   */
+  private def detectConflict(
+      fs: FileSystem,
+      path: Path,
+      staticPartitionKVs: Seq[(String, String)]): Unit = {
+    for (i <- 0 until partitionColumns.size) {
+      Some(".spark-staging-" + i)
+        .map(stagingPath => new Path(path, stagingPath))
+        .foreach { stagingDir =>
+          if (fs.exists(stagingDir)) {
 
 Review comment:
   In fact, there will not be many partition columns in a production environment.
   And when no conflict is detected, a query invokes `.exists` at most as many times as
   there are partition columns.
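
   For illustration, here is a minimal, self-contained sketch of the probing cost described above: one `fs.exists` call per partition-column depth against a `.spark-staging-<depth>` directory under the table path, stopping at the first hit. The `.spark-staging-` prefix and the per-depth loop follow the snippet quoted above; the object and method names (`StagingProbeSketch`, `findConflictingStagingDir`) and the example path are assumptions for illustration, not the PR's actual implementation.

   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.{FileSystem, Path}

   object StagingProbeSketch {
     // Hypothetical sketch: probe ".spark-staging-<depth>" under the table path for each
     // partition-column depth. At most `numPartitionColumns` fs.exists calls are issued,
     // and production tables rarely have more than a handful of partition columns,
     // which is the cost argument made in the comment above.
     def findConflictingStagingDir(
         fs: FileSystem,
         tablePath: Path,
         numPartitionColumns: Int): Option[Path] = {
       (0 until numPartitionColumns).iterator
         .map(depth => new Path(tablePath, s".spark-staging-$depth"))
         .find(p => fs.exists(p)) // stops at the first staging dir left by a concurrent write
     }

     def main(args: Array[String]): Unit = {
       val tablePath = new Path("/tmp/warehouse/t") // hypothetical table location
       val fs = tablePath.getFileSystem(new Configuration())
       findConflictingStagingDir(fs, tablePath, numPartitionColumns = 2) match {
         case Some(dir) => println(s"Concurrent write detected via staging dir: $dir")
         case None      => println("No conflicting staging directory found")
       }
     }
   }
   ```

   In the no-conflict case the probe costs at most `numPartitionColumns` filesystem RPCs per query, so the overhead stays small for typical partitioning schemes.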
