advancedxy commented on a change in pull request #25739: 
[WIP][SPARK-28945][CORE][SQL] Support concurrent dynamic partition writes to different partitions in the same table
URL: https://github.com/apache/spark/pull/25739#discussion_r326263173
 
 

 ##########
 File path: 
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
 ##########
 @@ -91,7 +91,31 @@ class HadoopMapReduceCommitProtocol(
    */
   private def stagingDir = new Path(path, ".spark-staging-" + jobId)
 
+  /**
+   * Get the desired output path for the job. The output will be [[path]] when
 
 Review comment:
   the `path` is defined as a class parameter, and its doc comment reads:
   
   ```
    * @param jobId the job's or stage's id
    * @param path the job's output path, or null if committer acts as a noop
    * @param dynamicPartitionOverwrite If true, Spark will overwrite partition directories at runtime
    *                                  dynamically, i.e., we first write files under a staging
    *                                  directory with partition path, e.g.
    *                                  /path/to/staging/a=1/b=1/xxx.parquet. When committing the job,
    *                                  we first clean up the corresponding partition directories at
    *                                  destination path, e.g. /path/to/destination/a=1/b=1, and move
    *                                  files from staging directory to the corresponding partition
    *                                  directories under destination path.
   ```
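   
   For illustration, here's a minimal sketch (hypothetical helper, not code from this PR) of how the staging layout described above maps to destination partition directories at commit time; `stagingDir`, `destDir`, and `partitionPaths` are assumed inputs:
   
   ```
   import org.apache.hadoop.fs.Path
   
   // Sketch only: for each partition written under the staging directory,
   // the commit step would remove the matching destination partition and
   // move the staged directory into its place.
   def plannedMoves(
       stagingDir: Path,            // e.g. /path/to/staging
       destDir: Path,               // e.g. /path/to/destination
       partitionPaths: Set[String]  // e.g. Set("a=1/b=1", "a=1/b=2")
   ): Seq[(Path, Path)] = {
     partitionPaths.toSeq.map { part =>
       val staged = new Path(stagingDir, part) // /path/to/staging/a=1/b=1
       val dest   = new Path(destDir, part)    // /path/to/destination/a=1/b=1
       // In an actual commit this pair would drive fs.delete(dest, true)
       // followed by fs.rename(staged, dest).
       (staged, dest)
     }
   }
   ```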
