cloud-fan commented on a change in pull request #33002:
URL: https://github.com/apache/spark/pull/33002#discussion_r657619489
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSink.scala
##########
@@ -147,9 +147,13 @@ class FileStreamSink(
if (batchId <= fileLog.getLatestBatchId().getOrElse(-1L)) {
logInfo(s"Skipping already committed batch $batchId")
} else {
+      // To avoid file name collision, we should generate a new job ID for every write job, instead
+      // of using batchId, as we may use the same batchId to write files again, if the streaming job
+      // fails and we restore from the checkpoint.
+      val jobId = java.util.UUID.randomUUID().toString
Review comment:
This is the job ID for the Spark file commit protocol. In a Hadoop `JobID`, we do append the timestamp info:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala#L266
But that's a different story.
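
To illustrate the point the diff and comment are making, here is a minimal, hypothetical sketch (in Python, for brevity; the names and the Hadoop-style ID format are assumptions for illustration, not Spark's actual implementation). It contrasts reusing `batchId` as the job ID, which collides when a batch is replayed after checkpoint recovery, with a fresh UUID per write job, and shows the timestamp-embedding convention mentioned for Hadoop `JobID`:

```python
import uuid
from datetime import datetime

def job_id_from_batch(batch_id: int) -> str:
    # Reusing the batch id: replaying the same batch after restoring from
    # the checkpoint yields the SAME job id, so output file names can collide.
    return str(batch_id)

def fresh_job_id() -> str:
    # A fresh UUID per write job: unique even when the same batch is retried.
    return str(uuid.uuid4())

def hadoop_style_job_id(seq: int) -> str:
    # Sketch of the Hadoop convention the comment alludes to: a timestamp
    # component plus a sequence number. The exact format here is assumed
    # for illustration, e.g. "job_20240101120000_0001".
    return f"job_{datetime.now().strftime('%Y%m%d%H%M%S')}_{seq:04d}"
```

Two retries of batch 42 under the first scheme produce identical IDs, while two calls to `fresh_job_id()` never do, which is exactly why the diff switches to a UUID.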
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]