Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17124#discussion_r103737250
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala ---
    @@ -282,8 +282,12 @@ private[state] class HDFSBackedStateStoreProvider(
           // target file will break speculation, skipping the rename step is the only choice. It's still
           // semantically correct because Structured Streaming requires rerunning a batch should
           // generate the same output. (SPARK-19677)
    +      // Also, a tmp delta file generated by the first batch after restarting a
    +      // streaming job may still be left over on HDFS. (SPARK-19779)
           // scalastyle:on
    -      if (!fs.exists(finalDeltaFile) && !fs.rename(tempDeltaFile, finalDeltaFile)) {
    +      if (fs.exists(finalDeltaFile)) {
    +        fs.delete(tempDeltaFile, true)
    +      } else if (!fs.rename(tempDeltaFile, finalDeltaFile)) {
    --- End diff --
    
    I guess my point is that, after this change, the file may not exist once this block executes, whereas before it always existed afterwards. I wasn't sure that behavior change was intended, since the purpose seems to be to delete the temp file.
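    To make the behavior difference concrete, here is a minimal, self-contained sketch. This is not the actual provider code: `FakeFs`, `commitOld`, and `commitNew` are invented for illustration, with `exists`/`rename`/`delete` merely mimicking the Hadoop `FileSystem` calls in the diff against an in-memory set of paths.

    ```scala
    import scala.collection.mutable

    object CommitSketch {
      // Toy stand-in for Hadoop's FileSystem: a path either exists or it doesn't.
      final class FakeFs {
        val paths = mutable.Set.empty[String]
        def exists(p: String): Boolean = paths.contains(p)
        def rename(src: String, dst: String): Boolean =
          if (paths.remove(src)) { paths.add(dst); true } else false
        def delete(p: String): Boolean = paths.remove(p)
      }

      // Old logic: rename only when the final file is absent. If the final
      // file already exists, nothing happens and the temp file stays behind.
      def commitOld(fs: FakeFs, tmp: String, fin: String): Unit = {
        if (!fs.exists(fin) && !fs.rename(tmp, fin)) {
          throw new IllegalStateException(s"Failed to rename $tmp to $fin")
        }
      }

      // New logic from the diff: if the final file already exists, delete the
      // orphaned temp file instead of leaving it on HDFS.
      def commitNew(fs: FakeFs, tmp: String, fin: String): Unit = {
        if (fs.exists(fin)) {
          fs.delete(tmp)
        } else if (!fs.rename(tmp, fin)) {
          throw new IllegalStateException(s"Failed to rename $tmp to $fin")
        }
      }
    }
    ```

    Running both versions on a state where the final file already exists shows the change: the old logic leaves the temp file in place, while the new one removes it.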


