[ 
https://issues.apache.org/jira/browse/SPARK-36406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Zsolt Piros resolved SPARK-36406.
----------------------------------------
    Fix Version/s: 3.3.0
       Resolution: Fixed

Issue resolved by pull request 33628
[https://github.com/apache/spark/pull/33628]

> No longer do file truncate operation before delete a write failed file held 
> by DiskBlockObjectWriter
> ----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-36406
>                 URL: https://issues.apache.org/jira/browse/SPARK-36406
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.3.0
>            Reporter: Yang Jie
>            Assignee: Yang Jie
>            Priority: Minor
>             Fix For: 3.3.0
>
>
> We always perform a file truncate operation (via the 
> DiskBlockObjectWriter.revertPartialWritesAndClose method) before deleting a 
> write-failed file held by a DiskBlockObjectWriter. A typical code path is as 
> follows:
>  
> {code:java}
> if (!success) {
>   // This code path only happens if an exception was thrown above before we 
> set success;
>   // close our stuff and let the exception be thrown further
>   writer.revertPartialWritesAndClose()
>   if (file.exists()) {
>     if (!file.delete()) {
>       logWarning(s"Error deleting ${file}")
>     }
>   }
> }{code}
>  
> This truncate operation seems unnecessary; we can add a new method that 
> avoids it.
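The idea above can be sketched in plain Java. This is a minimal, self-contained illustration, not the actual Spark implementation: the names `revertAndClose` and `closeAndDelete` are hypothetical stand-ins for the existing revert path and the proposed new method, showing why truncating a file that is about to be deleted is wasted I/O.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

// Hypothetical sketch contrasting the two cleanup paths.
public class WriterCleanupSketch {

    // Mimics the revert path: truncate back to the last committed
    // position before closing. The truncate is an extra syscall that
    // is pointless when the file is deleted immediately afterwards.
    static void revertAndClose(FileOutputStream out, long committedPosition)
            throws IOException {
        FileChannel channel = out.getChannel();
        channel.truncate(committedPosition);
        out.close();
    }

    // Proposed alternative: just close the stream and delete the file,
    // skipping the truncate entirely.
    static boolean closeAndDelete(FileOutputStream out, File file)
            throws IOException {
        out.close();
        return file.delete();
    }

    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("spill", ".tmp");
        FileOutputStream out = new FileOutputStream(file);
        out.write(new byte[]{1, 2, 3}); // partial, "failed" write
        boolean deleted = closeAndDelete(out, file);
        System.out.println(deleted && !file.exists()); // prints "true"
    }
}
```

The point of the sketch is only that the delete makes the preceding truncate redundant; error handling (the `if (!file.delete()) logWarning(...)` guard in the quoted snippet) would stay with the caller.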



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
