LuciferYang commented on pull request #33556:
URL: https://github.com/apache/spark/pull/33556#issuecomment-890880119


   > This is a known issue. You're right. It doesn't make sense to truncate the
   > file since we'll delete it anyway; the truncate is probably there for
   > historical reasons. But I don't like the idea of adding a parameter to
   > control the truncate operation. I don't think there's a valid case where we
   > actually want to truncate. Do we? This is really a separate issue from what
   > this PR set out to fix, but I'm OK if you'd like to refactor
   > revertPartialWritesAndClose and fix both issues together.
   
   Thanks for your explanation @Ngone51. I will fix it in a separate PR, since
it is independent of the current one.
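   
   For context, a minimal sketch of the kind of revert path being discussed
(the class and field names here, e.g. `PartialFileWriter` and
`committedPosition`, are illustrative only and are not Spark's actual
`DiskBlockObjectWriter`):
   
   ```scala
   import java.io.{File, FileOutputStream}
   
   // Sketch: a writer that appends to a file and can revert uncommitted
   // bytes by truncating back to the last committed position.
   class PartialFileWriter(file: File) {
     private val out = new FileOutputStream(file, true)
     private var committedPosition: Long = file.length()
   
     def write(bytes: Array[Byte]): Unit = out.write(bytes)
   
     def commit(): Unit = {
       out.flush()
       committedPosition = file.length()
     }
   
     // Revert partial (uncommitted) writes. Truncating only matters if the
     // caller keeps the file; if the file is deleted right after the revert,
     // the truncate is wasted work -- the point raised in this thread.
     def revertPartialWritesAndClose(): Unit = {
       out.close()
       val channel = new FileOutputStream(file, true).getChannel
       try channel.truncate(committedPosition)
       finally channel.close()
     }
   }
   ```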
   
   

