Thanks @chamikaramj 

> According to FileSystem.rename() we should be catching 
> java.io.FileNotFoundException instead of 
> java.nio.file.FileAlreadyExistsException.

I'm afraid this PR is all about the destination paths 
(FileAlreadyExistsException) and has nothing to do with the source paths 
(FileNotFoundException).

FileSystem.rename() was inadequately documented, IMO: it only took into 
account files no longer existing at the source (e.g. to accommodate 
failure/retry) and overlooked that implementations may also throw when the 
targets already exist. To my knowledge HDFS is the only one that does (see my 
recent commit on it, where it now throws instead of swallowing the failure and 
leaving corrupt data). 
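To make the distinction concrete, here is a minimal sketch of the two failure modes (plain java.nio for self-containment; this is illustrative, not Beam's actual code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

class RenameSketch {
  static void rename(Path src, Path dest) throws IOException {
    if (!Files.exists(src)) {
      // Source is gone - e.g. a retry after a partially successful rename.
      throw new FileNotFoundException(src.toString());
    }
    if (Files.exists(dest)) {
      // Target already exists - some file systems (HDFS here) refuse to overwrite.
      throw new FileAlreadyExistsException(dest.toString());
    }
    Files.move(src, dest);
  }
}
```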

I designed this so that a FileAlreadyExistsException thrown by any FS could be 
caught, with the caller able to opt to delete the target and retry, or not.
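The caller-side policy I had in mind looks roughly like this (again java.nio to keep it self-contained; the method name and overwrite flag are hypothetical):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

class RenamePolicy {
  // On collision, the caller decides: delete the target and retry, or propagate.
  static void renameWithPolicy(Path src, Path dest, boolean overwrite) throws IOException {
    try {
      Files.move(src, dest);
    } catch (FileAlreadyExistsException e) {
      if (!overwrite) {
        throw e; // surface the collision to the caller
      }
      Files.delete(dest);    // remove the stale target
      Files.move(src, dest); // then retry the rename
    }
  }
}
```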

Given the confusion my implementation has caused, I now believe it sensible to 
close this PR and instead make HDFSFileSystem handle this internally. It will 
then always overwrite existing files (as other FS implementations do). I 
didn't want to do that originally because it is not intuitive - Hadoop devs 
are used to jobs failing when targets exist. The reality is, though, that we 
need to force that behaviour anyway for the FileBasedSink.
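Roughly, the internal handling would look like this against Hadoop's org.apache.hadoop.fs.FileSystem API (a sketch only; the method name is illustrative, not the actual patch):

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class HadoopRenameSketch {
  static void renameOverwriting(FileSystem fs, Path src, Path dest) throws IOException {
    if (fs.exists(dest)) {
      fs.delete(dest, false); // drop the existing target so the rename can succeed
    }
    if (!fs.rename(src, dest)) {
      // HDFS rename returns false rather than throwing for some failures.
      throw new IOException("Unable to rename " + src + " to " + dest);
    }
  }
}
```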

@iemejia @chamikaramj - would you be OK with that approach, please? Thank you 
both for taking the time to be involved in this, and I am sorry that I have not 
been able to provide an intuitive solution - the questions it has raised 
convince me that it is a confusing patch and shouldn't be merged. 

[ Full content available at: https://github.com/apache/beam/pull/6289 ]