Steve Loughran created HADOOP-16721:
---------------------------------------

             Summary: Add fs.s3a.rename.raises.exceptions to raise exceptions 
on rename failures
                 Key: HADOOP-16721
                 URL: https://issues.apache.org/jira/browse/HADOOP-16721
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.2.0
            Reporter: Steve Loughran


The classic {{rename(source, dest)}} operation returns {{false}} on certain failures, which, while somewhat consistent with the POSIX APIs, turns out to be useless for identifying the cause of problems. Applications tend to have code which goes

{code}
if (!fs.rename(src, dest)) throw new IOException("rename failed");
{code}

While ultimately the rename/3 call needs to be made public (HADOOP-11452), it would then need adoption across applications. We can do this in the hadoop modules, but for Hive, Spark etc. it will take a long time.
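
As a rough sketch of where applications would end up, assuming HADOOP-11452 publishes the existing protected {{rename(Path, Path, Options.Rename...)}} signature, failures would surface as specific IOExceptions rather than a bare {{false}}:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options.Rename;
import org.apache.hadoop.fs.Path;

public class RenameExample {
  /**
   * Once rename/3 is public, callers get FileNotFoundException,
   * FileAlreadyExistsException etc. instead of a boolean to second-guess.
   */
  public static void moveOutput(FileSystem fs, Path src, Path dest)
      throws IOException {
    fs.rename(src, dest, Rename.NONE);
  }
}
{code}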

Proposed: a switch to tell S3A to stop downgrading certain failures (source is a directory, dest is a file, src == dest, etc.) into "false". This can be turned on when trying to diagnose why things like Hive are failing.
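
To illustrate, a minimal sketch of turning the switch on when creating the filesystem; the option name is taken from this issue's summary and there is no shipped constant for it yet:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
// proposed diagnostics switch; off by default so existing callers see no change
conf.setBoolean("fs.s3a.rename.raises.exceptions", true);
FileSystem fs = FileSystem.newInstance(
    new Path("s3a://example-bucket/").toUri(), conf);
{code}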

Production code: trivial
* change in rename() (see the sketch below),
* new option,
* docs.
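
Hypothetical shape of the change in {{S3AFileSystem.rename()}}; the field name is illustrative, not final:

{code}
// in S3AFileSystem; innerRename() is the existing code path which raises
// RenameFailedException etc. on the failure conditions.
public boolean rename(Path src, Path dst) throws IOException {
  try {
    innerRename(src, dst);
    return true;
  } catch (RenameFailedException e) {
    if (raiseExceptionsOnRenameFailure) {  // new option, read in initialize()
      throw e;                             // surface the real cause
    }
    LOG.debug("rename failed", e);
    return false;                          // classic downgraded behaviour
  }
}
{code}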

Test code:
* need to clear this option for the rename contract tests,
* need to create a new FS with the option set and verify that the various failure modes trigger it (sketch after this list).
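
A hedged sketch of one such test, assuming an S3A integration test base with a {{path()}} helper and a {{testURI}} for the test bucket:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.ContractTestUtils;
import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.Test;

@Test
public void testRenameDirOntoFileRaises() throws Throwable {
  Configuration conf = new Configuration();
  conf.setBoolean("fs.s3a.rename.raises.exceptions", true);
  try (FileSystem fs = FileSystem.newInstance(testURI, conf)) {
    Path dir = path("srcdir");
    Path file = path("destfile");
    fs.mkdirs(dir);
    ContractTestUtils.touch(fs, file);
    // dest is a file: classic rename() downgrades to false; with the
    // switch set we expect the underlying failure instead.
    LambdaTestUtils.intercept(IOException.class,
        () -> fs.rename(dir, file));
  }
}
{code}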

 

If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS.



