[
https://issues.apache.org/jira/browse/HADOOP-16721?focusedWorklogId=561031&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-561031
]
ASF GitHub Bot logged work on HADOOP-16721:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 04/Mar/21 17:50
Start Date: 04/Mar/21 17:50
Worklog Time Spent: 10m
Work Description: steveloughran opened a new pull request #2742:
URL: https://github.com/apache/hadoop/pull/2742
S3A rename to support two new options:
fs.s3a.rename.raises.exceptions: raise exceptions on rename failures.
fs.s3a.rename.reduced.probes: don't look for the parent dir (LIST); just verify
it isn't a file.
The reduced probe not only saves money, it also avoids race conditions in which
one thread deleting a subdirectory causes a LIST of the parent to fail before
a directory marker is recreated.
Note:
* file:// rename() creates parent dirs, so this isn't too dangerous.
* tests will switch modes.
We could always just do the HEAD; that's a topic for discussion. In this patch
the behaviour is optional.
Change-Id: Ic0f8a410b45fef14ff522cb5aa1ae2bc19c8eeee
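As a sketch of how the raise-exceptions switch might change failure reporting (a minimal standalone model; the class, method, and failure check here are invented, not the actual S3AFileSystem implementation):

```java
import java.io.IOException;

// Sketch: the same failure either downgrades to "false" (legacy behaviour,
// cause is lost) or raises an exception that names the cause.
class RenameFailureSketch {
    static boolean rename(String src, String dest,
                          boolean raiseExceptions) throws IOException {
        if (src.equals(dest)) {  // one of the classically downgraded failures
            if (raiseExceptions) {
                throw new IOException(
                    "rename failed: source equals destination: " + src);
            }
            return false;        // legacy path: caller only sees "false"
        }
        return true;             // pretend the copy + delete succeeded
    }
}
```

With the switch off, callers keep the existing `if (!rename(...))` pattern; with it on, the thrown message identifies which precondition failed, which is the diagnostic value the issue below asks for.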
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 561031)
Remaining Estimate: 0h
Time Spent: 10m
> Improve S3A rename resilience
> -----------------------------
>
> Key: HADOOP-16721
> URL: https://issues.apache.org/jira/browse/HADOOP-16721
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Improve rename resilience in two ways.
> h3. parent dir probes
> add an option to skip the LIST for the parent and just do a HEAD object to
> make sure it is not a file.
> h3. failure reporting
> the classic {{rename(source, dest)}} operation returns {{false}} on certain
> failures, which, while somewhat consistent with the posix APIs, turns out to
> be useless for identifying the cause of problems. Applications tend to have
> code which goes
> {code}
> if (!fs.rename(src, dest)) throw new IOException("rename failed");
> {code}
> While ultimately the rename/3 call needs to be made public (HADOOP-11452), it
> would then need adoption across applications. We can do this in the Hadoop
> modules, but for Hive, Spark etc. it will take a long time.
> Proposed: a switch to tell S3A to stop downgrading certain failures (source
> is dir, dest is file, src==dest, etc) into "false". This can be turned on
> when trying to diagnose why things like Hive are failing.
> Production code: trivial
> * change in rename(),
> * new option
> * docs.
> Test code:
> * need to clear this option for rename contract tests
> * need to create a new FS with this set to verify the various failure modes
> trigger it.
>
> If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]