[
https://issues.apache.org/jira/browse/HADOOP-16721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296038#comment-17296038
]
Steve Loughran commented on HADOOP-16721:
-----------------------------------------
PR #1 had the probe policy optional between LIST dir and HEAD object; having
looked at a trace of a failure more closely, we have to stop doing the LIST
calls altogether, as Hive deleting subdirs in separate threads can break
renames in other threads.
Instead:
* HEAD object to guarantee no rename under file
* contract XML changed appropriately
* test rename under file subdir skipped
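The HEAD-only probe can be sketched as follows. This is a minimal model, not the actual S3AFileSystem code: the bucket is a flat key-to-size map, and `headObject`/`sourceFileExists` are hypothetical helper names.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch: probe the rename source with a single HEAD of the exact key,
// instead of a LIST of the parent dir. Names here are illustrative.
public class HeadProbe {
  // S3 really is a flat key -> object store; model it as a map.
  private final Map<String, Long> store = new HashMap<>();

  public void put(String key, long size) {
    store.put(key, size);
  }

  // HEAD object: metadata for exactly this key. No listing happens,
  // so concurrent deletes of sibling keys cannot change the answer.
  public Optional<Long> headObject(String key) {
    return Optional.ofNullable(store.get(key));
  }

  // Guard used before the copy: the source must exist as an object.
  public boolean sourceFileExists(String key) {
    return headObject(key).isPresent();
  }
}
```

Because the probe addresses exactly one key, another thread deleting siblings under the same prefix cannot confuse it, which is the point of dropping the LIST.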
> Improve S3A rename resilience
> -----------------------------
>
> Key: HADOOP-16721
> URL: https://issues.apache.org/jira/browse/HADOOP-16721
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> h3. race condition in delete/rename overlap
> If you have multiple threads on a system doing rename operations, then one
> thread doing a delete(dest/subdir) may delete the last file under a subdir
> and, before it has listed and recreated any parent dir marker, other threads
> may conclude there's an empty dest dir and fail.
> This is most likely on an overloaded system with many threads executing
> rename operations, as with parallel copying taking place there are many
> threads to schedule and HTTPS connections to pool.
> h3. failure reporting
> The classic {{rename(source, dest)}} operation returns {{false}} on certain
> failures, which, while somewhat consistent with the POSIX APIs, turns out to
> be useless for identifying the cause of problems. Applications tend to have
> code which goes
> {code}
> if (!fs.rename(src, dest)) throw new IOException("rename failed");
> {code}
> While ultimately the rename/3 call needs to be made public (HADOOP-11452), it
> would then need adoption across applications. We can do this in the Hadoop
> modules, but for Hive, Spark etc. it will take a long time.
> Proposed: a switch to tell S3A to stop downgrading certain failures (source
> is dir, dest is file, src==dest, etc) into "false". This can be turned on
> when trying to diagnose why things like Hive are failing.
> Production code: trivial
> * change in rename(),
> * new option
> * docs.
> Test code:
> * need to clear this option for rename contract tests
> * need to create a new FS with this set to verify the various failure modes
> trigger it.
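A minimal sketch of the proposed switch, with the failure checks reduced to booleans; the option plumbing and its real name are up to the patch, none of this is the actual S3A code:

```java
import java.io.IOException;

// Sketch: when raiseExceptions is set, rename failures throw a
// descriptive IOException instead of being downgraded to "false".
public class StrictRename {
  public static boolean rename(boolean sourceExists, boolean destIsFile,
      boolean raiseExceptions) throws IOException {
    if (!sourceExists) {
      if (raiseExceptions) {
        throw new IOException("rename failed: source does not exist");
      }
      return false;                 // classic silent downgrade
    }
    if (destIsFile) {
      if (raiseExceptions) {
        throw new IOException("rename failed: destination is a file");
      }
      return false;
    }
    return true;                    // copy + delete would happen here
  }
}
```

With the flag off, callers keep the current contract; with it on, the `if (!fs.rename(...))` pattern above would surface the actual cause instead of a generic message.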
>
> If this works we should do the same for ABFS, GCS. Hey, maybe even HDFS
--
This message was sent by Atlassian Jira
(v8.3.4#803005)