[
https://issues.apache.org/jira/browse/HADOOP-16721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-16721:
------------------------------------
Description:
h3. race condition in delete/rename overlap
If you have multiple threads on a system doing rename operations, then one
thread doing a delete(dest/subdir) may delete the last file under a subdir
and, before it has listed and recreated any parent dir marker, other threads
may conclude there's an empty dest dir and fail.
This is most likely on an overloaded system with many threads executing rename
operations, as with parallel copying taking place there are many threads to
schedule and https connections to pool.
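For illustration, here is a minimal sketch of the kind of client workload that can hit this window; the bucket name, paths and pool size are all invented, and since the race is timing-dependent it will not fail deterministically:
{code}
import java.net.URI;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameRaceSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), new Configuration());
    ExecutorService pool = Executors.newFixedThreadPool(16);
    // many threads renaming into the same destination tree
    for (int i = 0; i < 16; i++) {
      Path src = new Path("s3a://example-bucket/src/part-" + i);
      pool.submit(() -> fs.rename(src, new Path("s3a://example-bucket/dest")));
    }
    // meanwhile another thread deletes a subdirectory under dest; in the
    // window before the parent dir marker is recreated, the renaming threads
    // may list an apparently empty dest dir and fail
    pool.submit(() -> fs.delete(new Path("s3a://example-bucket/dest/subdir"), true));
    pool.shutdown();
  }
}
{code}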
h3. failure reporting
The classic {{rename(source, dest)}} operation returns {{false}} on certain
failures, which, while somewhat consistent with the POSIX APIs, turns out to be
useless for identifying the cause of problems. Applications tend to have code
which goes
{code}
if (!fs.rename(src, dest)) throw new IOException("rename failed and we don't know why");
{code}
This change modifies the S3A FS to
# raise FileNotFoundException if the source is missing
# raise FileAlreadyExistsException if the destination isn't suitable for the
source (source is a dir and the dest is one of: a file, a non-empty directory)
It still returns false for "no-op renames", e.g. where source == dest.
Other stores raise the same exceptions; with this change S3A moves away from
consistency with HDFS towards behaviour where applications find out what is wrong.
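With that, the application check above can report the actual cause. A minimal sketch of caller code under the new semantics; the helper name and error messages are invented for illustration:
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameDiagnostics {
  /** Rename, translating the typed exceptions into meaningful errors. */
  public static void renameReporting(FileSystem fs, Path src, Path dest)
      throws IOException {
    try {
      if (!fs.rename(src, dest)) {
        // false is now reserved for no-op renames, e.g. source == dest
        System.out.println("rename was a no-op: " + src + " -> " + dest);
      }
    } catch (FileNotFoundException e) {
      throw new IOException("rename failed: source " + src + " is missing", e);
    } catch (FileAlreadyExistsException e) {
      throw new IOException("rename failed: dest " + dest
          + " is a file or a non-empty directory", e);
    }
  }
}
{code}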
was:
h3. race condition in delete/rename overlap
If you have multiple threads on a system doing rename operations, then one
thread doing a delete(dest/subdir) may delete the last file under a subdir
and, before it has listed and recreated any parent dir marker, other threads
may conclude there's an empty dest dir and fail.
This is most likely on an overloaded system with many threads executing rename
operations, as with parallel copying taking place there are many threads to
schedule and https connections to pool.
h3. failure reporting
The classic {{rename(source, dest)}} operation returns {{false}} on certain
failures, which, while somewhat consistent with the POSIX APIs, turns out to be
useless for identifying the cause of problems. Applications tend to have code
which goes
{code}
if (!fs.rename(src, dest)) throw new IOException("rename failed");
{code}
While ultimately the rename/3 call needs to be made public (HADOOP-11452), it
would then need adoption across applications. We can do this in the Hadoop
modules, but for Hive, Spark etc. it will take a long time.
Proposed: a switch to tell S3A to stop downgrading certain failures (source is
a dir, dest is a file, src == dest, etc.) into "false". This can be turned on
when trying to diagnose why things like Hive are failing; a client-side sketch
follows below.
Production code: trivial
* change in rename()
* new option
* docs
Test code:
* need to clear this option for rename contract tests
* need to create a new FS with this set to verify the various failure modes
trigger it.
If this works, we should do the same for ABFS and GCS. Hey, maybe even HDFS.
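For the record, a sketch of what that earlier switch could have looked like from the client side; the property name is invented here, as the issue never settled on one:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class StrictRenameSketch {
  public static FileSystem create() throws Exception {
    Configuration conf = new Configuration();
    // hypothetical option name, invented for illustration
    conf.setBoolean("fs.s3a.rename.raises.exceptions", true);
    // an S3A instance created with this set would throw on the failure modes
    // (source is dir, dest is file, src == dest, ...) instead of returning false
    return FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);
  }
}
{code}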
> Improve S3A rename resilience
> -----------------------------
>
> Key: HADOOP-16721
> URL: https://issues.apache.org/jira/browse/HADOOP-16721
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
> Time Spent: 3h
> Remaining Estimate: 0h
>