[
https://issues.apache.org/jira/browse/HDFS-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015661#comment-17015661
]
Ayush Saxena commented on HDFS-15112:
-------------------------------------
bq. So this change breaks the assumption that we can create files when a
subcluster is down.
The assumption that we won't fail when a subcluster is unavailable holds only
when the mount entry is fault tolerant; otherwise the call should fail, right?
If we catch and ignore here:
{code:java}
} catch (IOException ioe) {
  if (RouterRpcClient.isUnavailableException(ioe)) {
    LOG.debug("Ignore unavailable exception: {}", ioe);
  } else {
    throw ioe;
  }
}
{code}
we will also be ignoring this exception when the mount entry is not fault
tolerant.
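The concern above can be sketched as a small decision helper: swallow an
"unavailable" exception only when the mount entry is fault tolerant, and
rethrow it otherwise. This is a hypothetical illustration, not the RBF code
path: {{isUnavailableException}} and {{handle}} here are stand-ins for the
actual {{RouterRpcClient}} logic, and the string check is a placeholder for
the real exception classification.
{code:java}
import java.io.IOException;

// Sketch of the fault-tolerance gate discussed above. All names are
// illustrative stand-ins, not the actual Hadoop RBF APIs.
public class FaultTolerantIgnoreSketch {

    // Stand-in for RouterRpcClient.isUnavailableException(ioe):
    // a placeholder check, not the real classification logic.
    static boolean isUnavailableException(IOException ioe) {
        String msg = ioe.getMessage();
        return msg != null && msg.contains("unavailable");
    }

    // Returns true if the exception was safely swallowed; rethrows otherwise.
    static boolean handle(IOException ioe, boolean mountEntryFaultTolerant)
            throws IOException {
        if (mountEntryFaultTolerant && isUnavailableException(ioe)) {
            // The mount entry tolerates a down subcluster: ignore and continue.
            return true;
        }
        // Non-fault-tolerant entries must surface the failure to the client.
        throw ioe;
    }

    public static void main(String[] args) throws IOException {
        // Fault-tolerant entry: the unavailable subcluster is ignored.
        System.out.println(handle(new IOException("subcluster unavailable"), true));
        // Non-fault-tolerant entry: the exception propagates to the caller.
        try {
            handle(new IOException("subcluster unavailable"), false);
        } catch (IOException expected) {
            System.out.println("rethrown: " + expected.getMessage());
        }
    }
}
{code}
Gating the ignore on the mount entry's fault-tolerance flag is exactly what
the quoted patch snippet omits.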
> RBF: Do not return FileNotFoundException when a subcluster is unavailable
> --------------------------------------------------------------------------
>
> Key: HDFS-15112
> URL: https://issues.apache.org/jira/browse/HDFS-15112
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Íñigo Goiri
> Assignee: Íñigo Goiri
> Priority: Major
> Attachments: HDFS-15112.000.patch, HDFS-15112.001.patch,
> HDFS-15112.002.patch, HDFS-15112.004.patch, HDFS-15112.005.patch,
> HDFS-15112.006.patch, HDFS-15112.007.patch, HDFS-15112.008.patch,
> HDFS-15112.patch
>
>
> If we have a mount point using HASH_ALL across two subclusters and one of
> them is down, we may return FileNotFoundException even though the file may
> simply reside in the unavailable subcluster.
> We should not return FileNotFoundException but an error indicating that the
> subcluster is unavailable.