[ https://issues.apache.org/jira/browse/HDFS-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17017401#comment-17017401 ]
Íñigo Goiri commented on HDFS-15112:
------------------------------------

Correct, the change in RouterRPCServer is only there to make the current testWriteWithFailedSubcluster() work. Let's open a new JIRA to change testWriteWithFailedSubcluster() so that writes always fail when one subcluster is down.

> RBF: Do not return FileNotFoundException when a subcluster is unavailable
> --------------------------------------------------------------------------
>
>                 Key: HDFS-15112
>                 URL: https://issues.apache.org/jira/browse/HDFS-15112
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Major
>         Attachments: HDFS-15112.000.patch, HDFS-15112.001.patch, HDFS-15112.002.patch, HDFS-15112.004.patch, HDFS-15112.005.patch, HDFS-15112.006.patch, HDFS-15112.007.patch, HDFS-15112.008.patch, HDFS-15112.009.patch, HDFS-15112.patch
>
> If we have a mount point using HASH_ALL across two subclusters and one of them is down, we may return a FileNotFoundException even though the file exists in the unavailable subcluster.
> We should not return FileNotFoundException but an error that shows that the subcluster is unavailable.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
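The behavior the issue asks for can be sketched as follows. This is a minimal, self-contained illustration of the idea, not the actual RouterRpcServer code: all class and method names here (SubclusterResult, resolve, SubclusterUnavailableException) are hypothetical, and it assumes the Router fans out a lookup to each subcluster and then decides which exception to surface.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class SubclusterLookupSketch {

    /** Marker exception: the result is unknown because a subcluster is down. */
    static class SubclusterUnavailableException extends IOException {
        SubclusterUnavailableException(String msg) { super(msg); }
    }

    /** Outcome of querying one subcluster (illustrative, not the real API). */
    static class SubclusterResult {
        final boolean reachable;
        final boolean found;
        SubclusterResult(boolean reachable, boolean found) {
            this.reachable = reachable;
            this.found = found;
        }
    }

    /**
     * Resolve a path across subclusters. Only claim the file is missing
     * when every subcluster was reachable and none of them had it.
     */
    static String resolve(String path, SubclusterResult[] results) throws IOException {
        boolean anyUnavailable = false;
        for (SubclusterResult r : results) {
            if (!r.reachable) {
                anyUnavailable = true;
            } else if (r.found) {
                return path; // found in a live subcluster
            }
        }
        if (anyUnavailable) {
            // Do not report FileNotFoundException: the file may live
            // in the subcluster we could not reach.
            throw new SubclusterUnavailableException(
                "Cannot resolve " + path + ": a subcluster is unavailable");
        }
        throw new FileNotFoundException(path + " not found in any subcluster");
    }

    public static void main(String[] args) throws IOException {
        // HASH_ALL mount over two subclusters; one is down, the live one
        // does not have the file, so the result is "unavailable", not FNF.
        try {
            resolve("/mnt/file", new SubclusterResult[] {
                new SubclusterResult(false, false),
                new SubclusterResult(true, false)});
        } catch (SubclusterUnavailableException e) {
            System.out.println("unavailable"); // prints "unavailable"
        }
    }
}
```

The key design point is that FileNotFoundException is only thrown on a definitive negative answer from every subcluster; any unreachable subcluster downgrades the answer to "unknown".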