[ https://issues.apache.org/jira/browse/HDFS-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594062#comment-16594062 ]
Íñigo Goiri commented on HDFS-13857:
------------------------------------

Thanks [~hfyang20071] for the patch. I'm not sure the implementation in [^HDFS-13857.001.patch] is only for writes, as it will also disallow reads. I'm fine with that, but we need to rename everything accordingly. Some other comments:
* We could throw the exception in the lookup itself and then just add {{if (!defaultNSWriteEnable) throw new IOException("Cannot find locations for " + path);}}
* The message could be a little more specific in this case and say we don't support this because the default nameservice is not enabled.
* We should have the unit test for the false case too.
* The change to FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE should be avoided to reduce churn.

> RBF: Choose to enable the default nameservice to write files.
> -------------------------------------------------------------
>
>                 Key: HDFS-13857
>                 URL: https://issues.apache.org/jira/browse/HDFS-13857
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: federation, hdfs
>    Affects Versions: 3.0.0, 3.1.0, 2.9.1
>            Reporter: yanghuafeng
>            Assignee: yanghuafeng
>            Priority: Major
>         Attachments: HDFS-13857.001.patch
>
> The default nameservice can provide some default properties for the namenode
> protocol, and if we cannot find the path in the mount table, we will get a
> location in the default nameservice. As a cluster administrator, I need all
> files to be written to a location from a MountTableEntry; if there is no
> corresponding location, an error should be returned. It is not good for
> files to end up written in some unknown location. We should provide a
> specific parameter to enable or disable storing files in the default
> nameservice.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
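The guarded lookup suggested in the comment above could be sketched as follows. This is an illustrative simplification, not the actual Router code: the class, the mount-table representation, and the field name `defaultNSEnable` are assumptions standing in for Hadoop's real `MountTableResolver`/`RouterRpcServer` classes.

```java
import java.io.IOException;
import java.util.Map;

/**
 * Hypothetical sketch of the proposed behavior: the location lookup itself
 * throws when a path matches no mount table entry and falling back to the
 * default nameservice has been disabled by configuration.
 */
public class MountTableResolverSketch {
  private final Map<String, String> mountTable; // mount point -> nameservice
  private final String defaultNameservice;
  private final boolean defaultNSEnable;        // the proposed new setting

  public MountTableResolverSketch(Map<String, String> mountTable,
      String defaultNameservice, boolean defaultNSEnable) {
    this.mountTable = mountTable;
    this.defaultNameservice = defaultNameservice;
    this.defaultNSEnable = defaultNSEnable;
  }

  /**
   * Resolve a path to a nameservice. When no mount entry matches and the
   * default nameservice is disabled, fail with the more specific message
   * suggested in the review instead of silently using the default.
   */
  public String getDestination(String path) throws IOException {
    for (Map.Entry<String, String> entry : mountTable.entrySet()) {
      if (path.startsWith(entry.getKey())) {
        return entry.getValue();
      }
    }
    if (!defaultNSEnable) {
      throw new IOException("Cannot find locations for " + path
          + " because the default nameservice is disabled");
    }
    return defaultNameservice;
  }
}
```

With `defaultNSEnable=false`, a path under a mount point still resolves normally, while an unmounted path fails fast instead of landing in the default nameservice; this also covers the "false unit test" the review asks for.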