[
https://issues.apache.org/jira/browse/HDFS-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15189523#comment-15189523
]
Allen Wittenauer commented on HDFS-9179:
----------------------------------------
bq. The justification was that the NN service URI configuration is more
complicated than makes sense. First we look for a servicerpc-address. Failing
at that, we look for an rpc-address. Failing at that we fall back to the
defaultFS. And things just get worse with HA.
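(For reference, the lookup order being called "too complicated" there boils down to roughly the following. This is a minimal sketch assuming plain string keys rather than the DFSConfigKeys constants and a hard-coded default port of 8020; it is not the actual NameNode code, and the HA case with per-nameservice/per-namenode key suffixes is not shown.)
{code:java}
import java.net.InetSocketAddress;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.net.NetUtils;

public class NnRpcAddressSketch {
  /** Roughly the fallback chain quoted above; illustration only. */
  static InetSocketAddress resolveNnRpcAddress(Configuration conf) {
    // 1. Prefer the dedicated service RPC address, if one is configured.
    String addr = conf.get("dfs.namenode.servicerpc-address");
    if (addr == null || addr.isEmpty()) {
      // 2. Otherwise fall back to the client RPC address.
      addr = conf.get("dfs.namenode.rpc-address");
    }
    if (addr != null && !addr.isEmpty()) {
      return NetUtils.createSocketAddr(addr);
    }
    // 3. Last resort: derive host:port from fs.defaultFS
    //    (assumes an hdfs:// URI such as hdfs://nn.example.com:8020).
    URI defaultUri = FileSystem.getDefaultUri(conf);
    return NetUtils.createSocketAddr(defaultUri.getAuthority(), 8020);
  }
}
{code}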
I think the vast, vast, vast majority of end users would actually disagree.
bq. I grant that those are fairly weak reasons to do something that's going to
cause lots of things to break, but it would make the configuration logic
cleaner.
Why should people do this work when computers have been doing it fine for quite
a while now? Trading actual labor for "cleaner logic" is a terrible trade-off.
Although, I'd *love* to be in the room when this exchange happens:
U: "You mean, I have to set all of these server configs and then set a bunch of
client configs that point to the exact same stuff? Why can't I just set one
and be done with it? I'm only running a single, simple cluster on AWS."
D: "No, you need to set multiple because otherwise our coding logic is too hard
to understand."
> fs.defaultFS should not be used on the server side
> --------------------------------------------------
>
> Key: HDFS-9179
> URL: https://issues.apache.org/jira/browse/HDFS-9179
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 2.7.1
> Reporter: Daniel Templeton
> Assignee: Daniel Templeton
>
> Currently the namenode will bind to the address given by defaultFS if no
> rpc-address is given. That behavior is an evolutionary artifact and should
> be removed. Instead, the rpc-address should be a required setting for the
> server-side configuration.
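For illustration, the split proposed here would look roughly like the following. The key names are the real ones, but the host and port are made-up examples, and this is a sketch of the intent rather than a patch:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ProposedSplitSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Client side (core-site.xml): the default filesystem URI only.
    conf.set("fs.defaultFS", "hdfs://nn.example.com:8020");
    // Server side (hdfs-site.xml): explicitly required under this proposal;
    // the NameNode would no longer derive its bind address from fs.defaultFS.
    conf.set("dfs.namenode.rpc-address", "nn.example.com:8020");
  }
}
{code}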
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)