[
https://issues.apache.org/jira/browse/HADOOP-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12646607#action_12646607
]
Doug Cutting commented on HADOOP-4631:
--------------------------------------
All default files should be disjoint, but must also be loaded before the site
file (hadoop-site.xml).
Here's a proposal:
. add a static 'defaultResources' field to Configuration, whose default value
is ["core-default.xml"];
. add a new static method, Configuration#addDefaultResource();
. in JobConf, add 'static {
Configuration.addDefaultResource("mapred-default.xml"); }'
. in DistributedFileSystem.java, NameNode.java and DataNode.java, add 'static
{ Configuration.addDefaultResource("hdfs-default.xml"); }' We might factor
this out into a common base class or some such. A rough sketch follows below.
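
To make the shape of this concrete, here is a minimal sketch. Only the names
'defaultResources' and addDefaultResource() come from the proposal above; the
list type and everything around them is simplified, not the real Configuration
internals:

  import java.util.List;
  import java.util.concurrent.CopyOnWriteArrayList;

  public class Configuration {
    // Default resources shared by every Configuration instance;
    // starts out with only the core defaults.
    private static final List<String> defaultResources =
        new CopyOnWriteArrayList<String>();
    static {
      defaultResources.add("core-default.xml");
    }

    /** Register a resource that all Configurations load as a default. */
    public static void addDefaultResource(String name) {
      if (!defaultResources.contains(name)) {
        defaultResources.add(name);
        // The real class would also mark live instances for reload here.
      }
    }

    // loadResources() would walk defaultResources before loading the
    // site file (hadoop-site.xml), so site settings still win.
  }

and, per the list above, each component registers its own defaults when one of
its classes is first loaded, e.g.:

  public class JobConf extends Configuration {
    static {
      Configuration.addDefaultResource("mapred-default.xml");
    }
  }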
When you first call FileSystem.get("hdfs:///", conf), your configuration won't
yet have hdfs-specific default values, but they will be loaded in the course of
this call, before any hdfs code references them. I think this is fine, since
application code should not be referencing hdfs-specific values directly. If it
needs to set them, it should do so through accessor methods, and those accessor
methods should dereference an HDFS class, forcing the HDFS defaults to load.
Does that make sense?
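
Something like the following, as a sketch only (the dfs.replication key and the
setReplication() accessor are just illustrations of the pattern, not proposed
APIs):

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.hdfs.DistributedFileSystem;

  public class HdfsDefaultsExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();  // core-default.xml only
      // Getting the HDFS FileSystem loads DistributedFileSystem, whose
      // static initializer (from the proposal above) registers
      // hdfs-default.xml before any HDFS code reads those keys.
      FileSystem fs = FileSystem.get(URI.create("hdfs:///"), conf);
      System.out.println(fs.getUri() + " default replication = "
          + conf.getInt("dfs.replication", 3));
    }

    // An accessor that dereferences an HDFS class first, so the HDFS
    // defaults are loaded before the caller's override is applied.
    public static void setReplication(Configuration conf, int repl) {
      DistributedFileSystem.class.getName();  // forces the static block to run
      conf.setInt("dfs.replication", repl);
    }
  }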
> Split the default configurations into 3 parts
> ---------------------------------------------
>
> Key: HADOOP-4631
> URL: https://issues.apache.org/jira/browse/HADOOP-4631
> Project: Hadoop Core
> Issue Type: Improvement
> Reporter: Owen O'Malley
> Fix For: 0.20.0
>
>
> We need to split hadoop-default.xml into core-default.xml, hdfs-default.xml
> and mapreduce-default.xml. That will enable us to split the project into 3
> parts that have the defaults distributed with each component.