[ https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15009201#comment-15009201 ]

Bob Hansen commented on HDFS-9117:
----------------------------------

{quote}
bq. We will also need a non-static method to load another batch of settings on 
top of currently loaded ones.

There is no need to support this. It's possible to make the configuration fully 
immutable.
{quote}

The current implementation is effectively immutable (each successive layer 
produces a new instance, and all of the mutation methods are private); we can 
make it completely immutable, but I think it will make the code messier.

My understanding is that Hadoop config files are layered on top of each other: 
the defaults first, then core-site.xml, then hdfs-site.xml, etc.  Won't this 
configuration class need to be able to represent that layered loading?

bq. Not much differences from a linker's perspective, huge differences from a 
human's perspective as it requires two declarations for each type in the header 
file.

Given that we will not be supporting every type, do you think it's better to 
have a single line that appears to support all types but fails at compile 
time, or two declarations for each type that explicitly show what types are 
supported?  Having consumed APIs in the past, I know I would prefer the latter.
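
For reference, the two header styles I'm contrasting look roughly like this 
(sketch only; the real accessor names may differ):
{code}
#include <cstdint>
#include <string>

class Configuration {
public:
  // Option 1: a single templated declaration.  The header reads as if any T
  // were supported; asking for an unsupported T only surfaces as a compile-
  // or link-time error in the caller.
  template <typename T>
  T Get(const std::string &key) const;
};

// Option 2: the same template plus an explicit specialization declaration per
// supported type, so the header spells out exactly what callers may request.
template <> std::string Configuration::Get<std::string>(const std::string &key) const;
template <> int64_t     Configuration::Get<int64_t>(const std::string &key) const;
template <> bool        Configuration::Get<bool>(const std::string &key) const;
{code}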

bq. The configuration needs to support integer, boolean, string, timeout, 
bytes, URI, vector of int, and vector of string

Again, do we think that calling 
{code}
config.get<std::vector<std::string>>("foo") 
{code} 
is a nicer interface than calling 
{code}
config.GetStrings("foo")
{code}

If we use the templated version, how do we distinguish between getting a string 
vs. a URI, or an integer vs. bytes?  We could make up a pretend type just for 
the purpose of disambiguating which code to call, but I think it's cleaner to 
just codify it in the method names.
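
To illustrate the disambiguation problem (the GetBytes / GetUri names and the 
wrapper types below are hypothetical, just to show the two shapes):
{code}
#include <cstdint>
#include <string>

// Wrapper "tag" types invented purely so a templated getter knows which
// parsing rules to apply:
struct Bytes { int64_t value; };     // e.g. "64k" -> 65536
struct Uri   { std::string value; }; // validated as a URI, not a raw string

class Configuration {
public:
  // Templated style: the type parameter carries the disambiguation, e.g.
  // Get<Bytes>("dfs.blocksize") or Get<Uri>("fs.defaultFS").
  template <typename T>
  T Get(const std::string &key) const;

  // Named-method style: the method name carries it instead.
  int64_t     GetBytes(const std::string &key) const;
  std::string GetUri(const std::string &key) const;
};
{code}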

I'm not dead-set against this, but as you pointed out, we're committed to the 
APIs we put in, and I'm always a fan of making APIs less ambiguous and more 
explicit if at all possible.

> Config file reader / options classes for libhdfs++
> --------------------------------------------------
>
>                 Key: HDFS-9117
>                 URL: https://issues.apache.org/jira/browse/HDFS-9117
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>    Affects Versions: HDFS-8707
>            Reporter: Bob Hansen
>            Assignee: Bob Hansen
>         Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch, HDFS-9117.HDFS-8707.005.patch, 
> HDFS-9117.HDFS-8707.006.patch, HDFS-9117.HDFS-8707.008.patch, 
> HDFS-9117.HDFS-8707.009.patch, HDFS-9117.HDFS-8707.010.patch, 
> HDFS-9117.HDFS-8707.011.patch, HDFS-9117.HDFS-8707.012.patch, 
> HDFS-9117.HDFS-8707.013.patch, HDFS-9117.HDFS-8707.014.patch, 
> HDFS-9117.HDFS-8707.015.patch, HDFS-9117.HDFS-9288.007.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
