[
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
James Clampffer updated HDFS-10441:
-----------------------------------
Attachment: HDFS-10441.HDFS-8707.010.patch
New patch. Addresses all of the "must haves" from Bob's last review.
bq. status.h: is having both is_server_exception_ and exception_class_
redundant?
Yep, got rid of is_server_exception_.
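A minimal sketch of why the separate flag is redundant, assuming the field names from the discussion (this mirrors the idea, not necessarily the actual patch):

```cpp
#include <string>

// Hypothetical sketch: a non-empty exception_class_ already implies the
// error came from the server, so no separate is_server_exception_ flag
// is needed.
class Status {
 public:
  Status(int code, std::string msg, std::string exception_class = "")
      : code_(code), msg_(std::move(msg)),
        exception_class_(std::move(exception_class)) {}

  // Derived on demand instead of stored redundantly.
  bool IsServerException() const { return !exception_class_.empty(); }

  const std::string &exception_class() const { return exception_class_; }

 private:
  int code_;
  std::string msg_;
  std::string exception_class_;
};
```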
bq. hdfs_configuration.c: We have a (faster) split function in uri.cc; let's
refactor that into a Util method
I went and implemented this, but I was getting valgrind errors in
configuration_test and hdfs_configuration_test due to statically initialized
protobuf state, even after calling the protobuf shutdown method, so I'm going
to push this into another jira. Adding it to the current util.h/cc also means
tests that don't really need protobuf and openssl have to link against them,
so I might try to separate out the util methods that don't need external libs.
bq. HdfsConfiguration::LookupNameService: if the URI parsing failed, we should
just ignore the URI as mal-formed, not bail out of the entire function. There
may be a well-formed URI in a later value.
I'll lump this in with the above improvement in a different jira. I want to
check out how the java client handles that first.
bq. HdfsConfiguration: I'm a little uncomfortable using the URI parser to break
apart host:port. If the user enters "foo:bar@baz", it will interpret that as a
password and silently drop everything before the baz. Just using split(':') and
converting the port to int if it exists is solid enough.
I'll lump this in as well since it's related.
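The reviewer's suggestion could look roughly like the following. This is a hedged sketch; the function name SplitHostPort and the -1 "no port" sentinel are illustrative, not from the patch. Note that it handles the "foo:bar@baz" case by refusing to guess rather than silently dropping text:

```cpp
#include <string>
#include <utility>

// Split an authority string on ':' and convert the port to int if present.
// Returns {host, -1} when no usable port is found.
static std::pair<std::string, int> SplitHostPort(const std::string &authority) {
  const auto colon = authority.rfind(':');
  if (colon == std::string::npos)
    return {authority, -1};  // no port given
  const std::string host = authority.substr(0, colon);
  const std::string port_str = authority.substr(colon + 1);
  try {
    return {host, std::stoi(port_str)};
  } catch (const std::exception &) {
    // "foo:bar@baz" lands here: the suffix isn't a number, so keep the
    // whole string as the host instead of silently discarding part of it.
    return {authority, -1};
  }
}
```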
bq. status.cc: I don't think the java exception name should go in the
(user-visible) output message. A string describing the error ("Invalid
Argument") would be nice, though.
I agree. In the short term I'd like to keep them around for debugging though.
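One possible shape for that follow-up, sketched here with illustrative table entries (the function name and the mapping are assumptions, not part of the patch): keep the raw exception class for debugging, but translate known classes into short user-facing descriptions.

```cpp
#include <map>
#include <string>

// Map a java exception class name to a short human-readable description
// for the user-visible message; fall back to the raw class name so
// nothing is lost for debugging. Entries here are illustrative.
static std::string DescribeException(const std::string &exception_class) {
  static const std::map<std::string, std::string> kDescriptions = {
      {"java.lang.IllegalArgumentException", "Invalid Argument"},
      {"org.apache.hadoop.ipc.StandbyException", "Namenode is in standby"},
  };
  auto it = kDescriptions.find(exception_class);
  return it != kDescriptions.end() ? it->second : exception_class;
}
```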
bq. filesystem.cc: why do we call InitRpc before checking if there's an
io_service_?
This was a mistake, but I got rid of InitRpc so it's no longer an issue.
bq. rpc_engine.h: Are ha_persisted_info_ and ha_enabled_ redundant?
Pretty much, except for the initial check that nulls out ha_persisted_info_
if parsing failed.
> libhdfs++: HA namenode support
> ------------------------------
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch,
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch,
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch,
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch,
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch,
> HDFS-10441.HDFS-8707.010.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)