[
https://issues.apache.org/jira/browse/HBASE-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12851052#action_12851052
]
Nicolas Spiegelberg commented on HBASE-2382:
--------------------------------------------
After talking with Dhruba offline, we think the best strategy is to keep the
replication responsibility on the client and add 'dfs.replication' to the hbase
config (or add a dfs-client config to hbase/conf) when needed. In the general
use case, multiple client programs could use the same HDFS cluster, and each
client will usually have different replication requirements. It's kinda naïve
for an HDFS client to not be aware of its cluster setup. As a suggestion for
clarity, maybe rename fs to fsClient so it's clear that we're directly
communicating with our local client app and not the FS itself.
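The config approach above would look something like the following fragment in hbase-site.xml; the property name 'dfs.replication' is the standard HDFS one, but the value 3 is purely illustrative:

```xml
<!-- Sketch only: set the client-side replication factor HBase uses when
     creating files (e.g. HLogs), so it does not depend on Hadoop's
     configuration files being on the classpath. Value 3 is illustrative. -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```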
> Don't rely on fs.getDefaultReplication() to roll HLogs
> ------------------------------------------------------
>
> Key: HBASE-2382
> URL: https://issues.apache.org/jira/browse/HBASE-2382
> Project: Hadoop HBase
> Issue Type: Improvement
> Reporter: Jean-Daniel Cryans
> Assignee: Nicolas Spiegelberg
> Fix For: 0.20.4, 0.21.0
>
> Attachments: HBASE-2382-20.4.patch
>
>
> As I was commenting in HBASE-2234, using fs.getDefaultReplication() to roll
> HLogs when they lose replicas isn't reliable, since that value is client-side:
> unless HBase is configured with it or has Hadoop's configuration files on its
> classpath, it will use the wrong value.
> Dhruba added:
> bq. Can we use <hlogpath>.getFileStatus().getReplication() instead of
> fs.getDefaultReplication()? This will ensure that we look at the repl
> factor of the precise file we are interested in, rather than what the
> system-wide default value is.
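Dhruba's suggestion boils down to comparing the HLog file's own replication factor against the count of replicas it currently has, instead of against the client's system-wide default. A minimal sketch of that decision (class and method names are hypothetical; in HBase the file's factor would come from fs.getFileStatus(hlogPath).getReplication(), and the current count from a live-replica query):

```java
public class HLogRollCheck {
    /**
     * Decide whether an HLog should be rolled to recover replication.
     * Hypothetical helper: compares the replicas the file currently has
     * against the replication factor it was created with, per file,
     * rather than against fs.getDefaultReplication().
     *
     * @param currentReplicas  replicas the HLog currently has
     * @param expectedReplicas replication factor of the file itself
     * @return true if the log has lost replicas and should be rolled
     */
    public static boolean shouldRollLog(int currentReplicas, int expectedReplicas) {
        return currentReplicas < expectedReplicas;
    }

    public static void main(String[] args) {
        // A log that lost a replica (2 of 3) triggers a roll; a healthy one does not.
        System.out.println(shouldRollLog(2, 3)); // true
        System.out.println(shouldRollLog(3, 3)); // false
    }
}
```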