[
https://issues.apache.org/jira/browse/HDFS-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Arun Suresh updated HDFS-7858:
------------------------------
Attachment: HDFS-7858.7.patch
Thanks [~arpitagarwal] and [~jingzhao] for your reviews.
Uploading a patch addressing your suggestions.
w.r.t. using CompletionService:
Yup.. thanks, it did make the implementation more readable.
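To illustrate the idea, here is a minimal sketch of hedging over an ExecutorCompletionService; the names ({{HedgedInvoker}}, {{invokeFirstSuccessful}}, {{proxies}}) are hypothetical stand-ins for the provider's state and the intercepted call, not the actual patch:
{code}
import java.lang.reflect.Method;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hedge an RPC over all configured proxies and return the first
// successful result.
public class HedgedInvoker<T> {
  public Object invokeFirstSuccessful(final List<T> proxies,
      final Method method, final Object[] args) throws Exception {
    ExecutorService executor = Executors.newFixedThreadPool(proxies.size());
    CompletionService<Object> cs = new ExecutorCompletionService<>(executor);
    try {
      for (final T proxy : proxies) {
        cs.submit(new Callable<Object>() {
          @Override
          public Object call() throws Exception {
            return method.invoke(proxy, args);  // forward the RPC to one NN
          }
        });
      }
      Exception last = null;
      for (int i = 0; i < proxies.size(); i++) {
        try {
          // take() hands back futures in completion order, so the fastest
          // responder (normally the active NN) wins; standbys fail fast.
          return cs.take().get();
        } catch (Exception e) {
          last = e;  // remember the failure and wait for the other proxies
        }
      }
      throw last != null ? last : new IllegalStateException("no proxies");
    } finally {
      executor.shutdownNow();  // cancel the losing, still-running calls
    }
  }
}
{code}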
bq. I didn't understand the call to super.performFailover in
RequestHedgingProxyProvider#getProxy.
Yeah.. I wanted to increment the proxy index. Agreed, it does look out of
place. I've created an explicit method to make it more readable.
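For reference, the shape of the change (the names here are hypothetical, not necessarily what the patch uses):
{code}
import java.util.List;

// Sketch: the index bump that previously hid inside a
// super.performFailover() call from getProxy() gets its own
// descriptively named method.
class ProxyIndexSketch<T> {
  private final List<T> proxies;
  private int currentIndex = 0;

  ProxyIndexSketch(List<T> proxies) {
    this.proxies = proxies;
  }

  // States the intent directly instead of borrowing failover semantics.
  void incrementProxyIndex() {
    currentIndex = (currentIndex + 1) % proxies.size();
  }

  T currentProxy() {
    return proxies.get(currentIndex);
  }
}
{code}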
bq. For RequestHedgingProxyProvider#performFailover, if the original
successfulProxy is not null, we can exclude it for the next time retry.
So, in the case of the RequestHedgingProxyProvider, {{performFailover}} will
be called only if ALL the proxies have failed (with retry/failover_and_retry..),
in which case the next attempt will again send the request to all the
namenodes, so I don't think it makes sense to exclude any of them.
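To make that flow concrete, a hypothetical retry loop around the hedged call from the sketch above (again, the names and structure are mine, not the patch's):
{code}
import java.lang.reflect.Method;
import java.util.List;

// performFailover() is reached only after EVERY proxy has failed, and
// the next attempt fans out to all namenodes again, which is why
// excluding the previously successful proxy would change nothing.
class RetryLoopSketch {
  Object invokeWithRetries(HedgedInvoker<Object> invoker,
      List<Object> proxies, Method method, Object[] args,
      int maxAttempts) throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return invoker.invokeFirstSuccessful(proxies, method, args);
      } catch (Exception allProxiesFailed) {
        last = allProxiesFailed;
        performFailover();  // all NNs failed; next attempt hedges to all
      }
    }
    throw last != null ? last : new IllegalStateException("no attempts");
  }

  void performFailover() {
    // placeholder: the provider would reset per-attempt bookkeeping here
  }
}
{code}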
bq. new LinkedList<RetryAction> - explicit type argument redundant.
Oh.. I was thinking we should keep trunk Java 7 compilable?
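(For what it's worth, the diamond operator itself was introduced in Java 7, so dropping the explicit type argument should still compile on a Java 7 trunk; a small standalone example:)
{code}
import java.util.LinkedList;
import java.util.List;

public class DiamondExample {
  public static void main(String[] args) {
    // Explicit type argument: valid everywhere, but redundant.
    List<String> a = new LinkedList<String>();
    // Diamond operator: accepted by javac from -source 7 onward.
    List<String> b = new LinkedList<>();
    b.add("ok on Java 7");
    System.out.println(b);
  }
}
{code}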
> Improve HA Namenode Failover detection on the client
> ----------------------------------------------------
>
> Key: HDFS-7858
> URL: https://issues.apache.org/jira/browse/HDFS-7858
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Arun Suresh
> Assignee: Arun Suresh
> Labels: BB2015-05-TBR
> Attachments: HDFS-7858.1.patch, HDFS-7858.2.patch, HDFS-7858.2.patch,
> HDFS-7858.3.patch, HDFS-7858.4.patch, HDFS-7858.5.patch, HDFS-7858.6.patch,
> HDFS-7858.7.patch
>
>
> In an HA deployment, clients are configured with the hostnames of both the
> Active and Standby Namenodes. Clients will first try one of the NNs
> (non-deterministically), and if it is a standby NN, it will respond to the
> client to retry the request on the other Namenode.
> If the client happens to talk to the Standby first, and the Standby is
> undergoing GC or is otherwise busy, those clients might not get a response
> soon enough to try the other NN.
> Proposed approach to solve this:
> 1) Since ZooKeeper is already used as the failover controller, the clients
> could talk to ZK and find out which is the active Namenode before contacting
> it (a minimal sketch follows this list).
> 2) Long-lived DFSClients would have a ZK watch configured which fires when
> there is a failover, so they do not have to query ZK every time to find out
> the active NN.
> 3) Clients can also cache the last active NN in the user's home directory
> (~/.lastNN) so that short-lived clients can try that Namenode first before
> querying ZK.
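> To sketch steps 1 and 2 (the breadcrumb znode path, connect string, and
> record handling below are assumptions for illustration; the real path and
> record format are owned by the ZKFC):
> {code}
> import org.apache.zookeeper.WatchedEvent;
> import org.apache.zookeeper.Watcher;
> import org.apache.zookeeper.ZooKeeper;
>
> // Read the active NN record written by the failover controller and
> // keep a watch armed so long-lived clients learn about failovers
> // without polling ZK on every request.
> public class ActiveNNLookup implements Watcher {
>   private static final String ACTIVE_ZNODE =
>       "/hadoop-ha/mycluster/ActiveBreadCrumb";  // assumed path
>   private final ZooKeeper zk;
>   private volatile byte[] lastActiveInfo;
>
>   public ActiveNNLookup(String zkQuorum) throws Exception {
>     this.zk = new ZooKeeper(zkQuorum, 15000, this);
>   }
>
>   // Fetch the current active NN record and re-arm the watch.
>   public byte[] refreshActive() throws Exception {
>     lastActiveInfo = zk.getData(ACTIVE_ZNODE, this, null);
>     return lastActiveInfo;
>   }
>
>   @Override
>   public void process(WatchedEvent event) {
>     if (event.getType() == Event.EventType.NodeDataChanged) {
>       try {
>         refreshActive();  // failover happened; re-read the active NN
>       } catch (Exception e) {
>         // sketch only: real code would log and schedule a retry
>       }
>     }
>   }
> }
> {code}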