[ 
https://issues.apache.org/jira/browse/HDDS-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-3046:
---------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

> Fix Retry handling in ozone RPC Client
> --------------------------------------
>
>                 Key: HDDS-3046
>                 URL: https://issues.apache.org/jira/browse/HDDS-3046
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: OMHA, OMHATest, pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Right now, for all exceptions other than ServiceException, we use 
> FailOverOnNetworkException.
> This retry policy is created with 15 max failovers and 15 retries. 
>  
> {code:java}
> retryPolicyOnNetworkException.shouldRetry(
>  exception, retries, failovers, isIdempotentOrAtMostOnce);{code}
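> For reference, a minimal sketch of how such a policy is typically constructed with 
> Hadoop's RetryPolicies, assuming the 15/15 limits mentioned above; the sleep values 
> and exact arguments here are illustrative, not necessarily what the Ozone client uses:
> {code:java}
> import org.apache.hadoop.io.retry.RetryPolicies;
> import org.apache.hadoop.io.retry.RetryPolicy;
> 
> // Sketch: a failover-on-network-exception policy capped at 15 failovers and
> // 15 retries, as described above. The sleep values below are illustrative.
> RetryPolicy retryPolicyOnNetworkException =
>     RetryPolicies.failoverOnNetworkException(
>         RetryPolicies.TRY_ONCE_THEN_FAIL,
>         15,       // max failovers
>         15,       // max retries
>         500,      // failover sleep base (ms), illustrative
>         15000);   // failover sleep max (ms), illustrative
> {code}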
> *Two issues with this policy and call:*
>  # When shouldRetry returns the action FAILOVER_AND_RETRY, the client stays stuck 
> on the same OM and never fails over to the next OM, because 
> OMFailoverProxyProvider#performFailover() is a dummy call that does not perform 
> any failover (see the retry-loop sketch after this list).
>  # When ozone.client.failover.max.attempts is set to 15, we now have 2 policies 
> each set to 15, so in the worst case we retry 15*2 = 30 times. 
>  
>  
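> A minimal sketch of the kind of client-side retry loop this implies, assuming a 
> single consolidated policy and a failover proxy provider whose performFailover() 
> actually rotates to the next OM. Except for the names quoted above, the class, 
> method and variable names here are illustrative, not the actual Ozone implementation:
> {code:java}
> import org.apache.hadoop.io.retry.RetryPolicy;
> import org.apache.hadoop.io.retry.RetryPolicy.RetryAction;
> 
> public class RetryLoopSketch {
> 
>   /** Hypothetical stand-in for OMFailoverProxyProvider. */
>   interface FailoverProxyProviderSketch {
>     // In the sketch this must actually rotate to the next OM proxy,
>     // unlike the dummy performFailover() described in issue 1.
>     void performFailover(Object currentProxy);
>   }
> 
>   /** A single OM RPC attempt. */
>   interface OmCall<T> {
>     T call() throws Exception;
>   }
> 
>   public static <T> T invokeWithRetry(OmCall<T> call, RetryPolicy policy,
>       FailoverProxyProviderSketch proxyProvider) throws Exception {
>     int retries = 0;
>     int failovers = 0;
>     while (true) {
>       try {
>         return call.call();
>       } catch (Exception e) {
>         RetryAction action =
>             policy.shouldRetry(e, retries, failovers, true);
>         if (action.action == RetryAction.RetryDecision.FAILOVER_AND_RETRY) {
>           // Issue 1: the real performFailover() is a no-op, so the client
>           // keeps hitting the same OM. The sketch moves to the next proxy.
>           proxyProvider.performFailover(null);
>           failovers++;
>         } else if (action.action == RetryAction.RetryDecision.RETRY) {
>           retries++;
>         } else {
>           throw e;
>         }
>         // A single policy gives a single bound on total attempts (issue 2).
>         Thread.sleep(action.delayMillis);
>       }
>     }
>   }
> }
> {code}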



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
