[ https://issues.apache.org/jira/browse/HDDS-959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16737009#comment-16737009 ]

Lokesh Jain commented on HDDS-959:
----------------------------------

[~shashikant] Thanks for reviewing the patch! v3 patch addresses your comments.
|In case of preallocation of blocks, there is a possibility that multiple 
blocks write on the same pipeline using the same XceiverClientSpi object.|

The problem is that, currently, the xceiverClient is initialized during 
preallocation itself. The v3 patch changes the logic to initialize the client 
only during the data write.
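To illustrate the change described above, here is a minimal sketch of the lazy-initialization pattern: the client is acquired on the first write rather than at block preallocation time. All names here (the class, `acquireClient`, the `Object` stand-in for `XceiverClientSpi`) are illustrative assumptions, not the actual patch code.

```java
// Hypothetical sketch: defer client acquisition from block preallocation
// to the first data write, so preallocated blocks on the same pipeline
// do not all grab a client up front.
class BlockOutputStreamEntry {
    private Object xceiverClient;        // stand-in for XceiverClientSpi
    private final String pipelineId;

    BlockOutputStreamEntry(String pipelineId) {
        this.pipelineId = pipelineId;    // no client acquired here
    }

    void write(byte[] data) {
        if (xceiverClient == null) {
            // acquire the client only when data is actually written
            xceiverClient = acquireClient(pipelineId);
        }
        // ... write data through xceiverClient ...
    }

    private Object acquireClient(String id) {
        // placeholder for an XceiverClientManager lookup
        return new Object();
    }

    boolean isClientInitialized() {
        return xceiverClient != null;
    }
}
```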
|we can maintain a static array/list of Exception Classes and use it |

The v3 patch introduces a new function, KeyOutputStream#checkForRetryFailure, 
which checks for these specific exceptions.
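A check of this kind can be sketched as follows: walk the exception's cause chain and test each cause against a static list of fatal exception classes. This is a hedged illustration, not the patch code; the two exception classes are local stand-ins for Ratis's RaftRetryFailureException and AlreadyClosedException.

```java
import java.util.Arrays;
import java.util.List;

class RetryFailureCheck {
    // local stand-ins for the real Ratis exception types
    static class RaftRetryFailureException extends RuntimeException {}
    static class AlreadyClosedException extends RuntimeException {}

    // static list of exception classes that make the raft client unusable
    private static final List<Class<? extends Exception>> RETRY_FAILURE_EXCEPTIONS =
        Arrays.asList(RaftRetryFailureException.class, AlreadyClosedException.class);

    /** Returns true if t, or any exception in its cause chain, is a fatal retry failure. */
    static boolean checkForRetryFailure(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            for (Class<? extends Exception> cls : RETRY_FAILURE_EXCEPTIONS) {
                if (cls.isInstance(cur)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

Walking the cause chain matters because the fatal exception may arrive wrapped in an IOException or ExecutionException rather than thrown directly.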

> KeyOutputStream should handle retry failures
> --------------------------------------------
>
>                 Key: HDDS-959
>                 URL: https://issues.apache.org/jira/browse/HDDS-959
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Lokesh Jain
>            Assignee: Lokesh Jain
>            Priority: Major
>         Attachments: HDDS-959.001.patch, HDDS-959.002.patch, 
> HDDS-959.003.patch
>
>
> With the ratis version updated to 0.4.0-a8c4ca0-SNAPSHOT, retry failures are 
> fatal for a raft client. If an operation in the raft client does not succeed 
> after the maximum number of retries (RaftRetryFailureException), all subsequent 
> operations fail with AlreadyClosedException. This jira aims to handle 
> such exceptions. Since we maintain a cache of clients in 
> XceiverClientManager, the corresponding client needs to be invalidated in the 
> cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
