[
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13707492#comment-13707492
]
Konstantin Shvachko commented on HDFS-4942:
-------------------------------------------
I was thinking about it, and I feel that adding the retry-related fields and
flags at the RPC layer is not the best way, mostly because the retry logic is
intended for only a few HDFS methods, while the new field and flag would be
serialized and deserialized by everybody, including DataNodes, Balancers,
MapReduce, and YARN.
I think a better way would be to use <clientName + callId> as a key to index
the retry cache entries. This will
- constrain changes to HDFS only
- avoid incompatible RPC changes that affect sub-projects
- limit serialization overhead to only the methods involved in the retry.
This will require making clientName unique, as many have recently advocated.
Would that sound reasonable?
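To illustrate the idea, here is a minimal sketch of a namenode-side cache
indexed by <clientName + callId>. All names here (RetryCache, CacheEntry,
lookup, record) are illustrative only, not an actual HDFS API:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    /** Sketch of a retry cache keyed by <clientName, callId>. */
    public class RetryCache {

      /** Cached outcome of a previously executed non-idempotent call. */
      public static final class CacheEntry {
        final boolean success;
        final Object payload;   // e.g. the result to replay to the client

        CacheEntry(boolean success, Object payload) {
          this.success = success;
          this.payload = payload;
        }
      }

      /** Composite key; relies on clientName being unique per client. */
      private static final class Key {
        final String clientName;
        final int callId;

        Key(String clientName, int callId) {
          this.clientName = clientName;
          this.callId = callId;
        }

        @Override
        public boolean equals(Object o) {
          if (!(o instanceof Key)) return false;
          Key k = (Key) o;
          return callId == k.callId && clientName.equals(k.clientName);
        }

        @Override
        public int hashCode() {
          return 31 * clientName.hashCode() + callId;
        }
      }

      private final ConcurrentMap<Key, CacheEntry> entries =
          new ConcurrentHashMap<>();

      /** Returns the cached outcome for a retried call, or null. */
      public CacheEntry lookup(String clientName, int callId) {
        return entries.get(new Key(clientName, callId));
      }

      /** Records the outcome of a completed non-idempotent operation. */
      public void record(String clientName, int callId,
                         boolean success, Object payload) {
        entries.put(new Key(clientName, callId),
                    new CacheEntry(success, payload));
      }
    }

A non-idempotent handler such as create would call lookup() first and, on a
hit, replay the cached result instead of re-executing. Only the HDFS methods
that opt in pay the serialization and cache cost, which is the point of the
proposal.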
> Add retry cache support in Namenode
> -----------------------------------
>
> Key: HDFS-4942
> URL: https://issues.apache.org/jira/browse/HDFS-4942
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ha, namenode
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Attachments: HDFSRetryCache.pdf
>
>
> In the current HA mechanism with FailoverProxyProvider, and in non-HA setups
> with RetryProxy, requests are retried at the RPC layer. If a retried request
> has already been processed at the namenode, the subsequent attempts fail for
> non-idempotent operations such as create, append, delete, rename, etc. This
> causes application failures during HA failovers, network issues, etc. This
> jira proposes adding a retry cache at the namenode to handle these failures.
> More details in the comments.