[
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698423#comment-13698423
]
Chris Nauroth commented on HDFS-4942:
-------------------------------------
The proposal looks good, and I'll be interested to see the analysis of the
individual RPC calls. A reminder on something that came up in offline
conversation: it appears we can annotate
{{ClientProtocol#getDataEncryptionKey}} as Idempotent. The call doesn't
appear to mutate state, and if a retry causes creation of multiple keys,
that shouldn't be a problem.
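To illustrate the mechanism (a simplified sketch, not Hadoop's actual source: the {{Idempotent}} annotation and {{ClientProtocol}} interface below are local stand-ins for the real ones), marking a protocol method idempotent is just a runtime-retained method annotation that the client-side retry machinery can inspect before deciding to re-send a failed call:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class IdempotentSketch {
  // Local stand-in for Hadoop's retry annotation; retained at runtime so
  // the RPC layer can discover it reflectively.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  public @interface Idempotent {}

  // Simplified stand-in for the real ClientProtocol interface.
  public interface ClientProtocol {
    // Safe to retry: creating an extra encryption key mutates no
    // client-visible state.
    @Idempotent
    String getDataEncryptionKey();
  }

  // How a retry policy might decide whether a failed call can be re-sent
  // without a retry cache: only annotated methods are retried blindly.
  public static boolean isIdempotent(Class<?> iface, String methodName) {
    try {
      return iface.getMethod(methodName).isAnnotationPresent(Idempotent.class);
    } catch (NoSuchMethodException e) {
      return false;
    }
  }
}
```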
{quote}
Given that we plan on adding a unique identifier to every RPC request, should
we get this change done before 2.1.0-beta rc2 is built? This way 2.1.0-beta
clients can utilize retry cache as well.
{quote}
+1 for this idea. Adding the UUID now would be a low-risk change.
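For concreteness, here is a minimal sketch of how a namenode-side retry cache keyed by the client's UUID plus a per-call id could behave (all class and method names here are hypothetical, not the patch's actual code): the first attempt executes the operation and caches its response, and a retry carrying the same (clientId, callId) replays the cached response instead of re-executing the non-idempotent operation.

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class RetryCacheSketch {
  // Cached responses, keyed by "clientId:callId". A real implementation
  // would also expire old entries and handle duplicate in-flight calls.
  private final Map<String, Object> responses = new ConcurrentHashMap<>();

  // Client side: a random 16-byte UUID generated once per client instance
  // and attached to every RPC request.
  public static byte[] newClientId() {
    UUID u = UUID.randomUUID();
    ByteBuffer bb = ByteBuffer.allocate(16);
    bb.putLong(u.getMostSignificantBits());
    bb.putLong(u.getLeastSignificantBits());
    return bb.array();
  }

  // Server side: execute a non-idempotent operation at most once per
  // (clientId, callId); retries receive the cached response.
  public Object execute(String clientId, int callId, Supplier<Object> op) {
    String key = clientId + ":" + callId;
    Object cached = responses.get(key);
    if (cached != null) {
      return cached; // retry of an already-processed request
    }
    Object result = op.get();
    responses.put(key, result);
    return result;
  }
}
```

With this, a retried create/append/delete/rename no longer fails spuriously: the second attempt simply observes the first attempt's result.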
> Add retry cache support in Namenode
> -----------------------------------
>
> Key: HDFS-4942
> URL: https://issues.apache.org/jira/browse/HDFS-4942
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ha, namenode
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Attachments: HDFSRetryCache.pdf
>
>
> In the current HA mechanism with FailoverProxyProvider, and in non-HA
> setups with RetryProxy, a request may be retried from the RPC layer. If the
> retried request has already been processed at the namenode, subsequent
> attempts fail for non-idempotent operations such as create, append, delete,
> and rename. This causes application failures during HA failover, network
> issues, etc. This jira proposes adding a retry cache at the namenode to
> handle these failures. More details are in the comments.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira