[
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13709972#comment-13709972
]
Konstantin Shvachko commented on HDFS-4942:
-------------------------------------------
Using clientId for multiplexing RPC connections is an interesting use case.
Check the uniqueness guarantees though.
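To make the uniqueness concern concrete, here is an illustrative sketch (not the actual HDFS-4942 code) of how a client could mint a 16-byte clientId from a random UUID, so uniqueness rests entirely on the UUID's randomness:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Illustrative sketch only: derive a 16-byte clientId from a random UUID.
// Collision probability is negligible for practical client counts, but it
// is probabilistic, not a hard guarantee.
class ClientId {
    static byte[] newId() {
        UUID u = UUID.randomUUID();
        ByteBuffer b = ByteBuffer.allocate(16);
        b.putLong(u.getMostSignificantBits());
        b.putLong(u.getLeastSignificantBits());
        return b.array();
    }
}
```

If two clients ever drew the same id, their retry-cache entries would collide, which is why the guarantee deserves scrutiny.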
> The current solution will be useful for all the other applications
Is that a hypothetical opportunity, or do you have particular use cases in mind
for Yarn? It would be good to know.
>> avoid incompatible RPC changes that affect sub-projects
> I am not sure what you mean by this.
I mean that you are building a retry cache for HDFS while making RPC changes
that are incompatible for all the other projects (rather than for HDFS only).
So I am trying to understand what value it brings to them.
> Add retry cache support in Namenode
> -----------------------------------
>
> Key: HDFS-4942
> URL: https://issues.apache.org/jira/browse/HDFS-4942
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: ha, namenode
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Attachments: HDFSRetryCache.pdf
>
>
> In the current HA mechanism with FailoverProxyProvider, and in non-HA setups
> with RetryProxy, a request is retried from the RPC layer. If the retried
> request has already been processed at the namenode, the subsequent attempts
> fail for non-idempotent operations such as create, append, delete, and
> rename. This causes application failures during HA failover, network issues,
> etc.
> This jira proposes adding a retry cache at the namenode to handle these
> failures. More details are in the comments.
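The mechanism described above can be sketched minimally as a cache keyed by (clientId, callId): on a retry the namenode returns the cached response instead of re-executing the non-idempotent operation. This is an assumption-laden illustration, not the patch attached to this jira:

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a namenode-side retry cache keyed by (clientId, callId).
// A first attempt misses the cache, executes, and records its response;
// a retry of the same call hits the cache and gets the saved response
// instead of failing (e.g. "file already exists" on a retried create).
class RetryCache {
    static final class Key {
        final String clientId; // unique per client (e.g. a UUID)
        final int callId;      // increases with each call from that client
        Key(String clientId, int callId) {
            this.clientId = clientId;
            this.callId = callId;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return callId == k.callId && clientId.equals(k.clientId);
        }
        @Override public int hashCode() {
            return Objects.hash(clientId, callId);
        }
    }

    private final Map<Key, Object> cache = new ConcurrentHashMap<>();

    /** Returns the cached response for a retry, or null on a first attempt. */
    Object lookup(String clientId, int callId) {
        return cache.get(new Key(clientId, callId));
    }

    /** Records the response once the operation has been applied. */
    void record(String clientId, int callId, Object response) {
        cache.put(new Key(clientId, callId), response);
    }
}
```

A real cache would also need expiry and persistence across failover (so the new active namenode can answer retries), which this sketch omits.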