Robert Chansler updated HADOOP-3989:
------------------------------------
Component/s: dfs
> Secondary Namenode: Limit number of retries when fsimage/edits transfer fails
> -----------------------------------------------------------------------------
>
> Key: HADOOP-3989
> URL: https://issues.apache.org/jira/browse/HADOOP-3989
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Koji Noguchi
> Priority: Minor
>
> When hitting HADOOP-3980, the secondary namenode kept pulling gigabytes of
> fsimage/edits every 10 minutes, which slowed the namenode down significantly.
> When the namenode is down, I'd like the secondary namenode to keep retrying
> the connection. However, when the pull/push of large files keeps failing, I'd
> like an upper limit on the number of retries. After the limit is reached,
> either shut down or sleep for _fs.checkpoint.period_ seconds.
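
A minimal sketch of the proposed behavior, for illustration only: the class name CheckpointRetrySketch, the constant MAX_TRANSFER_RETRIES, and the method doCheckpointTransfer() are hypothetical stand-ins, not actual Hadoop Core identifiers. Only the config key _fs.checkpoint.period_ comes from the issue itself. The idea is to cap consecutive failed fsimage/edits transfers, then back off for a full checkpoint period instead of re-pulling gigabytes every few minutes.
{code:java}
import java.io.IOException;

// Hypothetical sketch of the retry limit proposed in this issue.
// None of these identifiers are taken from the Hadoop Core source.
public class CheckpointRetrySketch {
    // Assumed cap on consecutive failed fsimage/edits transfers.
    private static final int MAX_TRANSFER_RETRIES = 3;

    // Value of fs.checkpoint.period, in seconds.
    private final long checkpointPeriodSecs;

    public CheckpointRetrySketch(long checkpointPeriodSecs) {
        this.checkpointPeriodSecs = checkpointPeriodSecs;
    }

    /**
     * Attempt the fsimage/edits transfer at most MAX_TRANSFER_RETRIES
     * times; if every attempt fails, sleep for fs.checkpoint.period
     * seconds rather than retrying again right away.
     */
    public void transferWithRetryLimit() throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_TRANSFER_RETRIES; attempt++) {
            try {
                doCheckpointTransfer(); // placeholder for the real pull/push
                return;                 // success: no further retries needed
            } catch (IOException e) {
                System.err.println("Transfer attempt " + attempt + " of "
                    + MAX_TRANSFER_RETRIES + " failed: " + e.getMessage());
            }
        }
        // Retry budget exhausted: back off for a full checkpoint period
        // so the namenode is not asked to stream large files again soon.
        Thread.sleep(checkpointPeriodSecs * 1000L);
    }

    // Stub standing in for the actual fsimage/edits transfer logic.
    private void doCheckpointTransfer() throws IOException {
        throw new IOException("stub: real transfer logic lives in Hadoop");
    }
}
{code}
The alternative mentioned in the description, shutting the secondary namenode down outright, would replace the Thread.sleep() call with a clean process exit.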