[
https://issues.apache.org/jira/browse/MAPREDUCE-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15210196#comment-15210196
]
Laxman commented on MAPREDUCE-6659:
-----------------------------------
Please note that this issue happens with lost nodes (i.e., unreachable hosts).
An NM crash on a reachable host exhibits a different, expected retry behavior:
there the liveness configurations
(yarn.resourcemanager.container.liveness-monitor.interval-ms,
yarn.nm.liveness-monitor.expiry-interval-ms,
yarn.am.liveness-monitor.expiry-interval-ms) come into play as expected.
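For reference, a minimal sketch of how those settings can be inspected, assuming the standard org.apache.hadoop.conf.Configuration API with a yarn-site.xml on the classpath; the 600000 ms fallback below is an assumed default, not a value taken from this issue:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class LivenessSettings {
  public static void main(String[] args) {
    // Loads core-default/core-site; yarn-site.xml is added explicitly here.
    Configuration conf = new Configuration();
    conf.addResource("yarn-site.xml");

    // The three liveness settings that govern the reachable-host case.
    String[] keys = {
        "yarn.resourcemanager.container.liveness-monitor.interval-ms",
        "yarn.nm.liveness-monitor.expiry-interval-ms",
        "yarn.am.liveness-monitor.expiry-interval-ms"
    };
    for (String key : keys) {
      // Fall back to 600000 ms (10 minutes) when the key is not set explicitly
      // (assumed default, used here only for illustration).
      long ms = conf.getLong(key, 600000L);
      System.out.println(key + " = " + ms + " ms");
    }
  }
}
{code}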
> Mapreduce App master waits long to kill containers on lost nodes.
> -----------------------------------------------------------------
>
> Key: MAPREDUCE-6659
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6659
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: mr-am
> Affects Versions: 2.6.0
> Reporter: Laxman
>
> The MR Application Master waits a very long time to clean up and relaunch
> tasks on lost nodes. The wait time is actually 2.5 hours
> (ipc.client.connect.max.retries * ipc.client.connect.max.retries.on.timeouts
> * ipc.client.connect.timeout = 10 * 45 * 20 seconds = 9000 seconds = 2.5 hours).
> A similar issue in the RM-AM RPC protocol was fixed in YARN-3809.
> As was done in YARN-3809, we may need to introduce new configurations to
> control this RPC retry behavior.
> Also, I feel this total retry time should honor, and be capped at, the global
> task timeout (mapreduce.task.timeout = 600000 ms by default).
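A minimal sketch of the retry-time arithmetic quoted above, assuming the standard org.apache.hadoop.conf.Configuration API; the fallback values simply mirror the numbers in the description (10, 45, 20000 ms, 600000 ms) and are not verified against this issue:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConnectRetryBudget {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    long retries          = conf.getLong("ipc.client.connect.max.retries", 10L);
    long retriesOnTimeout = conf.getLong("ipc.client.connect.max.retries.on.timeouts", 45L);
    long connectTimeoutMs = conf.getLong("ipc.client.connect.timeout", 20000L);
    long taskTimeoutMs    = conf.getLong("mapreduce.task.timeout", 600000L);

    // Following the formula in the description:
    // retries * retries.on.timeouts * timeout = 10 * 45 * 20 s = 9000 s = 2.5 h.
    long worstCaseMs = retries * retriesOnTimeout * connectTimeoutMs;

    System.out.println("Worst-case connect wait: " + worstCaseMs / 1000 + " s ("
        + worstCaseMs / 3600000.0 + " h)");
    System.out.println("mapreduce.task.timeout:  " + taskTimeoutMs / 1000 + " s");
  }
}
{code}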