[
https://issues.apache.org/jira/browse/YARN-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154128#comment-14154128
]
Jian He commented on YARN-2617:
-------------------------------
Thanks for updating!
- {{containerStatuses.add(status);}} is moved to after the
{{status.getContainerState() == ContainerState.COMPLETE}} check. In some cases
(e.g. NM decommission), I think we still need to send the completed containers
across so that the RM knows these containers have completed (see the sketch
after this list).
- We may not need to change {{getNMContainerStatuses}}, as this method is
invoked only once, on re-register. I'm afraid that not sending the whole set of
containers for recovery will hit some other race conditions.
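To make the first point concrete, here is a rough sketch of the heartbeat path
I have in mind. This is not the actual {{NodeStatusUpdaterImpl}} code;
{{context}} stands for the NM {{Context}}, and {{recentlyCompleted}} is a
hypothetical {{Set<ContainerId>}} used only to stop re-sending a status once
the RM has acknowledged it:
{code}
// Illustrative only -- not the actual NodeStatusUpdaterImpl#getContainerStatuses.
// "context" is the NM Context; "recentlyCompleted" is a hypothetical
// Set<ContainerId> used to avoid re-sending a status the RM has already acked.
List<ContainerStatus> containerStatuses = new ArrayList<ContainerStatus>();
for (Container container : context.getContainers().values()) {
  ContainerStatus status = container.cloneAndGetContainerStatus();
  if (status.getState() != ContainerState.COMPLETE) {
    containerStatuses.add(status);
    continue;
  }
  // COMPLETE container: even if its application is no longer in
  // context.getApplications() (e.g. on NM decommission), still send the
  // status so the RM learns the container finished.
  if (!recentlyCompleted.contains(status.getContainerId())) {
    containerStatuses.add(status);
  }
}
{code}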
> NM does not need to send finished container whose APP is not running to RM
> --------------------------------------------------------------------------
>
> Key: YARN-2617
> URL: https://issues.apache.org/jira/browse/YARN-2617
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.6.0
> Reporter: Jun Gong
> Assignee: Jun Gong
> Fix For: 2.6.0
>
> Attachments: YARN-2617.2.patch, YARN-2617.3.patch, YARN-2617.patch
>
>
> We ([~chenchun]) were testing RM work-preserving restart and found the
> following logs when we ran a simple MapReduce job, "PI". The NM continuously
> reported completed containers whose application, and the AM itself, had
> already finished.
> {code}
> 2014-09-26 17:00:42,228 INFO
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler:
> Null container completed...
> 2014-09-26 17:00:42,228 INFO
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler:
> Null container completed...
> 2014-09-26 17:00:43,230 INFO
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler:
> Null container completed...
> 2014-09-26 17:00:43,230 INFO
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler:
> Null container completed...
> 2014-09-26 17:00:44,233 INFO
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler:
> Null container completed...
> 2014-09-26 17:00:44,233 INFO
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler:
> Null container completed...
> {code}
> With the patch for YARN-1372, ApplicationImpl on the NM is supposed to
> guarantee that already completed applications are cleaned up. However, it only
> removes the appId from 'app.context.getApplications()' when ApplicationImpl
> receives the event
> 'ApplicationEventType.APPLICATION_LOG_HANDLING_FINISHED', and the NM might not
> receive this event for a long time, or might never receive it at all.
> * NonAggregatingLogHandler waits for
> YarnConfiguration.NM_LOG_RETAIN_SECONDS (3 * 60 * 60 seconds by default)
> before the deletion of the application logs is scheduled and the event is sent
> (see the sketch after this list).
> * LogAggregationService might fail (e.g. if the user does not have HDFS write
> permission), in which case it never sends the event.
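> A simplified sketch of that timing (not the real NonAggregatingLogHandler
> code; 'sched', 'conf', 'dispatcher', 'appId' and deleteApplicationLogs() are
> placeholders): the APPLICATION_LOG_HANDLING_FINISHED event, which is what
> finally lets ApplicationImpl remove the appId from the NM context, is only
> dispatched after the retain delay has elapsed.
> {code}
> // Simplified sketch only -- not the real NonAggregatingLogHandler.
> // 'sched' (a ScheduledExecutorService), 'conf', 'dispatcher', 'appId' and
> // deleteApplicationLogs() are placeholders for the real NM objects.
> long deleteDelaySeconds = conf.getLong(
>     YarnConfiguration.NM_LOG_RETAIN_SECONDS,
>     YarnConfiguration.DEFAULT_NM_LOG_RETAIN_SECONDS);  // 3 * 60 * 60 by default
> sched.schedule(new Runnable() {
>   @Override
>   public void run() {
>     // Logs are deleted and, only then, the event that lets ApplicationImpl
>     // remove the appId from context.getApplications() is dispatched.
>     deleteApplicationLogs(appId);
>     dispatcher.getEventHandler().handle(new ApplicationEvent(appId,
>         ApplicationEventType.APPLICATION_LOG_HANDLING_FINISHED));
>   }
> }, deleteDelaySeconds, TimeUnit.SECONDS);
> {code}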