[ https://issues.apache.org/jira/browse/YARN-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059084#comment-14059084 ]

Tsuyoshi OZAWA commented on YARN-2273:
--------------------------------------

Hi [~ywskycn], thank you for taking this JIRA. It looks like a race condition 
between {{new ArrayList<NodeId>(nodes.keySet());}} and {{Collections.sort}}: 
a node can be removed from {{nodes}} after the id list is copied but before 
the sort finishes, so the comparator ends up looking up a node that is no 
longer in the map and throws the NPE. One straightforward way to fix it is 
to move {{new ArrayList<NodeId>(nodes.keySet());}} into the synchronized 
block. I think that is the simpler approach, but one concern is that holding 
the lock longer could degrade performance. Wei, [~sandyr], what do you think?
{code}
  private void continuousScheduling() {
    while (true) {
      List<NodeId> nodeIdList = new ArrayList<NodeId>(nodes.keySet());
      // Sort the nodes by space available on them, so that we offer
      // containers on emptier nodes first, facilitating an even spread. This
      // requires holding the scheduler lock, so that the space available on a
      // node doesn't change during the sort.
      synchronized (this) {
        Collections.sort(nodeIdList, nodeAvailableResourceComparator);
      }
      ...
    }
  }
{code}
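
A minimal sketch of that change, assuming the rest of the loop body stays as 
it is today (only the list copy moves inside the existing synchronized block):
{code}
  private void continuousScheduling() {
    while (true) {
      List<NodeId> nodeIdList;
      // Copy the node ids and sort them under the same lock, so a node
      // removed concurrently (e.g. when a DN flaps) cannot leave the
      // comparator looking up an id that is no longer in the nodes map.
      synchronized (this) {
        nodeIdList = new ArrayList<NodeId>(nodes.keySet());
        Collections.sort(nodeIdList, nodeAvailableResourceComparator);
      }
      ...
    }
  }
{code}
The trade-off is that the lock is now also held for the list copy, which is 
O(cluster size) per iteration; an alternative would be to make the comparator 
tolerate a missing node, but that changes sort behavior rather than removing 
the race.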

> NPE in ContinuousScheduling Thread crippled RM after DN flap
> ------------------------------------------------------------
>
>                 Key: YARN-2273
>                 URL: https://issues.apache.org/jira/browse/YARN-2273
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler, resourcemanager
>    Affects Versions: 2.3.0
>         Environment: cdh5.0.2 wheezy
>            Reporter: Andy Skelton
>         Attachments: YARN-2273.patch
>
>
> One DN experienced memory errors and entered a cycle of rebooting and 
> rejoining the cluster. After the second time the node went away, the RM 
> produced this:
> {code}
> 2014-07-09 21:47:36,571 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Application attempt appattempt_1404858438119_4352_000001 released container 
> container_1404858438119_4352_01_000004 on node: host: 
> node-A16-R09-19.hadoop.dfw.wordpress.com:8041 #containers=0 
> available=<memory:8192, vCores:8> used=<memory:0, vCores:0> with event: KILL
> 2014-07-09 21:47:36,571 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Removed node node-A16-R09-19.hadoop.dfw.wordpress.com:8041 cluster capacity: 
> <memory:335872, vCores:328>
> 2014-07-09 21:47:36,571 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[ContinuousScheduling,5,main] threw an Exception.
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1044)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1040)
>       at java.util.TimSort.countRunAndMakeAscending(TimSort.java:329)
>       at java.util.TimSort.sort(TimSort.java:203)
>       at java.util.TimSort.sort(TimSort.java:173)
>       at java.util.Arrays.sort(Arrays.java:659)
>       at java.util.Collections.sort(Collections.java:217)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousScheduling(FairScheduler.java:1012)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.access$600(FairScheduler.java:124)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$2.run(FairScheduler.java:1306)
>       at java.lang.Thread.run(Thread.java:744)
> {code}
> A few cycles later YARN was crippled. The RM was running and jobs could be 
> submitted but containers were not assigned and no progress was made. 
> Restarting the RM resolved it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
