[ https://issues.apache.org/jira/browse/YARN-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059186#comment-14059186 ]

Tsuyoshi OZAWA commented on YARN-2273:
--------------------------------------

Makes sense.

One additional point: should we add a null check at the following point in 
{{continuousScheduling}} to avoid an NPE? IIUC, {{getFSSchedulerNode(nodeId)}} can 
return null in this case.
{code}
-            if (Resources.fitsIn(minimumAllocation,
+            if (node != null && Resources.fitsIn(minimumAllocation,
                    node.getAvailableResource())) {
              attemptScheduling(node);
            }
{code}
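
For context, here is a rough sketch of how that guard might sit in the 
{{continuousScheduling}} loop. This is only an illustration, not the actual 
FairScheduler code: the surrounding loop and names ({{nodeIdList}}, {{nodes}}, 
{{nodeAvailableResourceComparator}}) are assumptions based on the stack trace 
quoted below; only {{getFSSchedulerNode}}, {{Resources.fitsIn}}, 
{{minimumAllocation}}, {{getAvailableResource}} and {{attemptScheduling}} appear 
in this issue.
{code}
// Sketch only -- not the actual FairScheduler implementation.
// Assumes continuousScheduling() sorts a snapshot of node IDs by available
// resources and then tries to schedule on each node, as the stack trace
// quoted below suggests.
List<NodeId> nodeIdList = new ArrayList<NodeId>(nodes.keySet());
Collections.sort(nodeIdList, nodeAvailableResourceComparator);

for (NodeId nodeId : nodeIdList) {
  FSSchedulerNode node = getFSSchedulerNode(nodeId);
  // The node can be removed (e.g. a flapping DN) between taking the
  // snapshot and reaching this point, so getFSSchedulerNode(nodeId) may
  // return null here -- hence the proposed null check.
  if (node != null && Resources.fitsIn(minimumAllocation,
      node.getAvailableResource())) {
    attemptScheduling(node);
  }
}
{code}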

> NPE in ContinuousScheduling Thread crippled RM after DN flap
> ------------------------------------------------------------
>
>                 Key: YARN-2273
>                 URL: https://issues.apache.org/jira/browse/YARN-2273
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler, resourcemanager
>    Affects Versions: 2.3.0
>         Environment: cdh5.0.2 wheezy
>            Reporter: Andy Skelton
>         Attachments: YARN-2273.patch
>
>
> One DN experienced memory errors and entered a cycle of rebooting and 
> rejoining the cluster. After the second time the node went away, the RM 
> produced this:
> {code}
> 2014-07-09 21:47:36,571 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Application attempt appattempt_1404858438119_4352_000001 released container 
> container_1404858438119_4352_01_000004 on node: host: 
> node-A16-R09-19.hadoop.dfw.wordpress.com:8041 #containers=0 
> available=<memory:8192, vCores:8> used=<memory:0, vCores:0> with event: KILL
> 2014-07-09 21:47:36,571 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: 
> Removed node node-A16-R09-19.hadoop.dfw.wordpress.com:8041 cluster capacity: 
> <memory:335872, vCores:328>
> 2014-07-09 21:47:36,571 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[ContinuousScheduling,5,main] threw an Exception.
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1044)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1040)
>       at java.util.TimSort.countRunAndMakeAscending(TimSort.java:329)
>       at java.util.TimSort.sort(TimSort.java:203)
>       at java.util.TimSort.sort(TimSort.java:173)
>       at java.util.Arrays.sort(Arrays.java:659)
>       at java.util.Collections.sort(Collections.java:217)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousScheduling(FairScheduler.java:1012)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.access$600(FairScheduler.java:124)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$2.run(FairScheduler.java:1306)
>       at java.lang.Thread.run(Thread.java:744)
> {code}
> A few cycles later YARN was crippled. The RM was running and jobs could be 
> submitted, but containers were not assigned and no progress was made. 
> Restarting the RM resolved it.
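
The NPE in the trace above is thrown inside 
{{NodeAvailableResourceComparator.compare}} while {{Collections.sort}} is running, 
which suggests {{getFSSchedulerNode(nodeId)}} returned null for the node that had 
just been removed. Purely as an illustration of that failure point (this is not 
the attached YARN-2273.patch, and the comparison body below is a simplified 
stand-in), a null-tolerant comparator could look roughly like this:
{code}
// Illustration only -- not the attached patch. A node that disappears
// mid-sort is treated as having the least available resources so it
// sorts last instead of throwing an NPE.
private class NodeAvailableResourceComparator implements Comparator<NodeId> {
  @Override
  public int compare(NodeId n1, NodeId n2) {
    FSSchedulerNode node1 = getFSSchedulerNode(n1);
    FSSchedulerNode node2 = getFSSchedulerNode(n2);
    if (node1 == null || node2 == null) {
      return node1 == null ? (node2 == null ? 0 : 1) : -1;
    }
    // Simplified stand-in: order by available memory, descending. The real
    // comparator's ordering criteria are not shown in this issue.
    return Integer.compare(node2.getAvailableResource().getMemory(),
        node1.getAvailableResource().getMemory());
  }
}
{code}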



