[ https://issues.apache.org/jira/browse/YARN-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068834#comment-14068834 ]
Karthik Kambatla commented on YARN-2273:
----------------------------------------

Thanks Wei. A few comments on the latest patch, some not specific to the changes in this patch.
# Can continuousSchedulingAttempt be package-private?
# We should log the following at ERROR level:
{code}
} catch (Throwable ex) {
  LOG.warn("Error while attempting scheduling for node " + node +
      ": " + ex.toString(), ex);
}
{code}
# When the scheduling thread is interrupted, shouldn't we actually stop the thread? What are the cases where we want to ignore an interruption?
# Update the log message in the catch block of InterruptedException to "Continuous scheduling thread interrupted." Maybe add "Exiting." if we do decide to shut the thread down.
# In the test, do we need to call FS#reinitialize()?
# In the test, should we catch all exceptions instead of just NPE?

> NPE in ContinuousScheduling Thread crippled RM after DN flap
> ------------------------------------------------------------
>
>                 Key: YARN-2273
>                 URL: https://issues.apache.org/jira/browse/YARN-2273
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler, resourcemanager
>    Affects Versions: 2.3.0, 2.4.1
>        Environment: cdh5.0.2 wheezy
>            Reporter: Andy Skelton
>        Attachments: YARN-2273-replayException.patch, YARN-2273.patch, YARN-2273.patch, YARN-2273.patch
>
> One DN experienced memory errors and entered a cycle of rebooting and rejoining the cluster.
> After the second time the node went away, the RM produced this:
> {code}
> 2014-07-09 21:47:36,571 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1404858438119_4352_000001 released container container_1404858438119_4352_01_000004 on node: host: node-A16-R09-19.hadoop.dfw.wordpress.com:8041 #containers=0 available=<memory:8192, vCores:8> used=<memory:0, vCores:0> with event: KILL
> 2014-07-09 21:47:36,571 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Removed node node-A16-R09-19.hadoop.dfw.wordpress.com:8041 cluster capacity: <memory:335872, vCores:328>
> 2014-07-09 21:47:36,571 ERROR org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[ContinuousScheduling,5,main] threw an Exception.
> java.lang.NullPointerException
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1044)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$NodeAvailableResourceComparator.compare(FairScheduler.java:1040)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:329)
>         at java.util.TimSort.sort(TimSort.java:203)
>         at java.util.TimSort.sort(TimSort.java:173)
>         at java.util.Arrays.sort(Arrays.java:659)
>         at java.util.Collections.sort(Collections.java:217)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousScheduling(FairScheduler.java:1012)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.access$600(FairScheduler.java:124)
>         at org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$2.run(FairScheduler.java:1306)
>         at java.lang.Thread.run(Thread.java:744)
> {code}
> A few cycles later YARN was crippled. The RM was running and jobs could be submitted, but containers were not assigned and no progress was made.
> Restarting the RM resolved it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
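The stack trace above shows NodeAvailableResourceComparator failing inside Collections.sort: the comparator looks up per-node state while the continuous-scheduling thread iterates, so when a flapping DN is removed concurrently the lookup returns null, the sort throws NPE, and the uncaught exception kills the ContinuousScheduling thread. Below is a minimal, self-contained sketch of that race and of a snapshot-based defensive sort. The class, map, and method names are hypothetical stand-ins, not the actual FairScheduler code or the patch on this JIRA.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ComparatorNpeDemo {
    // Hypothetical stand-in for the scheduler's per-node available resources.
    static final Map<String, Integer> available = new ConcurrentHashMap<>();

    // Fragile pattern: the comparator dereferences the live map, so a
    // concurrent remove() makes get() return null and unboxing throws NPE
    // from inside TimSort, just like the trace above.
    static int compareViaLiveMap(String a, String b) {
        return Integer.compare(available.get(b), available.get(a));
    }

    // Defensive variant: snapshot the entries first, then sort the snapshot.
    // Nodes removed mid-schedule simply aren't in the snapshot.
    static List<Map.Entry<String, Integer>> sortedSnapshot() {
        List<Map.Entry<String, Integer>> snap = new ArrayList<>(available.entrySet());
        snap.sort((x, y) -> Integer.compare(y.getValue(), x.getValue()));
        return snap;
    }

    public static void main(String[] args) {
        available.put("node-1", 8192);
        available.put("node-2", 4096);

        List<String> ids = new ArrayList<>(available.keySet());
        available.remove("node-2");  // simulate the DN flapping mid-schedule

        try {
            ids.sort(ComparatorNpeDemo::compareViaLiveMap);
            System.out.println("live-map sort survived");
        } catch (NullPointerException e) {
            System.out.println("live-map sort threw NPE");
        }

        // The snapshot-based sort only sees nodes that still exist.
        System.out.println("snapshot size = " + sortedSnapshot().size());
    }
}
```

Even with the snapshot, wrapping the scheduling loop body in a catch-and-log (as discussed in the comments above) is still worthwhile, since any other Throwable would otherwise reach the UncaughtExceptionHandler and silently stop continuous scheduling.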