[ https://issues.apache.org/jira/browse/YARN-6487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780528#comment-16780528 ]

Zhaohui Xin edited comment on YARN-6487 at 2/28/19 1:45 PM:
------------------------------------------------------------

Hi [~wilfreds]. Please correct me if I am wrong: when a large number of node 
heartbeats trigger heartbeat-driven scheduling, the continuous scheduling thread 
is starved because of lock contention.
{quote}The side effect is however that when a cluster grows (100+ nodes) the 
number of heartbeats that needed processing started interfering with the 
continuous scheduling thread and other internal threads. This does cause thread 
starvation and in the worst case scheduling comes to a standstill.
{quote}
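To make the contention concrete, here is a minimal, purely illustrative sketch (none of these class or variable names exist in YARN): many heartbeat-handler threads and one continuous-scheduling thread compete for a single scheduler lock, and under heavy heartbeat traffic the continuous thread gets only a small share of the acquisitions.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the starvation scenario described above:
// 8 "node heartbeat" threads and 1 "continuous scheduling" thread
// contend for one scheduler lock.
public class LockContentionSketch {
    static final ReentrantLock schedulerLock = new ReentrantLock(); // unfair by default
    static final AtomicInteger heartbeatPasses = new AtomicInteger();
    static final AtomicInteger continuousPasses = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        final long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(500);

        Runnable heartbeat = () -> {
            while (System.nanoTime() < deadline) {
                schedulerLock.lock();          // a node heartbeat triggers a scheduling pass
                try { heartbeatPasses.incrementAndGet(); }
                finally { schedulerLock.unlock(); }
            }
        };
        Thread continuous = new Thread(() -> {
            while (System.nanoTime() < deadline) {
                schedulerLock.lock();          // one continuous-scheduling pass
                try { continuousPasses.incrementAndGet(); }
                finally { schedulerLock.unlock(); }
                // simulate think-time between continuous passes
                try { TimeUnit.MICROSECONDS.sleep(50); }
                catch (InterruptedException e) { return; }
            }
        });

        Thread[] nodes = new Thread[8];        // stand-in for many NodeManagers
        for (int i = 0; i < nodes.length; i++) {
            nodes[i] = new Thread(heartbeat);
            nodes[i].start();
        }
        continuous.start();
        for (Thread t : nodes) t.join();
        continuous.join();

        System.out.println("heartbeat passes:  " + heartbeatPasses.get());
        System.out.println("continuous passes: " + continuousPasses.get());
    }
}
```

Running it shows heartbeat passes dwarfing continuous passes, which mirrors the effect at 100+ nodes: the heartbeat path keeps re-acquiring the lock, so the continuous thread rarely gets a turn.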


> FairScheduler: remove continuous scheduling (YARN-1010)
> -------------------------------------------------------
>
>                 Key: YARN-6487
>                 URL: https://issues.apache.org/jira/browse/YARN-6487
>             Project: Hadoop YARN
>          Issue Type: Task
>          Components: fairscheduler
>    Affects Versions: 2.7.0
>            Reporter: Wilfred Spiegelenburg
>            Assignee: Wilfred Spiegelenburg
>            Priority: Major
>
> Remove deprecated FairScheduler continuous scheduler code



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
