[ https://issues.apache.org/jira/browse/YARN-5964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15730429#comment-15730429 ]

Karthik Kambatla commented on YARN-5964:
----------------------------------------

Do you have continuous scheduling turned on? On larger clusters, we have 
noticed that it can lead to lock contention. 

In any case, I do agree there is a need for finer-grained locks. 

> fairscheduler use too many object lock, leads to low performance
> ----------------------------------------------------------------
>
>                 Key: YARN-5964
>                 URL: https://issues.apache.org/jira/browse/YARN-5964
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: fairscheduler
>    Affects Versions: 2.7.1
>         Environment: CentOS-7.1
>            Reporter: zhengchenyu
>            Priority: Critical
>             Fix For: 2.7.1
>
>   Original Estimate: 2m
>  Remaining Estimate: 2m
>
> When too many applications are running, we found that clients couldn't submit 
> applications, and the call queue length on port 8032 was high. I captured a 
> jstack of the ResourceManager while the call queue length was high and found 
> that the "IPC Server handler xxx on 8032" threads were waiting for the 
> FairScheduler object lock while nodeUpdate held it. This long processing time 
> is probably what prevents clients from submitting applications. 
> Here I don't consider the problem that clients can't submit applications, and 
> only look at the performance of the FairScheduler. Too many methods require 
> the same object lock, so the lock granularity is too coarse. For example, 
> nodeUpdate and getAppWeight must hold the same object lock, which is 
> unreasonable and inefficient. I recommend replacing the current lock with 
> finer-grained locks.
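
For illustration only, a minimal sketch of what finer-grained locking could
look like, assuming a simplified stand-in class (the class and field below are
hypothetical and not the real FairScheduler internals; the method names merely
mirror the ones mentioned above). The idea is that read-only paths such as
getAppWeight take a shared lock and no longer block behind nodeUpdate:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical, simplified sketch -- not the actual FairScheduler code.
public class SchedulerLockSketch {

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private double totalWeight = 1.0;   // stand-in for shared scheduler state

  // Read-only path: shared lock, many IPC handler threads can read at once.
  public double getAppWeight() {
    lock.readLock().lock();
    try {
      return totalWeight;
    } finally {
      lock.readLock().unlock();
    }
  }

  // Mutating path: exclusive lock, held only for the short critical section.
  public void nodeUpdate(double delta) {
    lock.writeLock().lock();
    try {
      totalWeight += delta;
    } finally {
      lock.writeLock().unlock();
    }
  }
}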



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
