Naganarasimha G R created YARN-6025:
---------------------------------------
Summary: Few issues in synchronization in CapacityScheduler &
AbstractYarnScheduler
Key: YARN-6025
URL: https://issues.apache.org/jira/browse/YARN-6025
Project: Hadoop YARN
Issue Type: Sub-task
Components: capacity scheduler, scheduler
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
YARN-3139 optimized locking by introducing ReentrantReadWriteLock in place of
synchronized, but the change appears to have a few issues.
# CapacityScheduler
#* nodeUpdate(RMNode) need not be synchronized, as it is the only synchronized
method remaining in the class.
#* Does setLastNodeUpdateTime in nodeUpdate really need to be called under the
readLock? getLastNodeUpdateTime is invoked without any lock, and moreover the
field is volatile.
#* getUserGroupMappingPlacementRule need not be public, as it is only called
internally and is not used in tests. Further, it is called from initScheduler
and reinitialize, both of which already hold the write lock, so I presume
taking the read lock inside it serves no purpose.
# AbstractYarnScheduler
#* recoverContainersOnNode is synchronized and also holds the write lock for
the whole method, so I presume synchronized is not required here.
#* The nodeUpdate method is also synchronized, but looking at the updates it
performs, I do not see any place where concurrent updates from two different
nodes would conflict (except for schedulerHealth, which is already handled
internally by a ConcurrentHashMap). Even if protection is required, we could
better use the write lock. (This also depends on the decision in the next
point.)
#* The readLock is used only in containerLaunchedOnNode, and I am not
completely sure a read lock is required there. If it is not, then there may be
no use for read-write locks in AbstractYarnScheduler at all, since in general a
read-write lock carries a performance overhead over synchronized blocks on
frequently accessed code paths.
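To illustrate the two lock patterns questioned above, here is a minimal sketch (a hypothetical class, not the actual scheduler code): a single volatile long can be read and written safely without any lock, and acquiring the read lock while already holding the write lock is legal with ReentrantReadWriteLock (it is reentrant and supports downgrading) but adds nothing except overhead.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical illustration of the flagged patterns; field and method names
// are chosen to mirror the report, not copied from CapacityScheduler.
public class LockSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();

    // volatile already guarantees visibility of individual reads and writes,
    // so wrapping them in readLock adds cost without extra safety.
    private volatile long lastNodeUpdateTime;

    public void setLastNodeUpdateTime(long t) {
        lastNodeUpdateTime = t;    // no lock needed for a single volatile write
    }

    public long getLastNodeUpdateTime() {
        return lastNodeUpdateTime; // mirrors the lock-free getter in the report
    }

    // Analogue of getUserGroupMappingPlacementRule being invoked only from
    // methods that already hold the write lock: the nested readLock.lock()
    // succeeds, but it is redundant while the write lock is held.
    public boolean redundantReadLockUnderWriteLock() {
        writeLock.lock();
        try {
            readLock.lock();  // legal (reentrant/downgradable), but pointless here
            try {
                return lock.isWriteLockedByCurrentThread();
            } finally {
                readLock.unlock();
            }
        } finally {
            writeLock.unlock();
        }
    }

    public static void main(String[] args) {
        LockSketch s = new LockSketch();
        s.setLastNodeUpdateTime(42L);
        System.out.println(s.getLastNodeUpdateTime());
        System.out.println(s.redundantReadLockUnderWriteLock());
    }
}
```

This is why removing the readLock around the volatile setter, and dropping the nested read-lock acquisition in methods called under the write lock, should be safe.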
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]