[
https://issues.apache.org/jira/browse/HDFS-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203837#comment-17203837
]
Stephen O'Donnell commented on HDFS-15382:
------------------------------------------
[~LiJinglun] There are similar changes already committed on trunk in HDFS-15150
and HDFS-15160. Those changes do not go as far as the one suggested here, but
they are simpler and hence easier to backport and review. Some people who tried
those patches out reported good results in reducing DN pauses.
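For anyone not familiar with those patches, the core of the HDFS-15150 /
HDFS-15160 change is essentially replacing the single exclusive dataset lock
with a read/write lock, so that read-only queries stop serializing behind each
other. A minimal sketch of the idea (class and method names here are
illustrative, not the actual patch, which also wraps the lock in
AutoCloseableLock and adds instrumentation):
{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Rough sketch of the read/write-lock idea; names are illustrative only.
class FsDatasetSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);

  // Read-only queries only need the shared read lock.
  long getBlockLength(String blockId) {
    lock.readLock().lock();
    try {
      return lookupLength(blockId); // hypothetical in-memory lookup
    } finally {
      lock.readLock().unlock();
    }
  }

  // Mutations still take the exclusive write lock.
  void finalizeBlock(String blockId) {
    lock.writeLock().lock();
    try {
      markFinalized(blockId); // hypothetical mutation
    } finally {
      lock.writeLock().unlock();
    }
  }

  private long lookupLength(String blockId) { return 0L; }
  private void markFinalized(String blockId) { }
}
{code}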
[~hexiaoqiao] The approach here does seem like a good one and worth exploring.
The concern I have is the complexity of the change and the amount of code it
touches. It would be great to also benchmark the simple read/write lock
alongside the change here to see how the two compare.
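To make that comparison concrete, my reading of the two-level scheme proposed
here is roughly the following, again only a sketch under my own naming, not
the attached patch:
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the two-level (blockpool -> volume) locking proposed here.
// Each (blockpool, volume) pair gets its own lock, created lazily, so a
// slow disk on one volume no longer blocks the other volumes in the pool.
class TwoLevelLockSketch {
  private final Map<String, ReentrantReadWriteLock> volumeLocks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock lockFor(String bpId, String volume) {
    return volumeLocks.computeIfAbsent(bpId + "/" + volume,
        k -> new ReentrantReadWriteLock(true));
  }

  void writeBlock(String bpId, String volume, Runnable op) {
    ReentrantReadWriteLock l = lockFor(bpId, volume);
    l.writeLock().lock();
    try {
      op.run(); // only contends with operations on the same volume
    } finally {
      l.writeLock().unlock();
    }
  }
}
{code}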
I also think it would be worth exploring where the lock is held during IO
operations (i.e. potentially held for a long time) and trying to avoid holding
the lock across a disk IO. If we could do this on the common code paths
(create/write block, read block), then I think it would make most of the
problems go away.
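For example, on the read path the pattern could become something like the
sketch below (all names made up for illustration; this is the shape of the
change, not a worked patch):
{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: hold the lock just long enough for the in-memory lookup,
// then do the disk IO with no lock held.
class ReadPathSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
  private final Map<String, File> replicaMap = new ConcurrentHashMap<>();

  InputStream openBlockForRead(String blockId) throws IOException {
    File blockFile;
    lock.readLock().lock();
    try {
      blockFile = replicaMap.get(blockId); // in-memory lookup only
    } finally {
      lock.readLock().unlock();
    }
    if (blockFile == null) {
      throw new IOException("Replica not found: " + blockId);
    }
    // The expensive open/read happens outside the lock, so one slow disk
    // cannot stall every other dataset operation.
    return new FileInputStream(blockFile);
  }
}
{code}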
There are also the recent changes to remove the locking in DirectoryScanner
(HDFS-15415), which we are seeing cause a lot of problems on the 3.x branches.
> Split FsDatasetImpl from blockpool lock to blockpool volume lock
> -----------------------------------------------------------------
>
> Key: HDFS-15382
> URL: https://issues.apache.org/jira/browse/HDFS-15382
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Aiphago
> Assignee: Aiphago
> Priority: Major
> Fix For: 3.2.1
>
> Attachments: HDFS-15382-sample.patch, image-2020-06-02-1.png,
> image-2020-06-03-1.png
>
>
> In HDFS-15180 we split the lock to blockpool granularity. But when one volume
> is under heavy load, it blocks other requests that are in the same blockpool
> but on a different volume. So we split the lock into two levels to avoid this
> and to improve datanode performance.