[ https://issues.apache.org/jira/browse/HDFS-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17479090#comment-17479090 ]

Yuanbo Liu commented on HDFS-15180:
-----------------------------------

[~sodonnell]  Thanks for your comments.
There's some background worth clarifying. Storage machines keep getting 
bigger: we've seen 12TB x 36 disks (i.e. 432TB on a single datanode) in 
production environments. At that scale the global lock becomes the key 
bottleneck for IO performance, so we'd be glad to see this Jira make 
further progress in discussion or even be merged.
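For illustration only, here is a minimal sketch of the idea being discussed: replacing one dataset-wide lock with an independent lock per block pool, keyed by block pool id. This is not the actual HDFS-15180 patch; the class and method names below are hypothetical.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Hypothetical sketch: one read/write lock per block pool, so that
// operations on different namespaces (block pools) no longer serialize
// on a single global dataset lock.
class PerBlockPoolLocks {
    private final ConcurrentMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String bpid) {
        // Lazily create one lock per block pool id.
        return locks.computeIfAbsent(bpid, k -> new ReentrantReadWriteLock());
    }

    // Run a read-only operation under the block pool's read lock.
    <T> T withReadLock(String bpid, Supplier<T> op) {
        ReentrantReadWriteLock l = lockFor(bpid);
        l.readLock().lock();
        try {
            return op.get();
        } finally {
            l.readLock().unlock();
        }
    }

    // Run a mutating operation under the block pool's write lock.
    void withWriteLock(String bpid, Runnable op) {
        ReentrantReadWriteLock l = lockFor(bpid);
        l.writeLock().lock();
        try {
            op.run();
        } finally {
            l.writeLock().unlock();
        }
    }
}
```

With this layout, a write to block pool BP-A no longer blocks reads on block pool BP-B, which is the contention the comment above describes on dense multi-namespace datanodes.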

>  DataNode FsDatasetImpl Fine-Grained Locking via BlockPool.
> -----------------------------------------------------------
>
>                 Key: HDFS-15180
>                 URL: https://issues.apache.org/jira/browse/HDFS-15180
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: 3.2.0
>            Reporter: Qi Zhu
>            Assignee: Mingxiang Li
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HDFS-15180.001.patch, HDFS-15180.002.patch, 
> HDFS-15180.003.patch, HDFS-15180.004.patch, 
> image-2020-03-10-17-22-57-391.png, image-2020-03-10-17-31-58-830.png, 
> image-2020-03-10-17-34-26-368.png, image-2020-04-09-11-20-36-459.png
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> The FsDatasetImpl datasetLock is heavy when there are many namespaces in a 
> big cluster. We could split the FsDatasetImpl datasetLock per block pool. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
