[ https://issues.apache.org/jira/browse/HDFS-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17902193#comment-17902193 ]

ASF GitHub Bot commented on HDFS-17496:
---------------------------------------

hfutatzhanghb opened a new pull request, #7196:
URL: https://github.com/apache/hadoop/pull/7196

   ### Description of PR
   Refer to HDFS-17496.
   
   This PR has two benefits:
   1. Improve the DataNode's throughput when using faster disks such as SSDs.
   2. Improve the DataNode's throughput when the DataNode has a large disk capacity; the PR mitigates volume lock contention when disk I/O utilization is consistently high.
   
   We used NVMe SSDs as volumes in DataNodes and performed some stress tests.
   
   We found that NVMe SSD and HDD disks achieve similar performance when creating lots of small files, such as 10KB.
   
   This phenomenon is counterintuitive. After analyzing the monitoring metrics, we found that the FsDataset lock became the bottleneck in high-concurrency scenarios.
   
   Currently, we have two lock levels, BLOCK_POOL and VOLUME. We can further split the volume lock into a DIR lock.
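   
   As a rough illustration of the proposed hierarchy, the sketch below nests block-pool, volume, and dir locks; writers to different subdirs then contend only on shared read locks at the coarser levels. The class and method names here are hypothetical, not taken from the patch:
   
   ```java
   import java.util.concurrent.locks.ReadWriteLock;
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   // Hypothetical sketch of the three-level hierarchy: BLOCK_POOL -> VOLUME -> DIR.
   public class HierarchicalLockSketch {
     private final ReadWriteLock blockPoolLock = new ReentrantReadWriteLock();
     private final ReadWriteLock volumeLock = new ReentrantReadWriteLock();
   
     // The coarse locks are taken in read (shared) mode, so two writers touching
     // different subdirs on the same volume no longer serialize on the volume
     // lock; only the per-subdir write lock is exclusive. dirLock is the
     // per-subdir lock looked up from the block ID (see the lookup sketch below).
     public void finalizeBlock(ReadWriteLock dirLock, Runnable diskWork) {
       blockPoolLock.readLock().lock();
       try {
         volumeLock.readLock().lock();
         try {
           dirLock.writeLock().lock();
           try {
             diskWork.run();
           } finally {
             dirLock.writeLock().unlock();
           }
         } finally {
           volumeLock.readLock().unlock();
         }
       } finally {
         blockPoolLock.readLock().unlock();
       }
     }
   }
   ```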
   
   The DIR lock is defined as follows: given a block ID, we can determine which subdir under the finalized directory the block will be placed in. We simply use subdir[0-31]/subdir[0-31] as the name of the DIR lock.
   
   For more details, please refer to the method DatanodeUtil#idToBlockDir:
   
   ```java
     public static File idToBlockDir(File root, long blockId) {
       // Bits 16..20 of the block ID select the first-level subdir (0-31).
       int d1 = (int) ((blockId >> 16) & 0x1F);
       // Bits 8..12 select the second-level subdir (0-31).
       int d2 = (int) ((blockId >> 8) & 0x1F);
       // Yields a path of the form subdir<d1>/subdir<d2> under the finalized dir.
       String path = DataStorage.BLOCK_SUBDIR_PREFIX + d1 + SEP +
           DataStorage.BLOCK_SUBDIR_PREFIX + d2;
       return new File(root, path);
     }
   ```
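   
   A minimal sketch of how the same bit arithmetic could key a per-directory lock; DirLockPool and forBlock are invented names for illustration, not the actual classes in the patch:
   
   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.locks.ReadWriteLock;
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   // Hypothetical pool of DIR locks, keyed the same way idToBlockDir keys paths.
   public class DirLockPool {
     private final ConcurrentHashMap<String, ReadWriteLock> locks =
         new ConcurrentHashMap<>();
   
     // Two block IDs share a lock iff they land in the same subdir pair, so at
     // most 32 * 32 = 1024 locks exist per volume.
     public ReadWriteLock forBlock(long blockId) {
       int d1 = (int) ((blockId >> 16) & 0x1F);
       int d2 = (int) ((blockId >> 8) & 0x1F);
       String key = "subdir" + d1 + "/subdir" + d2;
       return locks.computeIfAbsent(key, k -> new ReentrantReadWriteLock());
     }
   }
   ```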
   
   The performance comparison is as follows:
   
   Experimental setup:
   
   - 3 DataNodes, each with a single disk.
   - 10 clients concurrently writing files, then deleting them after writing.
   - 550 threads per client.
   
   
![image](https://github.com/apache/hadoop/assets/25115709/5fc1c65a-9502-4e54-b0d3-119a0eab075e)
   
   
   Specifically, we do not modify evictBlocks, onCompleteLazyPersist, or updateReplicaUnderRecovery.
   
   




> DataNode supports more fine-grained dataset lock based on blockid
> -----------------------------------------------------------------
>
>                 Key: HDFS-17496
>                 URL: https://issues.apache.org/jira/browse/HDFS-17496
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2024-04-23-16-17-07-057.png
>
>
> Recently, we used NVMe SSDs as volumes in DataNodes and performed some
> stress tests.
> We found that NVMe SSD and HDD disks achieve similar performance when
> creating lots of small files, such as 10KB.
> This phenomenon is counterintuitive. After analyzing the monitoring metrics,
> we found that the FsDataset lock became the bottleneck in high-concurrency
> scenarios.
>  
> Currently, we have two lock levels, BLOCK_POOL and VOLUME. We can further
> split the volume lock into a DIR lock.
> The DIR lock is defined as follows: given a block ID, we can determine which
> subdir under the finalized directory the block will be placed in. We simply
> use subdir[0-31]/subdir[0-31] as the name of the DIR lock.
> For more details, please refer to the method DatanodeUtil#idToBlockDir:
> {code:java}
>   public static File idToBlockDir(File root, long blockId) {
>     int d1 = (int) ((blockId >> 16) & 0x1F);
>     int d2 = (int) ((blockId >> 8) & 0x1F);
>     String path = DataStorage.BLOCK_SUBDIR_PREFIX + d1 + SEP +
>         DataStorage.BLOCK_SUBDIR_PREFIX + d2;
>     return new File(root, path);
>   }
> {code}
> The performance comparison is as follows:
> Experimental setup:
> 3 DataNodes, each with a single disk.
> 10 clients concurrently writing files, then deleting them after writing.
> 550 threads per client.
> !image-2024-04-23-16-17-07-057.png!
>  


