[ https://issues.apache.org/jira/browse/HDFS-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17913599#comment-17913599 ]
ASF GitHub Bot commented on HDFS-17496:
---------------------------------------

hfutatzhanghb commented on code in PR #7280:
URL: https://github.com/apache/hadoop/pull/7280#discussion_r1917922216


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java:
##########
@@ -127,6 +131,32 @@ public static File idToBlockDir(File root, long blockId) {
     return new File(root, path);
   }
 
+  /**
+   * Take an example.
+   * We have a block with a blockid mapping to:
+   * "/data1/hadoop/hdfs/datanode/current/BP-xxxx/current/finalized/subdir0/subdir0"
+   * We return "subdir0/subdir0".
+   * @param blockId the block id.
+   * @return two-level subdir string where the block will be stored.
+   */
+  public static String idToBlockDirSuffixName(long blockId) {
+    int d1 = (int) ((blockId >> 16) & 0x1F);
+    int d2 = (int) ((blockId >> 8) & 0x1F);

Review Comment:
   @Hexiaoqiao Yes, `blockId >> 16` and `blockId >> 8` are computed the same way as in `idToBlockDir`. I have refactored those methods and made the literal `0x1F` a static final field in DatanodeUtil. Thanks a lot for your suggestions.


> DataNode supports more fine-grained dataset lock based on blockid
> -----------------------------------------------------------------
>
>                 Key: HDFS-17496
>                 URL: https://issues.apache.org/jira/browse/HDFS-17496
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.5.0
>
>         Attachments: image-2024-04-23-16-17-07-057.png
>
>
> Recently, we used NvmeSSD as volumes in datanodes and ran some stress tests.
> We found that NvmeSSD and HDD disks achieved similar performance when creating lots of small files, such as 10 KB files.
> This is counterintuitive. After analyzing the metric monitoring, we found that the fsdataset lock became the bottleneck in high-concurrency scenarios.
> Currently, we have two lock levels: BLOCK_POOL and VOLUME. We can further split the volume lock into DIR locks.
> A DIR lock is defined as follows: given a blockid, we can determine which subdir of the finalized dir the block will be placed in, and we use subdir[0-31]/subdir[0-31] as the name of the DIR lock.
> For more details, please refer to the method DatanodeUtil#idToBlockDir:
> {code:java}
> public static File idToBlockDir(File root, long blockId) {
>   int d1 = (int) ((blockId >> 16) & 0x1F);
>   int d2 = (int) ((blockId >> 8) & 0x1F);
>   String path = DataStorage.BLOCK_SUBDIR_PREFIX + d1 + SEP +
>       DataStorage.BLOCK_SUBDIR_PREFIX + d2;
>   return new File(root, path);
> }
> {code}
> The performance comparison is as follows.
> Experimental setup:
> 3 DataNodes, each with a single disk.
> 10 clients concurrently write files and delete them after writing.
> 550 threads per client.
> !image-2024-04-23-16-17-07-057.png!
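
As a reader's aid, below is a minimal, self-contained sketch of the refactor the review comment describes: the shift-and-mask logic is pulled into `idToBlockDirSuffixName`, the literal `0x1F` becomes a shared static final field, and `idToBlockDir` delegates to the new helper. The class name and the constant names (`BLOCK_SUBDIR_MASK`, the local `BLOCK_SUBDIR_PREFIX`/`SEP` stand-ins) are assumptions for illustration only, not necessarily the names used in the code merged for PR #7280.

{code:java}
// Sketch only; names below are assumptions, not the actual patch in PR #7280.
import java.io.File;

public class DatanodeUtilSketch {

  // Hypothetical shared constant replacing the repeated literal 0x1F:
  // the low 5 bits of each shifted value select one of 32 subdirs.
  private static final int BLOCK_SUBDIR_MASK = 0x1F;

  // Local stand-ins for DataStorage.BLOCK_SUBDIR_PREFIX and the SEP separator.
  private static final String BLOCK_SUBDIR_PREFIX = "subdir";
  private static final String SEP = File.separator;

  /**
   * Returns the two-level "subdirX/subdirY" suffix for a block id,
   * e.g. "subdir0/subdir0". The same string can serve as the DIR lock
   * name described in the issue.
   */
  public static String idToBlockDirSuffixName(long blockId) {
    int d1 = (int) ((blockId >> 16) & BLOCK_SUBDIR_MASK);
    int d2 = (int) ((blockId >> 8) & BLOCK_SUBDIR_MASK);
    return BLOCK_SUBDIR_PREFIX + d1 + SEP + BLOCK_SUBDIR_PREFIX + d2;
  }

  /**
   * idToBlockDir can delegate to the suffix helper so the shift-and-mask
   * logic lives in one place.
   */
  public static File idToBlockDir(File root, long blockId) {
    return new File(root, idToBlockDirSuffixName(blockId));
  }
}
{code}

With this shape, the fine-grained DIR lock from the issue description can simply key on the string returned by `idToBlockDirSuffixName(blockId)`, so blocks that land in the same finalized subdir pair contend on the same lock while blocks in different subdirs do not.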