[ https://issues.apache.org/jira/browse/HDFS-17496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17916697#comment-17916697 ]
ASF GitHub Bot commented on HDFS-17496:
---------------------------------------

hadoop-yetus commented on PR #7280:
URL: https://github.com/apache/hadoop/pull/7280#issuecomment-2612366253

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 19s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 22m 27s | | trunk passed |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_432-8u432-ga~us1-0ubuntu2~20.04-ga |
| +1 :green_heart: | checkstyle | 0m 34s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 43s | | trunk passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Private Build-1.8.0_432-8u432-ga~us1-0ubuntu2~20.04-ga |
| +1 :green_heart: | spotbugs | 1m 45s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 40s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 20m 53s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 37s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_432-8u432-ga~us1-0ubuntu2~20.04-ga |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 28s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
| -1 :x: | javadoc | 0m 33s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7280/14/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04. |
| +1 :green_heart: | javadoc | 1m 4s | | the patch passed with JDK Private Build-1.8.0_432-8u432-ga~us1-0ubuntu2~20.04-ga |
| +1 :green_heart: | spotbugs | 1m 38s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 43s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 215m 45s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | | 292m 6s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7280/14/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/7280 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 49d8b7156649 5.15.0-130-generic #140-Ubuntu SMP Wed Dec 18 17:59:53 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 8d5965ede91b48ade12c13bd3405ad94991e767a |
| Default Java | Private Build-1.8.0_432-8u432-ga~us1-0ubuntu2~20.04-ga |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_432-8u432-ga~us1-0ubuntu2~20.04-ga |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7280/14/testReport/ |
| Max. process+thread count | 3688 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7280/14/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> DataNode supports more fine-grained dataset lock based on blockid
> ------------------------------------------------------------------
>
>                 Key: HDFS-17496
>                 URL: https://issues.apache.org/jira/browse/HDFS-17496
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.5.0
>
>         Attachments: image-2024-04-23-16-17-07-057.png
>
>
> Recently, we used NVMe SSDs as DataNode volumes and ran some stress tests.
> We found that the NVMe SSD and HDD disks achieved similar performance when creating many small files (for example, 10 KB each).
> This is counterintuitive. After analyzing the metric monitoring, we found that the fsdataset lock became the bottleneck in high-concurrency scenarios.
>
> Currently, we have two lock levels: BLOCK_POOL and VOLUME. We can further split the volume lock into a DIR lock.
> The DIR lock is defined as follows: given a block id, we can determine the subdir under the finalized directory where the block will be placed, and we use subdir[0-31]/subdir[0-31] as the name of the DIR lock.
> For more details, please refer to the method DatanodeUtil#idToBlockDir:
> {code:java}
> public static File idToBlockDir(File root, long blockId) {
>   int d1 = (int) ((blockId >> 16) & 0x1F);
>   int d2 = (int) ((blockId >> 8) & 0x1F);
>   String path = DataStorage.BLOCK_SUBDIR_PREFIX + d1 + SEP +
>       DataStorage.BLOCK_SUBDIR_PREFIX + d2;
>   return new File(root, path);
> }
> {code}
> The performance comparison is shown below.
> Experimental setup:
> 3 DataNodes, each with a single disk.
> 10 clients concurrently writing files and deleting them after writing.
> 550 threads per client.
> !image-2024-04-23-16-17-07-057.png!
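
As an illustrative aside (not part of the patch or the Yetus report): the quoted idToBlockDir mapping already determines the proposed DIR lock name. Below is a minimal, self-contained Java sketch of that mapping; the class name DirLockNameDemo, the local SUBDIR_PREFIX constant (assumed to equal DataStorage.BLOCK_SUBDIR_PREFIX, i.e. "subdir"), and the sample block ids are hypothetical and only mirror the bit arithmetic shown in the quoted snippet.

{code:java}
// Hypothetical demo class; only the bit arithmetic is taken from the quoted
// DatanodeUtil#idToBlockDir snippet.
public class DirLockNameDemo {

  // Assumed to match DataStorage.BLOCK_SUBDIR_PREFIX ("subdir") used above.
  private static final String SUBDIR_PREFIX = "subdir";

  // Two 5-bit slices of the block id select subdir[0-31]/subdir[0-31],
  // which the issue proposes to use as the DIR lock name.
  static String dirLockName(long blockId) {
    int d1 = (int) ((blockId >> 16) & 0x1F);
    int d2 = (int) ((blockId >> 8) & 0x1F);
    return SUBDIR_PREFIX + d1 + "/" + SUBDIR_PREFIX + d2;
  }

  public static void main(String[] args) {
    // 0x123456: d1 = 0x12 = 18, d2 = 0x1234 & 0x1F = 20 -> subdir18/subdir20
    System.out.println(dirLockName(0x123456L));
    // Block ids that differ only in the low 8 bits map to the same lock.
    System.out.println(dirLockName(0x123456L + 1));  // subdir18/subdir20
  }
}
{code}

Since d1 and d2 each take 32 values, this yields at most 32 x 32 = 1,024 distinct DIR locks per volume, which is the finer granularity the proposal relies on to reduce contention under high concurrency.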