[ https://issues.apache.org/jira/browse/HDFS-16785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631619#comment-17631619 ]
ASF GitHub Bot commented on HDFS-16785:
---------------------------------------

hadoop-yetus commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1310202912

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 46m 0s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 8s | | trunk passed |
| +1 :green_heart: | compile | 1m 30s | | trunk passed |
| +1 :green_heart: | checkstyle | 1m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 37s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 45s | | trunk passed |
| +1 :green_heart: | spotbugs | 3m 49s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 7s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 23s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed |
| +1 :green_heart: | javac | 1m 18s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 1s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 28s | | the patch passed |
| +1 :green_heart: | spotbugs | 3m 32s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 31s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 347m 49s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4945/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 1m 16s | | The patch does not generate ASF License warnings. |
| | | 506m 33s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4945/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4945 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 35e578afdaa6 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 61a5d265943a4682c93a3d48f23eba8c7d44cf37 |
| Default Java | Red Hat, Inc.-1.8.0_352-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4945/2/testReport/ |
| Max. process+thread count | 2237 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4945/2/console |
| versions | git=2.9.5 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> DataNode hold BP write lock to scan disk
> ----------------------------------------
>
>         Key: HDFS-16785
>         URL: https://issues.apache.org/jira/browse/HDFS-16785
>     Project: Hadoop HDFS
>  Issue Type: Improvement
>    Reporter: ZanderXu
>    Assignee: ZanderXu
>    Priority: Major
>      Labels: pull-request-available
>
> While working on the fine-grained locking patch for the DataNode, I found that `addVolume` holds the write lock of the BP lock while scanning the new volume to collect its blocks. If we try to add a full volume that was previously taken offline and repaired, it will hold the write lock for a long time.
> The related code is as follows:
> {code:java}
> for (final NamespaceInfo nsInfo : nsInfos) {
>   String bpid = nsInfo.getBlockPoolID();
>   try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
>     fsVolume.addBlockPool(bpid, this.conf, this.timer);
>     fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
>   } catch (IOException e) {
>     LOG.warn("Caught exception when adding " + fsVolume +
>         ". Will throw later.", e);
>     exceptions.add(e);
>   }
> }
> {code}
> I also noticed that this lock was added by HDFS-15382, which means this logic did not run under the lock before.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
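The issue above is that the expensive disk scan runs while the block-pool write lock is held. A minimal sketch of the alternative the report implies, namely scanning first and taking the write lock only to publish the result, is shown below. All names here (`AddVolumeSketch`, `scanVolume`, `replicaMap`) are hypothetical stand-ins, not the Hadoop API, and a plain `ReentrantReadWriteLock` replaces Hadoop's `DataSetLockManager`:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: perform the volume scan WITHOUT holding the
// block-pool lock, then take the write lock only for the in-memory merge.
public class AddVolumeSketch {
    private final ReentrantReadWriteLock bpLock = new ReentrantReadWriteLock();
    private final Map<Long, String> replicaMap = new HashMap<>(); // blockId -> volume path

    // Stand-in for the expensive directory walk; block ids are synthetic.
    static List<Long> scanVolume(long blockCount) {
        List<Long> blocks = new ArrayList<>();
        for (long id = 0; id < blockCount; id++) {
            blocks.add(id);
        }
        return blocks;
    }

    // Scan outside the lock; hold the write lock only to merge the result.
    public int addVolume(String volumePath, long blockCount) {
        List<Long> scanned = scanVolume(blockCount); // slow part, no lock held
        bpLock.writeLock().lock();
        try {
            for (Long id : scanned) {
                replicaMap.put(id, volumePath);
            }
            return replicaMap.size();
        } finally {
            bpLock.writeLock().unlock();
        }
    }
}
```

With this shape, the write lock is held only for the in-memory merge, so concurrent readers of the replica map are blocked for the merge time rather than for the duration of a full-volume disk scan.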