[ https://issues.apache.org/jira/browse/HDFS-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17875400#comment-17875400 ]
ASF GitHub Bot commented on HDFS-17542:
---------------------------------------

hadoop-yetus commented on PR #6915:
URL: https://github.com/apache/hadoop/pull/6915#issuecomment-2301434368

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 12m 15s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 45m 21s | | trunk passed |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 :green_heart: | checkstyle | 1m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 27s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 10s | | trunk passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 :green_heart: | spotbugs | 3m 18s | | trunk passed |
| +1 :green_heart: | shadedclient | 36m 12s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 36m 33s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 58s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6915/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 116 unchanged - 1 fixed = 118 total (was 117) |
| +1 :green_heart: | mvnsite | 1m 18s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 |
| +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| +1 :green_heart: | spotbugs | 3m 17s | | the patch passed |
| +1 :green_heart: | shadedclient | 36m 11s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 218m 12s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 370m 25s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6915/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6915 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 2c8155d72793 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / dd74849c9d24bac57b6496f075803d75b0cc2c8d |
| Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6915/2/testReport/ |
| Max. process+thread count | 4104 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6915/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> EC: Optimize the EC block reconstruction.
> -----------------------------------------
>
>                 Key: HDFS-17542
>                 URL: https://issues.apache.org/jira/browse/HDFS-17542
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Chenyu Zheng
>            Assignee: Chenyu Zheng
>            Priority: Major
>              Labels: pull-request-available
>
> The current reconstruction process for EC blocks follows the one used for ordinary contiguous blocks. It is implemented mainly through the work items constructed by computeReconstructionWorkForBlocks and can be roughly divided into three steps:
> * scheduleReconstruction
> * chooseTargets
> * validateReconstructionWork
>
> For ordinary contiguous blocks:
> * (1) scheduleReconstruction: select srcNodes as the sources of the block copy, based on the state of each replica of the block.
> * (2) chooseTargets: select the targets of the copy.
> * (3) validateReconstructionWork: add the copy command to the srcNode; the srcNode receives the command through its heartbeat and copies the block from source to target.
>
> For EC blocks, (1) and (2) are nearly the same. However, whether to perform a simple block copy or a block reconstruction is decided only in (3). When some storage is busy, this step may produce no work at all, which leads to the problem described in HDFS-17516. Even when no block copy or reconstruction is generated, pendingReconstruction and neededReconstruction are still updated until the block times out, wasting the scheduling opportunity.
>
> Because the decision between block copy and block reconstruction is made in (3), the otherwise unnecessary liveBusyBlockIndices and excludeReconstructedIndices had to be introduced, and many known bugs are related to them. This should be avoided.
>
> Improvement:
> * Move the decision of whether to copy or reconstruct blocks from (3) to (1); see the sketch after this message.
>
> Such an improvement also makes it easier to implement the explicit specification of the reconstruction block index mentioned in HDFS-16874.
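A minimal sketch of the proposed direction, in simplified stand-in Java rather than the real org.apache.hadoop.hdfs.server.blockmanagement classes (every type and method name below, such as EcWorkItem, scheduleReconstruction's simplified signature, and validateAndDispatch, is an illustrative assumption): the copy-vs-reconstruct decision, and the "all sources busy" short-circuit, are taken while sources are chosen in step (1), so step (3) only validates targets and dispatches an already-typed work item.

```java
// Illustrative sketch only: simplified stand-in types, not the actual
// org.apache.hadoop.hdfs.server.blockmanagement implementation.
import java.util.List;

enum EcWorkType { COPY, RECONSTRUCT }

/** A work item whose kind (copy vs. reconstruct) is fixed when it is created in step (1). */
final class EcWorkItem {
  final EcWorkType type;
  final List<Integer> srcIndices;     // live internal-block indices used as sources
  final List<Integer> missingIndices; // internal-block indices that must be regenerated

  EcWorkItem(EcWorkType type, List<Integer> srcIndices, List<Integer> missingIndices) {
    this.type = type;
    this.srcIndices = srcIndices;
    this.missingIndices = missingIndices;
  }
}

final class EcReconstructionScheduler {

  /**
   * Step (1): decide copy vs. reconstruct up front.
   * Returns null when no usable source exists, so the caller can skip the block
   * instead of parking it in pendingReconstruction until it times out.
   */
  EcWorkItem scheduleReconstruction(List<Integer> liveIndices,
                                    List<Integer> missingIndices,
                                    boolean allSourcesBusy) {
    if (allSourcesBusy) {
      return null; // no work this round; retry on a later scheduling pass
    }
    if (missingIndices.isEmpty()) {
      // Every internal block still has a live copy somewhere; the group only
      // needs extra replicas of existing internal blocks, so a plain copy suffices.
      return new EcWorkItem(EcWorkType.COPY, liveIndices, List.of());
    }
    // At least one internal block is lost entirely: a decode-based
    // reconstruction reading from the live indices is required.
    return new EcWorkItem(EcWorkType.RECONSTRUCT, liveIndices, missingIndices);
  }

  /**
   * Step (3): no copy-vs-reconstruct decision is left here; it only checks that
   * targets are still valid and dispatches the already-typed work item.
   */
  boolean validateAndDispatch(EcWorkItem work, List<String> targetDatanodes) {
    if (work == null || targetDatanodes.isEmpty()) {
      return false;
    }
    // Dispatch a copy command or a reconstruction command according to work.type ...
    return true;
  }
}
```

Under this assumed shape, fixing the work type at scheduling time is what removes the need for liveBusyBlockIndices/excludeReconstructedIndices-style bookkeeping in step (3), and it also gives the explicit reconstruction-index specification mentioned in HDFS-16874 a natural place to plug in.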