[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17801084#comment-17801084 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

haiyang1987 commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1871684113

Thanks @tasanuma @ZanderXu for your review and merge!

> The NameNode should remove block from the BlocksMap if the block is marked as
> deleted.
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-17297
>                 URL: https://issues.apache.org/jira/browse/HDFS-17297
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.9
>
> When the internalReleaseLease method is called:
> {code:java}
> boolean internalReleaseLease(
>     ...
>     int minLocationsNum = 1;
>     if (lastBlock.isStriped()) {
>       minLocationsNum = ((BlockInfoStriped) lastBlock).getRealDataBlockNum();
>     }
>     if (uc.getNumExpectedLocations() < minLocationsNum &&
>         lastBlock.getNumBytes() == 0) {
>       // There is no datanode reported to this block.
>       // may be client have crashed before writing data to pipeline.
>       // This blocks doesn't need any recovery.
>       // We can remove this block and close the file.
>       pendingFile.removeLastBlock(lastBlock);
>       finalizeINodeFileUnderConstruction(src, pendingFile,
>           iip.getLatestSnapshotId(), false);
>     ...
>     }
> {code}
> If the condition `uc.getNumExpectedLocations() < minLocationsNum &&
> lastBlock.getNumBytes() == 0` is met during the UNDER_RECOVERY logic, the
> block is removed from the block list in the inode file and marked as
> deleted. However, it is not removed from the BlocksMap, which may cause a
> memory leak. Therefore the block should also be removed from the BlocksMap
> at this point.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
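The leak described in the issue can be modeled outside HDFS in a few lines of Java. The sketch below is purely illustrative (plain HashMaps and invented names such as BlocksMapSketch and removeLastBlockLeaky; this is not the real BlocksMap/INodeFile API): it shows how removing a block only from a file's own block list leaves a dangling entry in a process-wide map, and how also purging that map, which is the intent of the HDFS-17297 change, releases the entry.

```java
import java.util.HashMap;
import java.util.Map;

public class BlocksMapSketch {
    // Stands in for the NameNode-wide BlocksMap: blockId -> block info.
    static final Map<Long, String> blocksMap = new HashMap<>();

    // Buggy path: the block is dropped from the file's own block list only,
    // so the global map keeps a dangling entry (the "memory leak").
    static void removeLastBlockLeaky(Map<Long, String> fileBlocks, long blockId) {
        fileBlocks.remove(blockId);
    }

    // Fixed path, mirroring the intent of HDFS-17297: when the block is
    // marked as deleted, purge it from the global map as well.
    static void removeLastBlockFixed(Map<Long, String> fileBlocks, long blockId) {
        fileBlocks.remove(blockId);
        blocksMap.remove(blockId);
    }

    public static void main(String[] args) {
        Map<Long, String> fileBlocks = new HashMap<>();

        fileBlocks.put(1L, "blk_1");
        blocksMap.put(1L, "blk_1");
        removeLastBlockLeaky(fileBlocks, 1L);
        // blk_1 is gone from the file but still referenced by blocksMap.

        fileBlocks.put(2L, "blk_2");
        blocksMap.put(2L, "blk_2");
        removeLastBlockFixed(fileBlocks, 2L);
        // blk_2 is fully released; only the leaked blk_1 entry remains.

        System.out.println("dangling entries: " + blocksMap.keySet());
    }
}
```

In the real NameNode the analogous cleanup happens inside internalReleaseLease, where the block is already being removed from the inode's block list.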
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17801010#comment-17801010 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

tasanuma commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1871202598

Cherry-picked to branch-3.3.
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800999#comment-17800999 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

tasanuma commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1871167180

Merged. Thanks for fixing the issue, @haiyang1987. Thanks for your review, @ZanderXu.
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800998#comment-17800998 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

tasanuma merged PR #6369:
URL: https://github.com/apache/hadoop/pull/6369
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17800475#comment-17800475 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

haiyang1987 commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1869479650

Hi @tasanuma, could you please help review this PR when you have free time? Thank you very much.
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798951#comment-17798951 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

haiyang1987 commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1864355241

Hi @ayushtkn @Hexiaoqiao @ZanderXu @zhangshuyan0 @tomscut, could you please help review this PR when you have free time? Thank you very much.
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798646#comment-17798646 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

hadoop-yetus commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1863051345

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 31m 39s | | trunk passed |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 41s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 22s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 40s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 32s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 38s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 51s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 32s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 180m 5s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 22s | | The patch does not generate ASF License warnings. |
| | | 268m 32s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6369 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux bdd4cfe9ca69 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 28ca3c419404bcf5b78062b81f5788831b781c0a |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/3/testReport/ |
| Max. process+thread count | 4133 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798456#comment-17798456 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

hadoop-yetus commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1862225690

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 35m 21s | | trunk passed |
| +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 55s | | trunk passed |
| -1 :x: | shadedclient | 34m 52s | | branch has errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 41s | | the patch passed |
| +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 41s | | the patch passed |
| +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 36s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 31s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 40s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 53s | | the patch passed |
| -1 :x: | shadedclient | 23m 0s | | patch has errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 0m 42s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. |
| | | 107m 55s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6369 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux c33f92e02368 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / dc032c9264cf9cfddd66312e378dbf1169df699b |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/2/testReport/ |
| Max. process+thread count | 686 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/2/console |
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798425#comment-17798425 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

hadoop-yetus commented on PR #6369:
URL: https://github.com/apache/hadoop/pull/6369#issuecomment-1862112592

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 8m 39s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| -1 :x: | mvninstall | 0m 16s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| -1 :x: | compile | 0m 17s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. |
| -1 :x: | compile | 0m 8s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. |
| -0 :warning: | checkstyle | 3m 4s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs |
| -1 :x: | mvnsite | 0m 22s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
| -1 :x: | javadoc | 0m 22s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. |
| -1 :x: | javadoc | 0m 21s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. |
| -1 :x: | spotbugs | 0m 21s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
| +1 :green_heart: | shadedclient | 5m 13s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| -1 :x: | mvninstall | 0m 21s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | compile | 0m 21s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04. |
| -1 :x: | javac | 0m 21s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6369/1/artifact/out/patch-compile-hadoop-hdfs-proje
[jira] [Commented] (HDFS-17297) The NameNode should remove block from the BlocksMap if the block is marked as deleted.
[ https://issues.apache.org/jira/browse/HDFS-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17798420#comment-17798420 ]

ASF GitHub Bot commented on HDFS-17297:
---------------------------------------

haiyang1987 opened a new pull request, #6369:
URL: https://github.com/apache/hadoop/pull/6369

### Description of PR

https://issues.apache.org/jira/browse/HDFS-17297

When the internalReleaseLease method is called:

```
boolean internalReleaseLease(
    ...
    int minLocationsNum = 1;
    if (lastBlock.isStriped()) {
      minLocationsNum = ((BlockInfoStriped) lastBlock).getRealDataBlockNum();
    }
    if (uc.getNumExpectedLocations() < minLocationsNum &&
        lastBlock.getNumBytes() == 0) {
      // There is no datanode reported to this block.
      // may be client have crashed before writing data to pipeline.
      // This blocks doesn't need any recovery.
      // We can remove this block and close the file.
      pendingFile.removeLastBlock(lastBlock);
      finalizeINodeFileUnderConstruction(src, pendingFile,
          iip.getLatestSnapshotId(), false);
    ...
    }
```

If the condition `uc.getNumExpectedLocations() < minLocationsNum && lastBlock.getNumBytes() == 0` is met during the UNDER_RECOVERY logic, the block is removed from the block list in the inode file and marked as deleted. However, it is not removed from the BlocksMap, which may cause a memory leak. Therefore the block should also be removed from the BlocksMap at this point.