[jira] [Work logged] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations
[ https://issues.apache.org/jira/browse/HDFS-16024?focusedWorklogId=599623=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599623 ] ASF GitHub Bot logged work on HDFS-16024: - Author: ASF GitHub Bot Created on: 20/May/21 05:51 Start Date: 20/May/21 05:51 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3009: URL: https://github.com/apache/hadoop/pull/3009#issuecomment-844720974 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 50s | | trunk passed | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 33s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 44s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 26m 50s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 108m 23s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3009 | | JIRA Issue | HDFS-16024 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 99de4f31a898 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 088a40757d2af2a13f021fbbf954006e759b9a96 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/8/testReport/ | | Max. process+thread count | 2290 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/8/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT
[jira] [Commented] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations
[ https://issues.apache.org/jira/browse/HDFS-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348070#comment-17348070 ] Hadoop QA commented on HDFS-16024: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 51s{color} | | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} codespell {color} | {color:blue} 0m 1s{color} | | {color:blue} codespell was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 50s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 1m 33s{color} | | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 44s{color} | | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} blanks {color} | {color:green} 0m 0s{color} | | {color:green} The patch has no blanks issues. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 1m 27s{color} | | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 1s{color} | | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | || || || || {color:brown} Other Tests {color} || || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 50s{color} | | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | | {color:green} The patch
[jira] [Work logged] (HDFS-16018) Optimize the display of hdfs "count -e" or "count -t" command
[ https://issues.apache.org/jira/browse/HDFS-16018?focusedWorklogId=599589=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599589 ] ASF GitHub Bot logged work on HDFS-16018: - Author: ASF GitHub Bot Created on: 20/May/21 03:24 Start Date: 20/May/21 03:24 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #2994: URL: https://github.com/apache/hadoop/pull/2994#issuecomment-844655803 @whbing Thanks for contribution, merged -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599589) Time Spent: 1.5h (was: 1h 20m) > Optimize the display of hdfs "count -e" or "count -t" command > - > > Key: HDFS-16018 > URL: https://issues.apache.org/jira/browse/HDFS-16018 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Reporter: Hongbing Wang >Assignee: Hongbing Wang >Priority: Minor > Labels: pull-request-available > Attachments: fs_count_fixed.png, fs_count_origin.png > > Time Spent: 1.5h > Remaining Estimate: 0h > > The display of `fs -count -e`or `fs -count -t` is not aligned. > *Current display:* > *!fs_count_origin.png|width=1184,height=156!* > *Fixed display:* > *!fs_count_fixed.png|width=1217,height=157!* -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-16018) Optimize the display of hdfs "count -e" or "count -t" command
[ https://issues.apache.org/jira/browse/HDFS-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Fei resolved HDFS-16018. Fix Version/s: 3.4.0 Resolution: Fixed > Optimize the display of hdfs "count -e" or "count -t" command > - > > Key: HDFS-16018 > URL: https://issues.apache.org/jira/browse/HDFS-16018 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Reporter: Hongbing Wang >Assignee: Hongbing Wang >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: fs_count_fixed.png, fs_count_origin.png > > Time Spent: 1.5h > Remaining Estimate: 0h > > The display of `fs -count -e` or `fs -count -t` is not aligned. > *Current display:* > *!fs_count_origin.png|width=1184,height=156!* > *Fixed display:* > *!fs_count_fixed.png|width=1217,height=157!*
[jira] [Work logged] (HDFS-16018) Optimize the display of hdfs "count -e" or "count -t" command
[ https://issues.apache.org/jira/browse/HDFS-16018?focusedWorklogId=599588&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599588 ] ASF GitHub Bot logged work on HDFS-16018: - Author: ASF GitHub Bot Created on: 20/May/21 03:24 Start Date: 20/May/21 03:24 Worklog Time Spent: 10m Work Description: ferhui merged pull request #2994: URL: https://github.com/apache/hadoop/pull/2994 Issue Time Tracking --- Worklog Id: (was: 599588) Time Spent: 1h 20m (was: 1h 10m) > Optimize the display of hdfs "count -e" or "count -t" command > - > > Key: HDFS-16018 > URL: https://issues.apache.org/jira/browse/HDFS-16018 > Project: Hadoop HDFS > Issue Type: Improvement > Components: dfsclient >Reporter: Hongbing Wang >Assignee: Hongbing Wang >Priority: Minor > Labels: pull-request-available > Attachments: fs_count_fixed.png, fs_count_origin.png > > Time Spent: 1h 20m > Remaining Estimate: 0h > > The display of `fs -count -e` or `fs -count -t` is not aligned. > *Current display:* > *!fs_count_origin.png|width=1184,height=156!* > *Fixed display:* > *!fs_count_fixed.png|width=1217,height=157!*
[jira] [Work logged] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
[ https://issues.apache.org/jira/browse/HDFS-16031?focusedWorklogId=599578=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599578 ] ASF GitHub Bot logged work on HDFS-16031: - Author: ASF GitHub Bot Created on: 20/May/21 02:36 Start Date: 20/May/21 02:36 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3027: URL: https://github.com/apache/hadoop/pull/3027#issuecomment-844637826 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 30s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 44s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 15s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 23s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 57s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 381m 14s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 473m 18s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3027/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3027 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b2cd9727e938 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 920d33926c8121a91d23f49a0b79dd97d40ddeb9 | | Default Java |
[jira] [Updated] (HDFS-16029) Divide by zero bug in InstrumentationService.java
[ https://issues.apache.org/jira/browse/HDFS-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiyuan GUO updated HDFS-16029: -- Component/s: (was: security) libhdfs > Divide by zero bug in InstrumentationService.java > - > > Key: HDFS-16029 > URL: https://issues.apache.org/jira/browse/HDFS-16029 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs >Reporter: Yiyuan GUO >Priority: Major > Labels: easy-fix, security > > In the file _lib/service/instrumentation/InstrumentationService.java,_ the > method > _Timer.getValues_ has the following > [code|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L236]: > {code:java} > long[] getValues() { > .. > int limit = (full) ? size : (last + 1); > .. > values[AVG_TOTAL] = values[AVG_TOTAL] / limit; > } > {code} > The variable _limit_ is used as a divisor. However, its value may be equal to > _last + 1,_ which can be zero since _last_ is initialized to -1 in the > [constructor|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L222]: > {code:java} > public Timer(int size) { > ... > last = -1; > } > {code} > Thus, a divide by zero problem can happen.
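The failure mode is easy to reproduce in isolation. Below is a hypothetical, stripped-down reduction of the bookkeeping described above (class and member names are illustrative, not the actual HttpFS ones), showing the zero `limit` case and one possible guard:

```java
// Hypothetical reduction of the Timer bookkeeping; focuses only on how
// "limit" is computed. Not the actual InstrumentationService code.
class TimerSketch {
    private long total = 0;      // running sum, like values[AVG_TOTAL]
    private int last = -1;       // -1 until the first sample, as in the constructor
    private boolean full = false;
    private final int size;

    TimerSketch(int size) {
        this.size = size;
    }

    void sample(long value) {
        last = (last + 1) % size;
        if (last == size - 1) {
            full = true;         // the ring buffer has wrapped at least once
        }
        total += value;
    }

    long average() {
        int limit = full ? size : (last + 1);
        // With no samples yet, last is still -1, so limit == 0 and the
        // unguarded division would throw ArithmeticException.
        if (limit == 0) {
            return 0;
        }
        return total / limit;
    }
}
```

Before any sample arrives, `last` is still -1, so `limit` computes to 0; the guard returns 0 instead of letting the division throw `ArithmeticException`.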
[jira] [Work logged] (HDFS-15914) Possible Resource Leak in org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap
[ https://issues.apache.org/jira/browse/HDFS-15914?focusedWorklogId=599548=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599548 ] ASF GitHub Bot logged work on HDFS-15914: - Author: ASF GitHub Bot Created on: 20/May/21 00:47 Start Date: 20/May/21 00:47 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2809: URL: https://github.com/apache/hadoop/pull/2809#issuecomment-844595720 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 10s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 6s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 14s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 12s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 15s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 410m 44s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2809/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 498m 21s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2809/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2809 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 51de12c252fb 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Work logged] (HDFS-13522) Support observer node from Router-Based Federation
[ https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=599538=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599538 ] ASF GitHub Bot logged work on HDFS-13522: - Author: ASF GitHub Bot Created on: 20/May/21 00:03 Start Date: 20/May/21 00:03 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3005: URL: https://github.com/apache/hadoop/pull/3005#issuecomment-844578560 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 28s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 25s | | trunk passed | | +1 :green_heart: | compile | 20m 50s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 12s | | trunk passed | | +1 :green_heart: | javadoc | 4m 0s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 5m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 9m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 53s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 31s | | the patch passed | | +1 :green_heart: | compile | 20m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 12s | | the patch passed | | +1 :green_heart: | compile | 18m 10s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 10s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 48s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/7/artifact/out/results-checkstyle-root.txt) | root: The patch generated 10 new + 905 unchanged - 1 fixed = 915 total (was 906) | | +1 :green_heart: | mvnsite | 5m 9s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 3m 57s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 5m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 10m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 7s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 37s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 363m 29s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. 
| | -1 :x: | unit | 1m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. | | +1 :green_heart: | asflicense | 1m 6s | | The patch does not generate ASF License warnings. | | | | 594m 53s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | |
[jira] [Assigned] (HDFS-15599) RBF: Add API to expose resolved destinations (namespace) in Router
[ https://issues.apache.org/jira/browse/HDFS-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li reassigned HDFS-15599: - Assignee: Qifan Shi > RBF: Add API to expose resolved destinations (namespace) in Router > -- > > Key: HDFS-15599 > URL: https://issues.apache.org/jira/browse/HDFS-15599 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Fengnan Li >Assignee: Qifan Shi >Priority: Major > > We have often seen requests to find out where a path in the Router actually > points. Two main use cases are: > 1) Calculate the HDFS capacity usage allocation of all Hive tables which > have been onboarded to the Router. > 2) A failure-prevention check for cross-cluster rename: first check the > source and destination HDFS locations, then issue a distcp command if > possible to avoid the exception. > Inside the Router, the function getLocationsForPath does the work, but it is > internal only and not visible to clients. > RouterAdmin has getMountTableEntries, but this is a dump of the mount table > without any resolving. > > We are proposing to add such an API, and there are two ways: > 1) Add this API in RouterRpcServer, which requires a change in > ClientNameNodeProtocol to include the new API. > 2) Add this API in RouterAdminServer, which requires a protocol between the > client and the admin server. > > There is an existing resolvePath in FileSystem which can be used to > implement this call from the client side.
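For illustration, the resolution the proposed API would expose is essentially a longest-prefix match of the client path against the mount table. A toy sketch in plain Java (no Hadoop types; all names here are hypothetical, and real mount entries can have multiple destinations):

```java
import java.util.HashMap;
import java.util.Map;

// Toy mount-table resolver: maps a router path to a single destination
// namespace/path via longest-prefix match, roughly the kind of answer
// getLocationsForPath computes internally. Illustrative only.
class MountTableSketch {
    private final Map<String, String> mounts = new HashMap<>();

    void addEntry(String mountPoint, String destination) {
        mounts.put(mountPoint, destination);
    }

    String resolve(String path) {
        String candidate = path;
        // Walk up the path components until a mount point matches.
        while (candidate != null) {
            String dest = mounts.get(candidate);
            if (dest != null) {
                return dest + path.substring(candidate.length());
            }
            int slash = candidate.lastIndexOf('/');
            if (slash > 0) {
                candidate = candidate.substring(0, slash);
            } else {
                candidate = candidate.equals("/") ? null : "/";
            }
        }
        return null; // no mount entry covers this path
    }
}
```

With a mount entry `/data/warehouse -> hdfs://ns1/warehouse`, `resolve("/data/warehouse/t1")` yields `hdfs://ns1/warehouse/t1`, which is the kind of answer the capacity-accounting and cross-cluster-rename use cases above need.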
[jira] [Assigned] (HDFS-15599) RBF: Add API to expose resolved destinations (namespace) in Router
[ https://issues.apache.org/jira/browse/HDFS-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li reassigned HDFS-15599: - Assignee: (was: Fengnan Li) > RBF: Add API to expose resolved destinations (namespace) in Router > -- > > Key: HDFS-15599 > URL: https://issues.apache.org/jira/browse/HDFS-15599 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Fengnan Li >Priority: Major > > We have often seen requests to find out where a path in the Router actually > points. Two main use cases are: > 1) Calculate the HDFS capacity usage allocation of all Hive tables which > have been onboarded to the Router. > 2) A failure-prevention check for cross-cluster rename: first check the > source and destination HDFS locations, then issue a distcp command if > possible to avoid the exception. > Inside the Router, the function getLocationsForPath does the work, but it is > internal only and not visible to clients. > RouterAdmin has getMountTableEntries, but this is a dump of the mount table > without any resolving. > > We are proposing to add such an API, and there are two ways: > 1) Add this API in RouterRpcServer, which requires a change in > ClientNameNodeProtocol to include the new API. > 2) Add this API in RouterAdminServer, which requires a protocol between the > client and the admin server. > > There is an existing resolvePath in FileSystem which can be used to > implement this call from the client side.
[jira] [Assigned] (HDFS-15675) TestRouterRpcMultiDestination#testErasureCoding fails on trunk
[ https://issues.apache.org/jira/browse/HDFS-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li reassigned HDFS-15675: - Assignee: Fengnan Li > TestRouterRpcMultiDestination#testErasureCoding fails on trunk > -- > > Key: HDFS-15675 > URL: https://issues.apache.org/jira/browse/HDFS-15675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Fengnan Li >Priority: Major > > qbt report (Nov 8, 2020, 11:28 AM) shows failures in testErasureCoding -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-15857) Space is missed in the print result of ECAdmin.RemoveECPolicyCommand
[ https://issues.apache.org/jira/browse/HDFS-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li reassigned HDFS-15857: - Assignee: Fengnan Li > Space is missed in the print result of ECAdmin.RemoveECPolicyCommand > > > Key: HDFS-15857 > URL: https://issues.apache.org/jira/browse/HDFS-15857 > Project: Hadoop HDFS > Issue Type: Improvement > Components: ec >Affects Versions: 3.4.0 >Reporter: Shiyou xin >Assignee: Fengnan Li >Priority: Minor > > System.out.println("Erasure coding policy " + ecPolicyName + > "is removed"); > > It would be better to insert a space between ecPolicyName and "is removed". > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
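The one-character nature of the reported fix can be seen in a standalone sketch (class and method names here are illustrative, not the actual ECAdmin source):

```java
public class RemoveEcPolicyMessage {

    // Buggy version: no space before "is removed", so the output runs the
    // policy name and the verb together, e.g. "...policy RS-6-3-1024kis removed".
    static String buggyMessage(String ecPolicyName) {
        return "Erasure coding policy " + ecPolicyName + "is removed";
    }

    // Fixed version: a leading space on the second literal separates the words.
    static String fixedMessage(String ecPolicyName) {
        return "Erasure coding policy " + ecPolicyName + " is removed";
    }

    public static void main(String[] args) {
        System.out.println(buggyMessage("RS-6-3-1024k")); // words fused
        System.out.println(fixedMessage("RS-6-3-1024k")); // words separated
    }
}
```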
[jira] [Updated] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
[ https://issues.apache.org/jira/browse/HDFS-16031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16031: -- Labels: pull-request-available (was: ) > Possible Resource Leak in > org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap > - > > Key: HDFS-16031 > URL: https://issues.apache.org/jira/browse/HDFS-16031 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Narges Shadab >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > We notice a possible resource leak in > [getCompressedAliasMap|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java#L320]. > If {{finish()}} at line 334 throws an IOException, then {{tOut, gzOut}} and > {{bOut}} remain open since the exception isn't caught locally, and there is > no way for any caller to close them. > I've submitted a pull request to fix it. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
[ https://issues.apache.org/jira/browse/HDFS-16031?focusedWorklogId=599397=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599397 ] ASF GitHub Bot logged work on HDFS-16031: - Author: ASF GitHub Bot Created on: 19/May/21 18:41 Start Date: 19/May/21 18:41 Worklog Time Spent: 10m Work Description: Nargeshdb opened a new pull request #3027: URL: https://github.com/apache/hadoop/pull/3027 This PR fixes the issue mentioned [here](https://issues.apache.org/jira/browse/HDFS-16031). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599397) Remaining Estimate: 0h Time Spent: 10m > Possible Resource Leak in > org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap > - > > Key: HDFS-16031 > URL: https://issues.apache.org/jira/browse/HDFS-16031 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Narges Shadab >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > We notice a possible resource leak in > [getCompressedAliasMap|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java#L320]. > If {{finish()}} at line 334 throws an IOException, then {{tOut, gzOut}} and > {{bOut}} remain open since the exception isn't caught locally, and there is > no way for any caller to close them. > I've submitted a pull request to fix it. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16031) Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap
Narges Shadab created HDFS-16031: Summary: Possible Resource Leak in org.apache.hadoop.hdfs.server.aliasmap#InMemoryAliasMap Key: HDFS-16031 URL: https://issues.apache.org/jira/browse/HDFS-16031 Project: Hadoop HDFS Issue Type: Bug Reporter: Narges Shadab We notice a possible resource leak in [getCompressedAliasMap|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java#L320]. If {{finish()}} at line 334 throws an IOException, then {{tOut, gzOut}} and {{bOut}} remain open since the exception isn't caught locally, and there is no way for any caller to close them. I've submitted a pull request to fix it. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
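The actual stream chain in InMemoryAliasMap goes through commons-compress tar/gzip streams; this self-contained sketch (plain java.util.zip, hypothetical method name) shows the try-with-resources pattern such a leak fix typically relies on — close() is guaranteed to run on every successfully opened stream even when finish() throws:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressCloseDemo {

    // With a plain sequence of assignments, an IOException from finish() would
    // leave the already-opened streams unclosed with no way for the caller to
    // close them. Declaring the stream in a try-with-resources header closes it
    // automatically, in reverse order of opening, on both success and failure.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bOut = new ByteArrayOutputStream(); // close is a no-op
        try (GZIPOutputStream gzOut = new GZIPOutputStream(bOut)) {
            gzOut.write(data);
            gzOut.finish(); // may throw; gzOut still gets closed by the try block
        }
        return bOut.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] compressed = gzip("hello".getBytes());
        // gzip output starts with the magic bytes 0x1f 0x8b
        System.out.println(compressed.length > 0);
    }
}
```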
[jira] [Updated] (HDFS-16030) OBSFileSystem should support Snapshot operations
[ https://issues.apache.org/jira/browse/HDFS-16030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bhavik Patel updated HDFS-16030: Description: OBSFileSystem should support Snapshot operations like other file systems. CC: [~zhongjun] [~iwasakims] [~pbacsko] was: OBSFileSystem should support Snapshot operations like other file systems. CC: [~zhongjun] > OBSFileSystem should support Snapshot operations > > > Key: HDFS-16030 > URL: https://issues.apache.org/jira/browse/HDFS-16030 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bhavik Patel >Priority: Major > > OBSFileSystem should support Snapshot operations like other file systems. > CC: [~zhongjun] [~iwasakims] [~pbacsko] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-15757) RBF: Improving Router Connection Management
[ https://issues.apache.org/jira/browse/HDFS-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri resolved HDFS-15757. Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > RBF: Improving Router Connection Management > --- > > Key: HDFS-15757 > URL: https://issues.apache.org/jira/browse/HDFS-15757 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: RBF_ Improving Router Connection Management_v2.pdf, RBF_ > Improving Router Connection Management_v3.pdf, RBF_ Router Connection > Management.pdf > > Time Spent: 4h 10m > Remaining Estimate: 0h > > We have seen a high number of connections from the Router to namenodes, leaving > namenodes unstable. > This ticket is trying to reduce connections through some changes. Please take > a look at the design and leave comments. > Thanks! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15757) RBF: Improving Router Connection Management
[ https://issues.apache.org/jira/browse/HDFS-15757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347805#comment-17347805 ] Íñigo Goiri commented on HDFS-15757: Thanks [~fengnanli] for the improvement and [~hexiaoqiao] for the review. Merged PR 2651 to trunk. > RBF: Improving Router Connection Management > --- > > Key: HDFS-15757 > URL: https://issues.apache.org/jira/browse/HDFS-15757 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Labels: pull-request-available > Attachments: RBF_ Improving Router Connection Management_v2.pdf, RBF_ > Improving Router Connection Management_v3.pdf, RBF_ Router Connection > Management.pdf > > Time Spent: 4h 10m > Remaining Estimate: 0h > > We have seen a high number of connections from the Router to namenodes, leaving > namenodes unstable. > This ticket is trying to reduce connections through some changes. Please take > a look at the design and leave comments. > Thanks! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15757) RBF: Improving Router Connection Management
[ https://issues.apache.org/jira/browse/HDFS-15757?focusedWorklogId=599368=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599368 ] ASF GitHub Bot logged work on HDFS-15757: - Author: ASF GitHub Bot Created on: 19/May/21 17:53 Start Date: 19/May/21 17:53 Worklog Time Spent: 10m Work Description: goiri merged pull request #2651: URL: https://github.com/apache/hadoop/pull/2651 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599368) Time Spent: 4h 10m (was: 4h) > RBF: Improving Router Connection Management > --- > > Key: HDFS-15757 > URL: https://issues.apache.org/jira/browse/HDFS-15757 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Labels: pull-request-available > Attachments: RBF_ Improving Router Connection Management_v2.pdf, RBF_ > Improving Router Connection Management_v3.pdf, RBF_ Router Connection > Management.pdf > > Time Spent: 4h 10m > Remaining Estimate: 0h > > We have seen a high number of connections from the Router to namenodes, leaving > namenodes unstable. > This ticket is trying to reduce connections through some changes. Please take > a look at the design and leave comments. > Thanks! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16030) OBSFileSystem should support Snapshot operations
Bhavik Patel created HDFS-16030: --- Summary: OBSFileSystem should support Snapshot operations Key: HDFS-16030 URL: https://issues.apache.org/jira/browse/HDFS-16030 Project: Hadoop HDFS Issue Type: Improvement Reporter: Bhavik Patel OBSFileSystem should support Snapshot operations like other file systems. CC: [~zhongjun] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log
[ https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347691#comment-17347691 ] Hadoop QA commented on HDFS-15915: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 41s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 4 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 52s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 19m 20s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 25m 14s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 3m 23s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 28s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/607/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 502 unchanged - 1 fixed = 503 total (was 503) {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 19s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/607/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 486 unchanged - 1 fixed = 487 total (was 487) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 204 unchanged - 1 fixed 
= 204 total (was 205) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} |
[jira] [Commented] (HDFS-15294) Federation balance tool
[ https://issues.apache.org/jira/browse/HDFS-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347640#comment-17347640 ] Jinglun commented on HDFS-15294: {quote}if the source directory is being written all the time, does it mean Federation balance will never exit? {quote} Hi [~zhengchenyu], nice comments. HDFS-15640 has introduced a new option: 'diffThreshold'. If the diff entries size is no greater than this threshold and the open files check is satisfied (no open files, or force close all open files), the fedBalance will go to the final round of distcp. By specifying the diff threshold we can make the federation balance job exit. Does it work for your situation? I'll take a review of HDFS-15750. > Federation balance tool > --- > > Key: HDFS-15294 > URL: https://issues.apache.org/jira/browse/HDFS-15294 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 3.4.0 > > Attachments: BalanceProcedureScheduler.png, HDFS-15294.001.patch, > HDFS-15294.002.patch, HDFS-15294.003.patch, HDFS-15294.003.reupload.patch, > HDFS-15294.004.patch, HDFS-15294.005.patch, HDFS-15294.006.patch, > HDFS-15294.007.patch, distcp-balance.pdf, distcp-balance.v2.pdf > > > This jira introduces a new HDFS federation balance tool to balance data > across different federation namespaces. It uses Distcp to copy data from the > source path to the target path. > The process is: > 1. Use distcp and snapshot diff to sync data between src and dst until they > are the same. > 2. Update mount table in Router if we specified RBF mode. > 3. Deal with src data, move to trash, delete or skip them. > The design of fedbalance tool comes from the discussion in HDFS-15087. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15294) Federation balance tool
[ https://issues.apache.org/jira/browse/HDFS-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347640#comment-17347640 ] Jinglun edited comment on HDFS-15294 at 5/19/21, 12:40 PM: --- {quote}if the source directory is being written all the time, does it mean Federation balance will never exit? {quote} Hi [~zhengchenyu], nice comments. HDFS-15640 has introduced an option: 'diffThreshold'. If the diff entries size is no greater than this threshold and the open files check is satisfied (no open files, or force close all open files), the fedBalance will go to the final round of distcp. By specifying the diff threshold we can make the federation balance job exit. Does it work for your situation? I'll take a review of HDFS-15750. was (Author: lijinglun): {quote}if the source directory is being written all the time, does it mean Federation balance will never exit? {quote} Hi [~zhengchenyu], nice comments. HDFS-15640 has introduced a new option: 'diffThreshold'. If the diff entries size is no greater than this threshold and the open files check is satisfied (no open files, or force close all open files), the fedBalance will go to the final round of distcp. By specifying the diff threshold we can make the federation balance job exit. Does it work for your situation? I'll take a review of HDFS-15750. > Federation balance tool > --- > > Key: HDFS-15294 > URL: https://issues.apache.org/jira/browse/HDFS-15294 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 3.4.0 > > Attachments: BalanceProcedureScheduler.png, HDFS-15294.001.patch, > HDFS-15294.002.patch, HDFS-15294.003.patch, HDFS-15294.003.reupload.patch, > HDFS-15294.004.patch, HDFS-15294.005.patch, HDFS-15294.006.patch, > HDFS-15294.007.patch, distcp-balance.pdf, distcp-balance.v2.pdf > > > This jira introduces a new HDFS federation balance tool to balance data > across different federation namespaces. 
It uses Distcp to copy data from the > source path to the target path. > The process is: > 1. Use distcp and snapshot diff to sync data between src and dst until they > are the same. > 2. Update mount table in Router if we specified RBF mode. > 3. Deal with src data, move to trash, delete or skip them. > The design of fedbalance tool comes from the discussion in HDFS-15087. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
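The effect of the 'diffThreshold' option discussed in this thread can be modeled with a small standalone sketch (the names and loop shape are illustrative assumptions, not the FedBalance implementation):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class FedBalanceLoopSketch {

    // Hypothetical model of the sync loop: each round performs a distcp based
    // on a snapshot diff and yields the number of remaining diff entries. With
    // diffThreshold = 0 a constantly written source may never converge; a
    // positive threshold lets the job move to the final round of distcp once
    // the diff is small enough.
    static int roundsUntilFinalDistcp(Iterator<Integer> diffSizes, int diffThreshold) {
        int rounds = 0;
        while (diffSizes.hasNext()) {
            rounds++;
            if (diffSizes.next() <= diffThreshold) {
                return rounds; // proceed to the final round of distcp
            }
        }
        return -1; // never converged within the observed rounds
    }

    public static void main(String[] args) {
        // Diff sizes observed on a busy source that never fully drains.
        List<Integer> busySource = Arrays.asList(120, 35, 9, 4, 3);
        System.out.println(roundsUntilFinalDistcp(busySource.iterator(), 0));  // -1
        System.out.println(roundsUntilFinalDistcp(busySource.iterator(), 10)); // 3
    }
}
```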
[jira] [Updated] (HDFS-16029) Divide by zero bug in InstrumentationService.java
[ https://issues.apache.org/jira/browse/HDFS-16029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiyuan GUO updated HDFS-16029: -- Description: In the file _lib/service/instrumentation/InstrumentationService.java,_ the method _Timer.getValues_ has the following [code|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L236]: {code:java} long[] getValues() { .. int limit = (full) ? size : (last + 1); .. values[AVG_TOTAL] = values[AVG_TOTAL] / limit; } {code} The variable _limit_ is used as a divisor. However, its value may be equal to _last + 1,_ which can be zero since _last_ is initialized to -1 in the [constructor|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L222]: {code:java} public Timer(int size) { ... last = -1; } {code} Thus, a divide by zero problem can happen. was: In the file _lib/service/instrumentation/InstrumentationService.java,_ the method _Timer.getValues_ has the following [code|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L236]: {code:java} long[] getValues() { .. int limit = (full) ? size : (last + 1); .. values[AVG_TOTAL] = values[AVG_TOTAL] / limit; } {code} The variable _limit_ is used as a divisor. However, its value may be equal to _last + 1,_ which can be zero since _last_ is initialized to -1 in the [constructor|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L222]: {code:java} public Timer(int size) { ... 
last = -1; } {code} Thus, a divide by zero problem can happen. > Divide by zero bug in InstrumentationService.java > - > > Key: HDFS-16029 > URL: https://issues.apache.org/jira/browse/HDFS-16029 > Project: Hadoop HDFS > Issue Type: Bug > Components: security >Reporter: Yiyuan GUO >Priority: Major > Labels: easy-fix, security > > In the file _lib/service/instrumentation/InstrumentationService.java,_ the > method > _Timer.getValues_ has the following > [code|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L236]: > {code:java} > long[] getValues() { > .. > int limit = (full) ? size : (last + 1); > .. > values[AVG_TOTAL] = values[AVG_TOTAL] / limit; > } > {code} > The variable _limit_ is used as a divisor. However, its value may be equal to > _last + 1,_ which can be zero since _last_ is initialized to -1 in the > [constructor|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L222]: > {code:java} > public Timer(int size) { > ... > last = -1; > } > {code} > Thus, a divide by zero problem can happen. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16029) Divide by zero bug in InstrumentationService.java
Yiyuan GUO created HDFS-16029: - Summary: Divide by zero bug in InstrumentationService.java Key: HDFS-16029 URL: https://issues.apache.org/jira/browse/HDFS-16029 Project: Hadoop HDFS Issue Type: Bug Components: security Reporter: Yiyuan GUO In the file _lib/service/instrumentation/InstrumentationService.java,_ the method _Timer.getValues_ has the following [code|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L236]: {code:java} long[] getValues() { .. int limit = (full) ? size : (last + 1); .. values[AVG_TOTAL] = values[AVG_TOTAL] / limit; } {code} The variable _limit_ is used as a divisor. However, its value may be equal to _last + 1,_ which can be zero since _last_ is initialized to -1 in the [constructor|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/instrumentation/InstrumentationService.java#L222]: {code:java} public Timer(int size) { ... last = -1; } {code} Thus, a divide by zero problem can happen. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
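A minimal standalone model of the reported failure mode, with one possible guard (an assumption for illustration, not necessarily the fix the project adopted):

```java
public class TimerAvgSketch {

    // Simplified model of Timer.getValues(): `last` starts at -1, so before any
    // sample is recorded limit == last + 1 == 0 and the average computation
    // divides by zero, throwing ArithmeticException.
    static long unsafeAverage(long total, int size, boolean full, int last) {
        int limit = full ? size : (last + 1);
        return total / limit; // throws ArithmeticException when last == -1
    }

    // One possible guard: clamp the divisor to at least 1, so an empty timer
    // reports an average of 0 instead of throwing.
    static long safeAverage(long total, int size, boolean full, int last) {
        int limit = full ? size : (last + 1);
        return total / Math.max(limit, 1);
    }

    public static void main(String[] args) {
        System.out.println(safeAverage(0, 10, false, -1)); // 0, no exception
        System.out.println(safeAverage(90, 10, false, 2)); // 90 / 3 = 30
    }
}
```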
[jira] [Work logged] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations
[ https://issues.apache.org/jira/browse/HDFS-16024?focusedWorklogId=599145=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599145 ] ASF GitHub Bot logged work on HDFS-16024: - Author: ASF GitHub Bot Created on: 19/May/21 11:08 Start Date: 19/May/21 11:08 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #3009: URL: https://github.com/apache/hadoop/pull/3009#issuecomment-843997354 @zhuxiangyi Thanks, I think it is reasonable, go ahead! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599145) Time Spent: 4.5h (was: 4h 20m) > RBF: Rename data to the Trash should be based on src locations > -- > > Key: HDFS-16024 > URL: https://issues.apache.org/jira/browse/HDFS-16024 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.4.0 >Reporter: zhu >Assignee: zhu >Priority: Major > Labels: pull-request-available > Time Spent: 4.5h > Remaining Estimate: 0h > > 1. When deleting data to the Trash without a mount point configured for the > Trash, the Router should recognize this and move the data to the Trash. > 2. When the user’s Trash is configured with a mount point and its NS differs > from the NS of the deleted directory, the Router should identify this and move the > data to the Trash of the current user of the src. > The same is true when using ViewFs mount points; I think we should be > consistent with it. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations
[ https://issues.apache.org/jira/browse/HDFS-16024?focusedWorklogId=599127=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599127 ] ASF GitHub Bot logged work on HDFS-16024: - Author: ASF GitHub Bot Created on: 19/May/21 10:39 Start Date: 19/May/21 10:39 Worklog Time Spent: 10m Work Description: zhuxiangyi commented on pull request #3009: URL: https://github.com/apache/hadoop/pull/3009#issuecomment-843976245 @ferhui Thank you very much for your discussion. The questions you raised have helped me a lot. The above scheme does not seem to be perfect. For example, the first problem in Jira cannot be solved, and neither can the creation of trash folders in other NS. **I think it would be better to do this:** Add processing logic for the Trash path in the MountTableResolver#getDestinationForPath method. If it is a Trash path, subtract baseTrashPath to get a new Path (/user/userA/.Trash/Current/home/userA/test -> /home/userA/test), resolve the new Path to get remoteLocations, then merge the NS of those remoteLocations with the original Path into a new remoteLocations to return. This can ensure that the NS used by all RPCs is the same as that of the src. Looking forward to your comment again. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599127) Time Spent: 4h 20m (was: 4h 10m) > RBF: Rename data to the Trash should be based on src locations > -- > > Key: HDFS-16024 > URL: https://issues.apache.org/jira/browse/HDFS-16024 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.4.0 >Reporter: zhu >Assignee: zhu >Priority: Major > Labels: pull-request-available > Time Spent: 4h 20m > Remaining Estimate: 0h > > 1. When deleting data to the Trash without a mount point configured for the > Trash, the Router should recognize this and move the data to the Trash. > 2. When the user’s Trash is configured with a mount point and its NS differs > from the NS of the deleted directory, the Router should identify this and move the > data to the Trash of the current user of the src. > The same is true when using ViewFs mount points; I think we should be > consistent with it. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
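The path rewrite proposed in the comment above can be sketched as plain string handling (the names here are hypothetical, not the actual MountTableResolver code): strip the per-user trash prefix so the remainder resolves against the mount table like a normal path, then pair the resolved namespace with the original trash path.

```java
public class TrashPathResolveSketch {

    // Hypothetical helper: remove the per-user trash prefix
    // (/user/<user>/.Trash/Current) so the remaining path can be resolved
    // against the mount table; a non-trash path is returned unchanged.
    static String stripTrashPrefix(String path, String user) {
        String prefix = "/user/" + user + "/.Trash/Current";
        return path.startsWith(prefix) ? path.substring(prefix.length()) : path;
    }

    public static void main(String[] args) {
        String trashPath = "/user/userA/.Trash/Current/home/userA/test";
        // Resolve this stripped path to pick the namespace, then keep the
        // ORIGINAL trashPath as the destination inside that namespace, so all
        // RPCs use the same NS as the src.
        System.out.println(stripTrashPrefix(trashPath, "userA")); // /home/userA/test
    }
}
```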
[jira] [Commented] (HDFS-15294) Federation balance tool
[ https://issues.apache.org/jira/browse/HDFS-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347493#comment-17347493 ] zhengchenyu commented on HDFS-15294: Thanks for this great work! But I have a question: if the source directory is being written all the time, does it mean Federation balance will never exit? In our cluster, we have a tool like this. We used "distcp diff snapshot" first, but gave it up. Then I used a multi-dest nameservice mount table, writing to the dst nameservice. Then I copied the source data to the dst. Then I have only one issue: keeping data consistent, so I submitted HDFS-15750. > Federation balance tool > --- > > Key: HDFS-15294 > URL: https://issues.apache.org/jira/browse/HDFS-15294 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 3.4.0 > > Attachments: BalanceProcedureScheduler.png, HDFS-15294.001.patch, > HDFS-15294.002.patch, HDFS-15294.003.patch, HDFS-15294.003.reupload.patch, > HDFS-15294.004.patch, HDFS-15294.005.patch, HDFS-15294.006.patch, > HDFS-15294.007.patch, distcp-balance.pdf, distcp-balance.v2.pdf > > > This jira introduces a new HDFS federation balance tool to balance data > across different federation namespaces. It uses Distcp to copy data from the > source path to the target path. > The process is: > 1. Use distcp and snapshot diff to sync data between src and dst until they > are the same. > 2. Update mount table in Router if we specified RBF mode. > 3. Deal with src data, move to trash, delete or skip them. > The design of fedbalance tool comes from the discussion in HDFS-15087. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347454#comment-17347454 ] Hadoop QA commented on HDFS-16028: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 16s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 32s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 23s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 43s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 20s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 23m 30s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 35s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 33s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 33s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 15s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 15s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 57s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s{color} | {color:green}{color} | {color:green} the patch passed
[jira] [Work logged] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?focusedWorklogId=599085=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599085 ] ASF GitHub Bot logged work on HDFS-16028: - Author: ASF GitHub Bot Created on: 19/May/21 09:22 Start Date: 19/May/21 09:22 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3023: URL: https://github.com/apache/hadoop/pull/3023#issuecomment-843918226 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 36s | | trunk passed | | +1 :green_heart: | compile | 20m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 11s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 5s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 55s | | the patch passed | | +1 :green_heart: | compile | 21m 10s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 21m 10s | | the patch passed | | +1 :green_heart: | compile | 20m 11s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 20m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 6s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 37s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 2m 29s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 16m 56s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. 
| | | | 180m 33s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3023 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 696b5a4cd06a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c8ad9ad73c298335ccda13ff603523e31d7fdb28 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/2/testReport/ | | Max. process+thread count | 3158 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Assigned] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
[ https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun reassigned HDFS-13671: -- Assignee: Haibin Huang > Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet > -- > > Key: HDFS-13671 > URL: https://issues.apache.org/jira/browse/HDFS-13671 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.0.3 >Reporter: Yiqun Lin >Assignee: Haibin Huang >Priority: Major > > NameNode hung when deleting large files/blocks. The stack info: > {code} > "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 > tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871) > at > 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > {code} > In the current deletion logic in the NameNode, there are mainly two steps: > * Collect the INodes and all blocks to be deleted, then delete the INodes. > * Remove the blocks chunk by chunk in a loop. > Actually the first step should be the more expensive operation and take > more time. However, we now always see the NN hang during the remove-block > operation. > Looking into this: we introduced the new structure {{FoldedTreeSet}} to get > better performance in handling FBRs/IBRs. But compared with the early > implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, > since it takes additional time to rebalance tree nodes. When there are many > blocks to be removed/deleted, this looks bad. > For the get-type operations in {{DatanodeStorageInfo}}, we only provide > {{getBlockIterator}} to return a block iterator, and no other get operation > for a specified block. Do we still need to use {{FoldedTreeSet}} in > {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits Get, not > Update. Maybe we can revert this to the early implementation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
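The argument in the issue above, that a balanced-tree structure pays a rebalancing cost on every removal which a hash-based structure does not, can be felt with stock JDK collections. {{FoldedTreeSet}} itself is internal to HDFS, so this sketch uses {{TreeSet}} (a red-black tree) only as a rough stand-in for a tree that rebalances on remove; the measured gap is suggestive, not a benchmark of the actual HDFS code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.TreeSet;

public class RemovalCostSketch {
    // Remove every element of `order` from `set`, returning elapsed nanoseconds.
    static long drain(Set<Long> set, List<Long> order) {
        long start = System.nanoTime();
        for (Long v : order) {
            set.remove(v);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        List<Long> values = new ArrayList<>();
        for (long i = 0; i < 500_000; i++) values.add(i);
        Collections.shuffle(values, new Random(42)); // fixed seed for repeatability

        Set<Long> tree = new TreeSet<>(values); // red-black tree: rebalances on remove
        Set<Long> hash = new HashSet<>(values); // expected O(1) remove, no rebalancing

        System.out.printf("tree drain: %d ms%n", drain(tree, values) / 1_000_000);
        System.out.printf("hash drain: %d ms%n", drain(hash, values) / 1_000_000);
    }
}
```

On a typical JVM the tree drain takes noticeably longer, which mirrors the report's observation that remove-heavy workloads suffer under a tree-backed block set.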
[jira] [Comment Edited] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347406#comment-17347406 ] Qi Zhu edited comment on HDFS-16028 at 5/19/21, 8:24 AM: - Thanks [~zhengzhuobinzzb] for the patch. 1. We'd better add an enable flag to trigger this, besides the null check. 2. We should also add the new conf to core-default.xml. 3. We should add some docs for the getTrashHome method, consistent with getHomeDirectory. cc [~hexiaoqiao] [~ayushtkn] [~weichiu] [~sodonnell] Could you help review this when you are free? was (Author: zhuqi): Thanks [~zhengzhuobinzzb] for the patch. We should also add the new conf to core-default.xml. And we should add some docs for the getTrashHome method, consistent with getHomeDirectory. cc [~hexiaoqiao] [~ayushtkn] [~weichiu] [~sodonnell] Could you help review this when you are free? > Add a configuration item for special trash dir > -- > > Key: HDFS-16028 > URL: https://issues.apache.org/jira/browse/HDFS-16028 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: zhuobin zheng >Assignee: zhuobin zheng >Priority: Minor > Labels: pull-request-available > Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch > > Time Spent: 20m > Remaining Estimate: 0h > > In some situations, we don't want to put the trash in the home dir, e.g.: > # Immediately reduce the quota occupation of the home directory > # In RBF: we want to make the directory mounting strategy of the trash > different from that of the home directory, and we don't want to mount it per user > This patch adds the option "fs.trash.dir" to specify the trash dir > (${fs.trash.dir}/$USER/.Trash) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347406#comment-17347406 ] Qi Zhu commented on HDFS-16028: --- Thanks [~zhengzhuobinzzb] for the patch. We should also add the new conf to core-default.xml. And we should add some docs for the getTrashHome method, consistent with getHomeDirectory. cc [~hexiaoqiao] [~ayushtkn] [~weichiu] [~sodonnell] Could you help review this when you are free? > Add a configuration item for special trash dir > -- > > Key: HDFS-16028 > URL: https://issues.apache.org/jira/browse/HDFS-16028 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: zhuobin zheng >Assignee: zhuobin zheng >Priority: Minor > Labels: pull-request-available > Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch > > Time Spent: 20m > Remaining Estimate: 0h > > In some situations, we don't want to put the trash in the home dir, e.g.: > # Immediately reduce the quota occupation of the home directory > # In RBF: we want to make the directory mounting strategy of the trash > different from that of the home directory, and we don't want to mount it per user > This patch adds the option "fs.trash.dir" to specify the trash dir > (${fs.trash.dir}/$USER/.Trash) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
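A sketch of what the proposed setting might look like in core-site.xml. This is hypothetical: the "fs.trash.dir" key exists only in the attached patches at this point, and the value "/trash" is an arbitrary example.

```xml
<!-- Proposed by HDFS-16028 (patch, not merged): when set, trash goes to
     ${fs.trash.dir}/$USER/.Trash instead of the user's home directory. -->
<property>
  <name>fs.trash.dir</name>
  <value>/trash</value>
</property>
```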
[jira] [Work logged] (HDFS-15757) RBF: Improving Router Connection Management
[ https://issues.apache.org/jira/browse/HDFS-15757?focusedWorklogId=599047=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599047 ] ASF GitHub Bot logged work on HDFS-15757: - Author: ASF GitHub Bot Created on: 19/May/21 07:29 Start Date: 19/May/21 07:29 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2651: URL: https://github.com/apache/hadoop/pull/2651#issuecomment-843825343 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 8s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 57s | | trunk passed | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 22s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 5s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 34s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 23m 33s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 107m 14s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2651/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2651 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux cc14225ae987 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 95fe6e9ac2db2da87dfbb78870bd57a3f67ebc16 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2651/2/testReport/ | | Max. process+thread count | 2546 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2651/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Work logged] (HDFS-15757) RBF: Improving Router Connection Management
[ https://issues.apache.org/jira/browse/HDFS-15757?focusedWorklogId=599042=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599042 ] ASF GitHub Bot logged work on HDFS-15757: - Author: ASF GitHub Bot Created on: 19/May/21 07:16 Start Date: 19/May/21 07:16 Worklog Time Spent: 10m Work Description: Hexiaoqiao commented on pull request #2651: URL: https://github.com/apache/hadoop/pull/2651#issuecomment-843815151 Thanks @fengnanli for your work. It is safe to check in for me now. @goiri What do you think? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599042) Time Spent: 3h 50m (was: 3h 40m) > RBF: Improving Router Connection Management > --- > > Key: HDFS-15757 > URL: https://issues.apache.org/jira/browse/HDFS-15757 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Labels: pull-request-available > Attachments: RBF_ Improving Router Connection Management_v2.pdf, RBF_ > Improving Router Connection Management_v3.pdf, RBF_ Router Connection > Management.pdf > > Time Spent: 3h 50m > Remaining Estimate: 0h > > We have seen a high number of connections from the Router to the namenodes, > leaving the namenodes unstable. > This ticket is trying to reduce connections through some changes. Please take > a look at the design and leave comments. > Thanks! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13671) Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
[ https://issues.apache.org/jira/browse/HDFS-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347348#comment-17347348 ] Haibin Huang commented on HDFS-13671: - Thanks [~ferhui] and [~LiJinglun] for involving me here; I will submit a patch later. > Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet > -- > > Key: HDFS-13671 > URL: https://issues.apache.org/jira/browse/HDFS-13671 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.0.3 >Reporter: Yiqun Lin >Priority: Major > > NameNode hung when deleting large files/blocks. The stack info: > {code} > "IPC Server handler 4 on 8020" #87 daemon prio=5 os_prio=0 > tid=0x7fb505b27800 nid=0x94c3 runnable [0x7fa861361000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.compare(FoldedTreeSet.java:474) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.removeAndGet(FoldedTreeSet.java:849) > at > org.apache.hadoop.hdfs.util.FoldedTreeSet.remove(FoldedTreeSet.java:911) > at > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo.removeBlock(DatanodeStorageInfo.java:252) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:194) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlocksMap.removeBlock(BlocksMap.java:108) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlockFromMap(BlockManager.java:3813) > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.removeBlock(BlockManager.java:3617) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:4270) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4244) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4180) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4164) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:871) > at > org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:311) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:625) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617) > {code} > In the current deletion logic in the NameNode, there are mainly two steps: > * Collect the INodes and all blocks to be deleted, then delete the INodes. > * Remove the blocks chunk by chunk in a loop. > Actually the first step should be the more expensive operation and take > more time. However, we now always see the NN hang during the remove-block > operation. > Looking into this: we introduced the new structure {{FoldedTreeSet}} to get > better performance in handling FBRs/IBRs. But compared with the early > implementation of the remove-block logic, {{FoldedTreeSet}} seems slower, > since it takes additional time to rebalance tree nodes. When there are many > blocks to be removed/deleted, this looks bad. > For the get-type operations in {{DatanodeStorageInfo}}, we only provide > {{getBlockIterator}} to return a block iterator, and no other get operation > for a specified block. Do we still need to use {{FoldedTreeSet}} in > {{DatanodeStorageInfo}}? As we know, {{FoldedTreeSet}} benefits Get, not > Update. Maybe we can revert this to the early implementation. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16026) Restore cross platform mkstemp
[ https://issues.apache.org/jira/browse/HDFS-16026?focusedWorklogId=599035=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599035 ] ASF GitHub Bot logged work on HDFS-16026: - Author: ASF GitHub Bot Created on: 19/May/21 06:50 Start Date: 19/May/21 06:50 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3014: URL: https://github.com/apache/hadoop/pull/3014#issuecomment-843797586 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| _ Prechecks _ | | -1 :x: | maven | 0m 3s | | ERROR: maven was not available. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/3014 | | Optional Tests | dupname asflicense codespell hadolint shellcheck shelldocs compile cc mvnsite javac unit golang | | uname | Linux asf912.gq1.ygridcore.net 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-3014/src/dev-support/bin/hadoop.sh | | git revision | trunk / 0e45c777a70157eda03bc8249382b0bad7b51401 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/17/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 599035) Time Spent: 3h 10m (was: 3h) > Restore cross platform mkstemp > -- > > Key: HDFS-16026 > URL: https://issues.apache.org/jira/browse/HDFS-16026 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log
[ https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347336#comment-17347336 ] Konstantin Shvachko commented on HDFS-15915: Updated the patch per [~virajith]'s suggestions. Thanks. # The default implementation of {{EditLogOutputStream.getLastJournalledTxId()}} returns {{INVALID_TXID}} rather than {{0}}. # Changed the {{beginTransaction()}} return type to void. ??This change forces the txid to be assigned when the operation takes place under the FSN lock.?? Exactly right. The advantage of this in the non-Observer case is verifiability and proper enforcement. When you merely rely on placing operations into the queue in the right order, you cannot verify that, e.g. by writing unit tests or setting asserts. And it is hard to detect a bug if there is one in this very multi-threaded code. With the patch the txId is generated when the operation is queued, so I could add asserts to ensure operations are queued and synced in the order they were applied on the active NN. > Race condition with async edits logging due to updating txId outside of the > namesystem log > -- > > Key: HDFS-15915 > URL: https://issues.apache.org/jira/browse/HDFS-15915 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-15915-01.patch, HDFS-15915-02.patch, > HDFS-15915-03.patch, HDFS-15915-04.patch, testMkdirsRace.patch > > > {{FSEditLogAsync}} creates an {{FSEditLogOp}} and populates its fields inside > {{FSNamesystem.writeLock}}. But one essential field, the transaction id of the > edits op, remains unset until the operation is scheduled for > syncing. At that time {{beginTransaction()}} will set the > {{FSEditLogOp.txid}} and increment the global transaction count. On a busy > NameNode this event can fall outside the write lock. > This causes problems for Observer reads. 
It also can potentially reshuffle > transactions, and the Standby will apply them in the wrong order. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
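The ordering argument in the comment above can be sketched with plain JDK concurrency: if the txid is assigned inside the same critical section that enqueues the op, queue order and txid order cannot diverge. The names below are illustrative only, not the actual {{FSEditLogAsync}} internals.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TxIdOrderSketch {
    static final class EditOp {
        long txid; // unset until the log assigns it
    }

    static final class EditLog {
        private long nextTxId = 1;
        final Queue<EditOp> queue = new ConcurrentLinkedQueue<>();

        // Assign the txid and enqueue atomically, so queue order == txid order.
        // (The race described above is the opposite: enqueue under the FSN
        // write lock, then assign the txid later at sync time.)
        synchronized void logEdit(EditOp op) {
            op.txid = nextTxId++;
            queue.add(op);
        }

        // True iff ops appear in the queue in ascending txid order with no gaps.
        boolean queueIsInTxIdOrder() {
            long expected = 1;
            for (EditOp op : queue) {
                if (op.txid != expected++) return false;
            }
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EditLog log = new EditLog();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> log.logEdit(new EditOp()));
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println(log.queueIsInTxIdOrder()); // prints true
    }
}
```

If the txid assignment were moved out of the synchronized block, the invariant checked by queueIsInTxIdOrder could fail under contention, which is the essence of the reported race.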
[jira] [Updated] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log
[ https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-15915: --- Attachment: HDFS-15915-04.patch > Race condition with async edits logging due to updating txId outside of the > namesystem log > -- > > Key: HDFS-15915 > URL: https://issues.apache.org/jira/browse/HDFS-15915 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs, namenode >Reporter: Konstantin Shvachko >Assignee: Konstantin Shvachko >Priority: Major > Attachments: HDFS-15915-01.patch, HDFS-15915-02.patch, > HDFS-15915-03.patch, HDFS-15915-04.patch, testMkdirsRace.patch > > > {{FSEditLogAsync}} creates an {{FSEditLogOp}} and populates its fields inside > {{FSNamesystem.writeLock}}. But one essential field, the transaction id of the > edits op, remains unset until the operation is scheduled for > syncing. At that time {{beginTransaction()}} will set the > {{FSEditLogOp.txid}} and increment the global transaction count. On a busy > NameNode this event can fall outside the write lock. > This causes problems for Observer reads. It also can potentially reshuffle > transactions, and the Standby will apply them in the wrong order. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347322#comment-17347322 ]

zhuobin zheng commented on HDFS-16028:
--

Submitted 002.patch to clean up some useless code that was accidentally added.

> Add a configuration item for special trash dir
> --
>
> Key: HDFS-16028
> URL: https://issues.apache.org/jira/browse/HDFS-16028
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: zhuobin zheng
> Assignee: zhuobin zheng
> Priority: Minor
> Labels: pull-request-available
> Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> In some situations, we don't want to put the trash in the home directory, for example:
> # To immediately reduce the quota occupation of the home directory
> # In RBF: we want the mount strategy for the trash directory to differ from that of the home directory, and we don't want to mount it per user
> This patch adds the option "fs.trash.dir" to specify the trash dir (${fs.trash.dir}/$USER/.Trash)
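The resolution the option describes can be sketched in a few lines of Java. This is a hypothetical illustration under stated assumptions: {{resolveTrashRoot}} and its parameters are made-up names, not the actual Hadoop API; the real logic lives in Hadoop's trash-policy classes.

```java
// Hypothetical sketch of the lookup HDFS-16028 describes: if fs.trash.dir
// is set, the per-user trash root becomes ${fs.trash.dir}/$USER/.Trash;
// otherwise it stays under the user's home directory as before.
public class TrashDirSketch {
    static final String TRASH = ".Trash";

    // fsTrashDir: value of the proposed fs.trash.dir option, or null if unset.
    static String resolveTrashRoot(String fsTrashDir, String homeDir, String user) {
        if (fsTrashDir != null && !fsTrashDir.isEmpty()) {
            return fsTrashDir + "/" + user + "/" + TRASH;
        }
        return homeDir + "/" + TRASH;  // previous behavior: trash in homedir
    }

    public static void main(String[] args) {
        // fs.trash.dir set: trash moves out of the home directory,
        // so trash no longer counts against the home-directory quota
        // and can be mounted separately in RBF.
        System.out.println(resolveTrashRoot("/trash", "/user/alice", "alice"));
        // fs.trash.dir unset: the default location is unchanged.
        System.out.println(resolveTrashRoot(null, "/user/alice", "alice"));
    }
}
```

Prints `/trash/alice/.Trash` in the first case and `/user/alice/.Trash` in the second, matching the ${fs.trash.dir}/$USER/.Trash layout described in the issue.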
[jira] [Updated] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuobin zheng updated HDFS-16028:
-
Attachment: HDFS-16028.002.patch
Status: Patch Available (was: Open)

> Add a configuration item for special trash dir
> --
>
> Key: HDFS-16028
> URL: https://issues.apache.org/jira/browse/HDFS-16028
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: zhuobin zheng
> Assignee: zhuobin zheng
> Priority: Minor
> Labels: pull-request-available
> Attachments: HDFS-16028.001.patch, HDFS-16028.002.patch
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> In some situations, we don't want to put the trash in the home directory, for example:
> # To immediately reduce the quota occupation of the home directory
> # In RBF: we want the mount strategy for the trash directory to differ from that of the home directory, and we don't want to mount it per user
> This patch adds the option "fs.trash.dir" to specify the trash dir (${fs.trash.dir}/$USER/.Trash)
[jira] [Updated] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuobin zheng updated HDFS-16028:
-
Status: Open (was: Patch Available)

> Add a configuration item for special trash dir
> --
>
> Key: HDFS-16028
> URL: https://issues.apache.org/jira/browse/HDFS-16028
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: zhuobin zheng
> Assignee: zhuobin zheng
> Priority: Minor
> Labels: pull-request-available
> Attachments: HDFS-16028.001.patch
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> In some situations, we don't want to put the trash in the home directory, for example:
> # To immediately reduce the quota occupation of the home directory
> # In RBF: we want the mount strategy for the trash directory to differ from that of the home directory, and we don't want to mount it per user
> This patch adds the option "fs.trash.dir" to specify the trash dir (${fs.trash.dir}/$USER/.Trash)
[jira] [Work logged] (HDFS-16026) Restore cross platform mkstemp
[ https://issues.apache.org/jira/browse/HDFS-16026?focusedWorklogId=599013=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-599013 ]

ASF GitHub Bot logged work on HDFS-16026:
-

Author: ASF GitHub Bot
Created on: 19/May/21 05:59
Start Date: 19/May/21 05:59
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-843770528

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 34m 11s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | shellcheck | 0m 0s | | Shellcheck was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +0 :ok: | hadolint | 0m 0s | | hadolint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 30m 30s | | trunk passed |
| -1 :x: | compile | 0m 22s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in trunk failed. |
| +1 :green_heart: | mvnsite | 0m 27s | | trunk passed |
| +1 :green_heart: | shadedclient | 44m 51s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 15s | | the patch passed |
| -1 :x: | compile | 0m 14s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| -1 :x: | cc | 0m 14s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| -1 :x: | golang | 0m 14s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| -1 :x: | javac | 0m 14s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 8s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 0m 18s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 94m 37s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3014 |
| Optional Tests | dupname asflicense codespell shellcheck shelldocs hadolint compile cc mvnsite javac unit golang |
| uname | Linux dc3db27ec7e7 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3c7383ae9defc146f75dc9e22bf38fee78f50586 |
| Default Java | Red Hat, Inc.-1.8.0_292-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/16/testReport/ |
| Max. process+thread count | 543 (vs. ulimit of 5500) |
| modules | C: