[jira] [Work logged] (HDFS-15923) RBF: Authentication failed when rename across sub clusters
[ https://issues.apache.org/jira/browse/HDFS-15923?focusedWorklogId=587648=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587648 ] ASF GitHub Bot logged work on HDFS-15923: - Author: ASF GitHub Bot Created on: 23/Apr/21 05:52 Start Date: 23/Apr/21 05:52 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2819: URL: https://github.com/apache/hadoop/pull/2819#issuecomment-825405745 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 3s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 2s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 17s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 36s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 39s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 22m 53s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 106m 23s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterWebHdfsMethods | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2819 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle | | uname | Linux fa8b8428199f 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f2e12ad5788c0724bea9c83f23d4323aaa990dfa | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results |
[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
[ https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=587624=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587624 ] ASF GitHub Bot logged work on HDFS-15790: - Author: ASF GitHub Bot Created on: 23/Apr/21 04:31 Start Date: 23/Apr/21 04:31 Worklog Time Spent: 10m Work Description: vinayakumarb commented on a change in pull request #2767: URL: https://github.com/apache/hadoop/pull/2767#discussion_r618930701 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java ## @@ -179,10 +281,41 @@ public void testProtoBufRpc2() throws Exception { MetricsRecordBuilder rpcDetailedMetrics = getMetrics(server.getRpcDetailedMetrics().name()); assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics); + +if (testWithLegacy) { + testProtobufLegacy(); +} + } + + private void testProtobufLegacy() + throws IOException, com.google.protobuf.ServiceException { +TestRpcService2Legacy client = getClientLegacy(); + +// Test ping method +client.ping2(null, TestProtosLegacy.EmptyRequestProto.newBuilder().build()); + +// Test echo method +TestProtosLegacy.EchoResponseProto echoResponse = client.echo2(null, +TestProtosLegacy.EchoRequestProto.newBuilder().setMessage("hello") +.build()); +assertThat(echoResponse.getMessage()).isEqualTo("hello"); + +// Ensure RPC metrics are updated +MetricsRecordBuilder rpcMetrics = getMetrics(server.getRpcMetrics().name()); +assertCounterGt("RpcQueueTimeNumOps", 0L, rpcMetrics); +assertCounterGt("RpcProcessingTimeNumOps", 0L, rpcMetrics); + +MetricsRecordBuilder rpcDetailedMetrics = +getMetrics(server.getRpcDetailedMetrics().name()); +assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics); } @Test (timeout=5000) public void testProtoBufRandomException() throws Exception { +if (testWithLegacy) { + //No test with legacy + return; +} Review comment: Thanks @aajisaka Will update soon. -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587624) Time Spent: 2h (was: 1h 50m) > Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist > -- > > Key: HDFS-15790 > URL: https://issues.apache.org/jira/browse/HDFS-15790 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Critical > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > Changing from Protobuf 2 to Protobuf 3 broke some stuff in Apache Hive > project. This was not an awesome thing to do between minor versions in > regards to backwards compatibility for downstream projects. > Additionally, these two frameworks are not drop-in replacements, they have > some differences. Also, Protobuf 2 is not deprecated or anything so let us > have both protocols available at the same time. In Hadoop 4.x Protobuf 2 > support can be dropped. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
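The coexistence strategy the HDFS-15790 description argues for — keeping the protobuf-2-based engine and the protobuf-3-based engine registered at the same time, selected per protocol — can be sketched generically. This is a hypothetical illustration, not the Hadoop implementation; the real classes are `ProtobufRpcEngine` (protobuf 2) and `ProtobufRpcEngine2` (shaded protobuf 3), and the class and method names below are stand-ins.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of letting two serialization engines co-exist: each
// protocol registers the engine it uses, and the server dispatches
// per-protocol instead of assuming one global engine. Names are
// illustrative, not the actual Hadoop API.
public class EngineRegistrySketch {
    interface RpcEngine { String decode(byte[] wire); }

    static class LegacyEngine implements RpcEngine {   // protobuf 2 stand-in
        public String decode(byte[] wire) { return "legacy:" + new String(wire); }
    }
    static class Engine2 implements RpcEngine {        // protobuf 3 stand-in
        public String decode(byte[] wire) { return "v2:" + new String(wire); }
    }

    private final Map<String, RpcEngine> enginePerProtocol = new ConcurrentHashMap<>();

    void register(String protocol, RpcEngine engine) {
        enginePerProtocol.put(protocol, engine);
    }

    // Route each request to the engine its protocol registered with.
    String dispatch(String protocol, byte[] wire) {
        RpcEngine e = enginePerProtocol.get(protocol);
        if (e == null) throw new IllegalArgumentException("no engine for " + protocol);
        return e.decode(wire);
    }
}
```

This mirrors why the test above exercises both `TestRpcService2` and `TestRpcService2Legacy` against the same server: both engines stay registered until protobuf 2 support is dropped.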
[jira] [Updated] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-15989: Fix Version/s: 3.2.3 3.1.5 3.3.1 > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Time Spent: 7h 40m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587623=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587623 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 04:27 Start Date: 23/Apr/21 04:27 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2944: URL: https://github.com/apache/hadoop/pull/2944#issuecomment-825376607 Merged. Thanks for your contribution, @virajjasani! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587623) Time Spent: 7h 40m (was: 7.5h) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7h 40m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587622=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587622 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 04:27 Start Date: 23/Apr/21 04:27 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #2944: URL: https://github.com/apache/hadoop/pull/2944 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587622) Time Spent: 7.5h (was: 7h 20m) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7.5h > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587621&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587621 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 04:23 Start Date: 23/Apr/21 04:23 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2944: URL: https://github.com/apache/hadoop/pull/2944#issuecomment-825375289 Somehow Jenkins didn't run. But the latest commit only deletes the unused import flagged in the last Jenkins report, so I think it is valid. I confirmed that `mvn clean install -DskipTests` succeeded in my local environment. +1. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587621) Time Spent: 7h 20m (was: 7h 10m) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7h 20m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15393) Review of PendingReconstructionBlocks
[ https://issues.apache.org/jira/browse/HDFS-15393?focusedWorklogId=587619=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587619 ] ASF GitHub Bot logged work on HDFS-15393: - Author: ASF GitHub Bot Created on: 23/Apr/21 04:09 Start Date: 23/Apr/21 04:09 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2055: URL: https://github.com/apache/hadoop/pull/2055#issuecomment-825371322 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 23s | | https://github.com/apache/hadoop/pull/2055 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/2055 | | JIRA Issue | HDFS-15393 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2055/1/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587619) Remaining Estimate: 0h Time Spent: 10m > Review of PendingReconstructionBlocks > - > > Key: HDFS-15393 > URL: https://issues.apache.org/jira/browse/HDFS-15393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > I started looking at this class based on [HDFS-15351]. > * Uses {{java.sql.Time}} unnecessarily. Confusing since Java ships with time > formatters out of the box in JDK 8. 
I believe this will cause issues later > when trying to upgrade to JDK 9+ since SQL is a different module in Java. > * Remove code where appropriate > * Use Java Concurrent library for higher concurrent access to underlying map -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
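The two concrete review points above — replacing `java.sql.Time` with the JDK 8 `java.time` formatters, and using a concurrent map for higher concurrency — can be sketched as follows. This is a minimal, hypothetical stand-in for the `PendingReconstructionBlocks` internals, not the actual Hadoop code; the class, field, and method names are illustrative.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the two review points: java.time instead of
// java.sql.Time for timestamp formatting (no java.sql module dependency
// on JDK 9+), and a ConcurrentHashMap so readers do not need to
// synchronize on the whole pending-blocks map.
public class PendingBlocksSketch {
    // java.time replacement for "new java.sql.Time(millis).toString()":
    private static final DateTimeFormatter TIME_FORMAT =
        DateTimeFormatter.ofPattern("HH:mm:ss").withZone(ZoneId.systemDefault());

    static String formatTimestamp(long millis) {
        return TIME_FORMAT.format(Instant.ofEpochMilli(millis));
    }

    // Concurrent map keyed by block id; simple get/put/remove need no
    // external locking.
    private final Map<Long, Long> pendingSince = new ConcurrentHashMap<>();

    void markPending(long blockId, long nowMillis) {
        pendingSince.putIfAbsent(blockId, nowMillis);
    }

    boolean isTimedOut(long blockId, long nowMillis, long timeoutMillis) {
        Long since = pendingSince.get(blockId);
        return since != null && nowMillis - since > timeoutMillis;
    }
}
```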
[jira] [Commented] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop
[ https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17329967#comment-17329967 ] Mingliang Liu commented on HDFS-15624: -- Thanks [~weichiu]. I'm fine with your proposal. I hope the 3.3.1 release can be unblocked soon. > Fix the SetQuotaByStorageTypeOp problem after updating hadoop > --- > > Key: HDFS-15624 > URL: https://issues.apache.org/jira/browse/HDFS-15624 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.4.0 >Reporter: YaYun Wang >Assignee: huangtianhua >Priority: Major > Labels: pull-request-available, release-blocker > Fix For: 3.4.0 > > Time Spent: 9h 40m > Remaining Estimate: 0h > > HDFS-15025 adds a new storage Type NVDIMM, changes the ordinal() of the enum > of StorageType. And, setting the quota by storageType depends on the > ordinal(), therefore, it may cause the setting of quota to be invalid after > upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
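The hazard HDFS-15624 describes — persisting `Enum.ordinal()` values while new constants can be inserted — is easy to reproduce in isolation. The enums below are hypothetical miniatures illustrating the idea of `StorageType` gaining `NVDIMM`, not the actual Hadoop class or its real constant ordering.

```java
// Hypothetical illustration of why ordinal()-based persistence breaks when
// a new enum constant is inserted (as NVDIMM was inserted into
// StorageType): a quota saved under the old enum's ordinal resolves to a
// different type under the new enum. Persisting name() instead is stable.
public class OrdinalPitfall {
    // "Old" layout: RAM_DISK=0, SSD=1, DISK=2, ARCHIVE=3
    enum OldStorageType { RAM_DISK, SSD, DISK, ARCHIVE }

    // "New" layout with NVDIMM inserted: ordinals of later constants shift.
    enum NewStorageType { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE }

    // Resolving a persisted ordinal against the new enum: fragile.
    static String resolveByOrdinal(int savedOrdinal) {
        return NewStorageType.values()[savedOrdinal].name();
    }

    // Resolving a persisted name against the new enum: stable.
    static String resolveByName(String savedName) {
        return NewStorageType.valueOf(savedName).name();
    }
}
```

This is also why appending new constants at the end of an enum (rather than inserting them) is the safer evolution for any on-disk format that records ordinals.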
[jira] [Updated] (HDFS-15393) Review of PendingReconstructionBlocks
[ https://issues.apache.org/jira/browse/HDFS-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-15393: -- Labels: pull-request-available (was: ) > Review of PendingReconstructionBlocks > - > > Key: HDFS-15393 > URL: https://issues.apache.org/jira/browse/HDFS-15393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > I started looking at this class based on [HDFS-15351]. > * Uses {{java.sql.Time}} unnecessarily. Confusing since Java ships with time > formatters out of the box in JDK 8. I believe this will cause issues later > when trying to upgrade to JDK 9+ since SQL is a different module in Java. > * Remove code where appropriate > * Use Java Concurrent library for higher concurrent access to underlying map -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15393) Review of PendingReconstructionBlocks
[ https://issues.apache.org/jira/browse/HDFS-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329966#comment-17329966 ] Hadoop QA commented on HDFS-15393: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 23s{color} | | {color:red} https://github.com/apache/hadoop/pull/2055 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | GITHUB PR | https://github.com/apache/hadoop/pull/2055 | | JIRA Issue | HDFS-15393 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2055/1/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. > Review of PendingReconstructionBlocks > - > > Key: HDFS-15393 > URL: https://issues.apache.org/jira/browse/HDFS-15393 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > > I started looking at this class based on [HDFS-15351]. > * Uses {{java.sql.Time}} unnecessarily. Confusing since Java ships with time > formatters out of the box in JDK 8. I believe this will cause issues later > when trying to upgrade to JDK 9+ since SQL is a different module in Java. > * Remove code where appropriate > * Use Java Concurrent library for higher concurrent access to underlying map -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15967) Improve the log for Short Circuit Local Reads
[ https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17329964#comment-17329964 ] Bhavik Patel commented on HDFS-15967: - [~tasanuma] Jenkins ran successfully. Can you please commit this to trunk and the 3.3.1 branch? > Improve the log for Short Circuit Local Reads > - > > Key: HDFS-15967 > URL: https://issues.apache.org/jira/browse/HDFS-15967 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bhavik Patel >Assignee: Bhavik Patel >Priority: Minor > Attachments: HDFS-15967.001.patch, HDFS-15967.002.patch > > > Improve the log for Short Circuit Local Reads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587618=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587618 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 03:58 Start Date: 23/Apr/21 03:58 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #2943: URL: https://github.com/apache/hadoop/pull/2943 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587618) Time Spent: 7h 10m (was: 7h) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7h 10m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15961) standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=587617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587617 ] ASF GitHub Bot logged work on HDFS-15961: - Author: ASF GitHub Bot Created on: 23/Apr/21 03:58 Start Date: 23/Apr/21 03:58 Worklog Time Spent: 10m Work Description: bshashikant commented on a change in pull request #2881: URL: https://github.com/apache/hadoop/pull/2881#discussion_r618922109 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java ## @@ -98,6 +98,7 @@ public void setupCluster() throws Exception { conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE); conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1); +conf.setBoolean("dfs.namenode.snapshot.trashroot.enabled", false); Review comment: It is intended to be a hidden config, similar to the ordered snapshot deletion config. I would prefer to keep it as it is now. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587617) Time Spent: 3h (was: 2h 50m) > standby namenode failed to start ordered snapshot deletion is enabled while > having snapshottable directories > > > Key: HDFS-15961 > URL: https://issues.apache.org/jira/browse/HDFS-15961 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h > Remaining Estimate: 0h > > {code:java} > 2021-04-08 12:07:25,398 INFO > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new > storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866 > 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with > status 1: Could not provision Trash directory for existing snapshottable > directories. Exiting Namenode. > 2021-04-08 12:07:55,596 INFO > org.apache.ranger.audit.provider.AuditProviderFactory: ==> > JVMShutdownHook.run() > 2021-04-08 12:07:55,596 INFO > org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: > Signalling async audit cleanup to start. > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
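Since `dfs.namenode.snapshot.trashroot.enabled` is intended as a hidden config (the test above sets it programmatically via `conf.setBoolean`), a cluster or test that needs to opt out would set it explicitly. A minimal hdfs-site.xml fragment, assuming the standard Hadoop configuration file format:

```xml
<!-- Opt out of Trash provisioning for snapshottable directories.
     dfs.namenode.snapshot.trashroot.enabled is a hidden config: it is not
     published in hdfs-default.xml, matching the review comment above. -->
<property>
  <name>dfs.namenode.snapshot.trashroot.enabled</name>
  <value>false</value>
</property>
```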
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587615=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587615 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 03:57 Start Date: 23/Apr/21 03:57 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #2942: URL: https://github.com/apache/hadoop/pull/2942 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587615) Time Spent: 6h 50m (was: 6h 40m) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 6h 50m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587616=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587616 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 03:57 Start Date: 23/Apr/21 03:57 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2942: URL: https://github.com/apache/hadoop/pull/2942#issuecomment-825368238 Merged. Thanks, @virajjasani! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587616) Time Spent: 7h (was: 6h 50m) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 7h > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop
[ https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reopened HDFS-15624: > Fix the SetQuotaByStorageTypeOp problem after updating hadoop > --- > > Key: HDFS-15624 > URL: https://issues.apache.org/jira/browse/HDFS-15624 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.4.0 >Reporter: YaYun Wang >Assignee: huangtianhua >Priority: Major > Labels: pull-request-available, release-blocker > Fix For: 3.4.0 > > Time Spent: 9h 40m > Remaining Estimate: 0h > > HDFS-15025 adds a new storage Type NVDIMM, changes the ordinal() of the enum > of StorageType. And, setting the quota by storageType depends on the > ordinal(), therefore, it may cause the setting of quota to be invalid after > upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop
[ https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17329956#comment-17329956 ] Wei-Chiu Chuang commented on HDFS-15624: Hi, it looks like we have a problem here. I'm going to reopen this issue. For details, please see my discussion thread: https://lists.apache.org/thread.html/rbdd58fda1b528c345713f902c6a659fa1fc8671cbf67f59fc31e25ee%40%3Chdfs-dev.hadoop.apache.org%3E > Fix the SetQuotaByStorageTypeOp problem after updating hadoop > --- > > Key: HDFS-15624 > URL: https://issues.apache.org/jira/browse/HDFS-15624 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 3.4.0 >Reporter: YaYun Wang >Assignee: huangtianhua >Priority: Major > Labels: pull-request-available, release-blocker > Fix For: 3.4.0 > > Time Spent: 9h 40m > Remaining Estimate: 0h > > HDFS-15025 adds a new storage Type NVDIMM, changes the ordinal() of the enum > of StorageType. And, setting the quota by storageType depends on the > ordinal(), therefore, it may cause the setting of quota to be invalid after > upgrade. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15566) NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0
[ https://issues.apache.org/jira/browse/HDFS-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329955#comment-17329955 ] Wei-Chiu Chuang commented on HDFS-15566: HDFS-15624 blocks this jira. We need to find a solution. > NN restart fails after RollingUpgrade from 3.1.3/3.2.1 to 3.3.0 > > > Key: HDFS-15566 > URL: https://issues.apache.org/jira/browse/HDFS-15566 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula >Priority: Blocker > Attachments: HDFS-15566-001.patch, HDFS-15566-002.patch, > HDFS-15566-003.patch > > > * After rollingUpgrade NN from 3.1.3/3.2.1 to 3.3.0, if the NN is restarted, > it fails while replaying edit logs. > * HDFS-14922, HDFS-14924, and HDFS-15054 introduced the *modification time* > bits to the editLog transactions. > * When NN is restarted and the edit logs are replayed, the NN reads the old > layout version from the editLog file. When parsing the transactions, it > assumes that the transactions are also from the previous layout and hence > skips parsing the *modification time* bits. > * This cascades into reading the wrong set of bits for other fields and > leads to NN shutting down. > {noformat} > 2020-09-07 19:34:42,085 | DEBUG | main | Stopping client | Client.java:1361 > 2020-09-07 19:34:42,087 | ERROR | main | Failed to start namenode. 
| > NameNode.java:1751 > java.lang.IllegalArgumentException > at com.google.common.base.Preconditions.checkArgument(Preconditions.java:72) > at org.apache.hadoop.ipc.ClientId.toString(ClientId.java:56) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendRpcIdsToString(FSEditLogOp.java:318) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$700(FSEditLogOp.java:153) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$DeleteSnapshotOp.toString(FSEditLogOp.java:3606) > at java.lang.String.valueOf(String.java:2994) > at java.lang.StringBuilder.append(StringBuilder.java:131) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:305) > at > org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:188) > at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:932) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:779) > at > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:337) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1136) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:742) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:654) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:716) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:959) > at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:932) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1674) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1744){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
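The parsing failure in HDFS-15566 can be illustrated with a minimal sketch. This uses a hypothetical two-field record, not the real FSEditLogOp wire format: if the reader keys the presence of the modification-time field off the file's declared layout version, a transaction written by a newer writer into an old-layout file is read from the wrong offsets.

```java
import java.nio.ByteBuffer;

public class LayoutSkewSketch {
    // Hypothetical layout version numbers (more negative = newer, as in HDFS).
    static final int OLD_LAYOUT = -64;
    static final int NEW_LAYOUT = -66;

    // A new-version writer always emits [mtime:long][length:int].
    static ByteBuffer write(long mtime, int length) {
        ByteBuffer b = ByteBuffer.allocate(Long.BYTES + Integer.BYTES);
        b.putLong(mtime);
        b.putInt(length);
        b.flip();
        return b;
    }

    // The reader decides from the file's declared layout whether mtime exists.
    static int readLength(ByteBuffer b, int declaredLayout) {
        if (declaredLayout <= NEW_LAYOUT) {
            b.getLong(); // consume mtime only when the layout says it is there
        }
        return b.getInt();
    }

    public static void main(String[] args) {
        ByteBuffer txn = write(1599500082L, 7);
        // After a rolling upgrade the file header still advertises the old
        // layout, so the reader skips the mtime check and reads the high half
        // of the mtime long as the length field.
        System.out.println(readLength(txn, OLD_LAYOUT)); // 0, not 7
    }
}
```

Once one field is misread, every later field is decoded from shifted offsets, which matches the cascading IllegalArgumentException seen in the stack trace above.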
[jira] [Updated] (HDFS-15974) RBF: Unable to display the datanode UI of the router
[ https://issues.apache.org/jira/browse/HDFS-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-15974: Fix Version/s: 3.3.1 > RBF: Unable to display the datanode UI of the router > > > Key: HDFS-15974 > URL: https://issues.apache.org/jira/browse/HDFS-15974 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf, ui >Affects Versions: 3.4.0 >Reporter: zhu >Assignee: zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png > > Time Spent: 1.5h > Remaining Estimate: 0h > > Clicking the Datanodes tag on the Router UI does not respond. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15982) Deleted data on the Web UI must be saved to the trash
[ https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329927#comment-17329927 ] Mingliang Liu commented on HDFS-15982: -- Let's rename the JIRA subject and also the PR by replacing "Web UI" with "HTTP API". The HDFS "Web UI" is usually the web portal that one can browse for information purposes. This JIRA changes the "RESTful HTTP API", not the Web UI. My only concern is that the "Trash" concept is not part of the FileSystem DELETE API. Changing this behavior may break existing applications that assume storage will be released. It seems counter-intuitive that one can skipTrash from the command line but cannot via WebHDFS. Since keeping data in Trash for a while is usually a good idea, I'm fine with this feature proposal. Ideally we can expose a -skipTrash parameter so users can choose. Meanwhile the default value should be true for all existing released branches (<=3.3) to keep it backward-compatible. We can change the default from 3.4 onward to enable it by default. While exploring, I found [[HDFS-14320]] is about the same idea with a similar implementation. Do you guys want to post there and collaborate to get this in? I did not look into that closely. CC: [~vjasani] [~bpatel] > Deleted data on the Web UI must be saved to the trash > -- > > Key: HDFS-15982 > URL: https://issues.apache.org/jira/browse/HDFS-15982 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs >Reporter: Bhavik Patel >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > If we delete the data from the Web UI, it should first be moved to the > configured/default Trash directory and, after the trash interval time, > removed. 
Currently, data is removed from the system directly. [This > behavior should be the same as the CLI command.] > > This can be helpful when the user accidentally deletes data from the Web UI. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
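The proposed behavior — move to a trash directory instead of unlinking — can be sketched with plain java.nio. This is a hypothetical helper, not the HDFS Trash implementation: a delete becomes a rename under a per-user trash root, recoverable until a background expunger removes it after the trash interval.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TrashDeleteSketch {
    // Instead of removing the path, move it under the trash root so the user
    // can recover it; a separate expunger would delete expired entries later.
    static Path moveToTrash(Path file, Path trashRoot) throws IOException {
        Files.createDirectories(trashRoot);
        Path target = trashRoot.resolve(file.getFileName());
        return Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("trash-demo");
        Path f = Files.createFile(dir.resolve("data.txt"));
        Path trashed = moveToTrash(f, dir.resolve(".Trash/Current"));
        System.out.println(Files.exists(f));       // false: gone from original spot
        System.out.println(Files.exists(trashed)); // true: recoverable from trash
    }
}
```

A rename within the same namespace is cheap, which is why trash semantics do not release storage immediately — exactly the compatibility concern raised in the comment above.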
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587595=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587595 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 23/Apr/21 01:40 Start Date: 23/Apr/21 01:40 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2942: URL: https://github.com/apache/hadoop/pull/2942#issuecomment-825322491 I confirmed TestBalancer and TestBalancerLongRunningTasks succeeded in my local environment. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587595) Time Spent: 6h 40m (was: 6.5h) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 6h 40m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15358) RBF: Unify router datanode UI with namenode datanode UI
[ https://issues.apache.org/jira/browse/HDFS-15358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329918#comment-17329918 ] Takanobu Asanuma commented on HDFS-15358: - I'd like to backport this to branch-3.3 if there's no objection. > RBF: Unify router datanode UI with namenode datanode UI > --- > > Key: HDFS-15358 > URL: https://issues.apache.org/jira/browse/HDFS-15358 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Fix For: 3.4.0 > > Attachments: HDFS-15358-01.patch, HDFS-15358-02.patch, > RBF-After-01.png, RBF-After-02.png, RBF-After-03.png, RBF-Before.png > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15974) RBF: Unable to display the datanode UI of the router
[ https://issues.apache.org/jira/browse/HDFS-15974?focusedWorklogId=587591=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587591 ] ASF GitHub Bot logged work on HDFS-15974: - Author: ASF GitHub Bot Created on: 23/Apr/21 01:33 Start Date: 23/Apr/21 01:33 Worklog Time Spent: 10m Work Description: zhuxiangyi commented on pull request #2915: URL: https://github.com/apache/hadoop/pull/2915#issuecomment-825320338 > Merged. Thanks for your contribution, @zhuxiangyi. Thanks for your reviews, @goiri. Thanks a lot, @goiri @tasanuma . -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587591) Time Spent: 1.5h (was: 1h 20m) > RBF: Unable to display the datanode UI of the router > > > Key: HDFS-15974 > URL: https://issues.apache.org/jira/browse/HDFS-15974 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf, ui >Affects Versions: 3.4.0 >Reporter: zhu >Assignee: zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png > > Time Spent: 1.5h > Remaining Estimate: 0h > > Clicking the Datanodes tag on the Router UI does not respond. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15974) RBF: Unable to display the datanode UI of the router
[ https://issues.apache.org/jira/browse/HDFS-15974?focusedWorklogId=587586=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587586 ] ASF GitHub Bot logged work on HDFS-15974: - Author: ASF GitHub Bot Created on: 23/Apr/21 01:20 Start Date: 23/Apr/21 01:20 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2915: URL: https://github.com/apache/hadoop/pull/2915#issuecomment-825316164 Merged. Thanks for your contribution, @zhuxiangyi. Thanks for your reviews, @goiri. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587586) Time Spent: 1h 20m (was: 1h 10m) > RBF: Unable to display the datanode UI of the router > > > Key: HDFS-15974 > URL: https://issues.apache.org/jira/browse/HDFS-15974 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf, ui >Affects Versions: 3.4.0 >Reporter: zhu >Priority: Major > Labels: pull-request-available > Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Clicking the Datanodes tag on the Router UI does not respond. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-15974) RBF: Unable to display the datanode UI of the router
[ https://issues.apache.org/jira/browse/HDFS-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma reassigned HDFS-15974: --- Assignee: zhu > RBF: Unable to display the datanode UI of the router > > > Key: HDFS-15974 > URL: https://issues.apache.org/jira/browse/HDFS-15974 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf, ui >Affects Versions: 3.4.0 >Reporter: zhu >Assignee: zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Clicking the Datanodes tag on the Router UI does not respond. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-15974) RBF: Unable to display the datanode UI of the router
[ https://issues.apache.org/jira/browse/HDFS-15974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma resolved HDFS-15974. - Fix Version/s: 3.4.0 Resolution: Fixed > RBF: Unable to display the datanode UI of the router > > > Key: HDFS-15974 > URL: https://issues.apache.org/jira/browse/HDFS-15974 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf, ui >Affects Versions: 3.4.0 >Reporter: zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Clicking the Datanodes tag on the Router UI does not respond. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15974) RBF: Unable to display the datanode UI of the router
[ https://issues.apache.org/jira/browse/HDFS-15974?focusedWorklogId=587585=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587585 ] ASF GitHub Bot logged work on HDFS-15974: - Author: ASF GitHub Bot Created on: 23/Apr/21 01:19 Start Date: 23/Apr/21 01:19 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #2915: URL: https://github.com/apache/hadoop/pull/2915 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587585) Time Spent: 1h 10m (was: 1h) > RBF: Unable to display the datanode UI of the router > > > Key: HDFS-15974 > URL: https://issues.apache.org/jira/browse/HDFS-15974 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf, ui >Affects Versions: 3.4.0 >Reporter: zhu >Priority: Major > Labels: pull-request-available > Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Clicking the Datanodes tag on the Router UI does not respond. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587537=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587537 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 22/Apr/21 21:34 Start Date: 22/Apr/21 21:34 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2942: URL: https://github.com/apache/hadoop/pull/2942#issuecomment-825200562 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 21m 2s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 37s | | branch-3.3 passed | | +1 :green_heart: | compile | 1m 15s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 0m 48s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 1m 21s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 1m 25s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 3m 8s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 17m 58s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed | | +1 :green_heart: | javac | 1m 8s | | hadoop-hdfs-project_hadoop-hdfs generated 0 new + 537 unchanged - 1 fixed = 537 total (was 538) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 41s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 149 unchanged - 49 fixed = 156 total (was 198) | | +1 :green_heart: | mvnsite | 1m 17s | | the patch passed | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 188m 20s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 294m 18s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithValidator | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.server.balancer.TestBalancer | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2942 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 3df541248b1b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 974bbc8582980e31fa3155db208b47f48bf34fb4 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/2/testReport/ | | Max. process+thread count | 3560 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/2/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587517=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587517 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 22/Apr/21 20:54 Start Date: 22/Apr/21 20:54 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2943: URL: https://github.com/apache/hadoop/pull/2943#issuecomment-825178660 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.2 Compile Tests _ | | +1 :green_heart: | mvninstall | 28m 21s | | branch-3.2 passed | | +1 :green_heart: | compile | 1m 2s | | branch-3.2 passed | | +1 :green_heart: | checkstyle | 0m 47s | | branch-3.2 passed | | +1 :green_heart: | mvnsite | 1m 14s | | branch-3.2 passed | | +1 :green_heart: | javadoc | 1m 1s | | branch-3.2 passed | | +1 :green_heart: | spotbugs | 2m 44s | | branch-3.2 passed | | +1 :green_heart: | shadedclient | 14m 48s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 5s | | the patch passed | | +1 :green_heart: | compile | 0m 57s | | the patch passed | | +1 :green_heart: | javac | 0m 57s | | hadoop-hdfs-project_hadoop-hdfs generated 0 new + 550 unchanged - 1 fixed = 550 total (was 551) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 40s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 149 unchanged - 49 fixed = 156 total (was 198) | | +1 :green_heart: | mvnsite | 1m 5s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed | | +1 :green_heart: | spotbugs | 2m 49s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 179m 49s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. 
| | | | 252m 32s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedInputStream | | | hadoop.hdfs.TestSetrepIncreasing | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2943 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux f042f47694a5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.2 / 4365e97bd1d04df67fd8191c82573940a338ba94 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/2/testReport/ | | Max. process+thread count | 3107 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/2/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated
[jira] [Commented] (HDFS-15967) Improve the log for Short Circuit Local Reads
[ https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329331#comment-17329331 ] Hadoop QA commented on HDFS-15967: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 30s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 59s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 10s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 24m 6s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 3m 35s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 44s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | |
[jira] [Commented] (HDFS-15994) Deletion should sleep some time, when there are too many pending deletion blocks.
[ https://issues.apache.org/jira/browse/HDFS-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329295#comment-17329295 ] Hadoop QA commented on HDFS-15994: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 14s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
_ trunk Compile Tests _
| +1 | mvninstall | 22m 42s | | trunk passed |
| +1 | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | checkstyle | 1m 9s | | trunk passed |
| +1 | mvnsite | 1m 19s | | trunk passed |
| +1 | shadedclient | 18m 23s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 59s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| 0 | spotbugs | 24m 23s | | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 3m 35s | | trunk passed |
_ Patch Compile Tests _
| +1 | mvninstall | 1m 24s | | the patch passed |
| +1 | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javac | 1m 19s | | the patch passed |
| +1 | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | javac | 1m 10s | | the patch passed |
| -0 | checkstyle | 1m 3s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/581/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 626 unchanged - 0 fixed = 627 total (was 626) |
| +1 | mvnsite | 1m 15s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 | (truncated)
[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds
[ https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=587370=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587370 ] ASF GitHub Bot logged work on HDFS-15810: - Author: ASF GitHub Bot Created on: 22/Apr/21 16:53 Start Date: 22/Apr/21 16:53 Worklog Time Spent: 10m Work Description: aajisaka edited a comment on pull request #2910: URL: https://github.com/apache/hadoop/pull/2910#issuecomment-822196865 Would you update the web UI to use the new metrics? https://github.com/apache/hadoop/blob/486ddb73f693177787e4abff7c932be9b925/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html#L116-L118 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587370) Time Spent: 1h 50m (was: 1h 40m) > RBF: RBFMetrics's TotalCapacity out of bounds > - > > Key: HDFS-15810 > URL: https://issues.apache.org/jira/browse/HDFS-15810 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiaoxing Wei >Assignee: Fengnan Li >Priority: Major > Labels: pull-request-available > Attachments: image-2021-02-02-10-59-17-113.png > > Time Spent: 1h 50m > Remaining Estimate: 0h > > The Long type fields TotalCapacity, UsedCapacity and RemainingCapacity in > RBFMetrics may be out of bounds. > !image-2021-02-02-10-59-17-113.png! -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
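The "out of bounds" behavior the issue describes is Java's silent wrap-around when summing `long` values. The sketch below is illustrative only — the class and constant names are invented, and it is not the actual RBFMetrics code — but it shows how aggregating per-subcluster capacities as `long` can wrap negative, and how a `BigInteger` accumulator avoids that:

```java
import java.math.BigInteger;

public class CapacityOverflowDemo {
    // Hypothetical per-subcluster capacities; three of these together exceed Long.MAX_VALUE.
    static final long[] CAPACITIES = {
        Long.MAX_VALUE / 2, Long.MAX_VALUE / 2, Long.MAX_VALUE / 2
    };

    // Naive aggregation: silently wraps past Long.MAX_VALUE and goes negative.
    static long naiveTotal(long[] caps) {
        long total = 0;
        for (long c : caps) {
            total += c; // no overflow check
        }
        return total;
    }

    // Safe aggregation: BigInteger arithmetic never wraps.
    static BigInteger safeTotal(long[] caps) {
        BigInteger total = BigInteger.ZERO;
        for (long c : caps) {
            total = total.add(BigInteger.valueOf(c));
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(naiveTotal(CAPACITIES)); // a negative number: the sum wrapped
        System.out.println(safeTotal(CAPACITIES));  // the true total
    }
}
```

An alternative that keeps the metric as a `long` is `Math.addExact`, which throws `ArithmeticException` on overflow instead of wrapping silently.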
[jira] [Work logged] (HDFS-15961) standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=587359=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587359 ] ASF GitHub Bot logged work on HDFS-15961: - Author: ASF GitHub Bot Created on: 22/Apr/21 16:32 Start Date: 22/Apr/21 16:32 Worklog Time Spent: 10m Work Description: ayushtkn commented on a change in pull request #2881: URL: https://github.com/apache/hadoop/pull/2881#discussion_r618435518 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -8575,22 +8587,21 @@ public void checkAndProvisionSnapshotTrashRoots() { currDir += Path.SEPARATOR; } String trashPath = currDir + FileSystem.TRASH_PREFIX; - HdfsFileStatus fileStatus = getFileInfo(trashPath, false, false, false); + HdfsFileStatus fileStatus = + getFileInfo(trashPath, false, false, false); if (fileStatus == null) { -LOG.info("Trash doesn't exist for snapshottable directory {}. " + "Creating trash at {}", currDir, trashPath); +LOG.info("Trash doesn't exist for snapshottable directory {}. " ++ "Creating trash at {}", currDir, trashPath); PermissionStatus permissionStatus = new PermissionStatus(getRemoteUser().getShortUserName(), null, SHARED_TRASH_PERMISSION); mkdirs(trashPath, permissionStatus, false); } } } catch (IOException e) { -final String msg = -"Could not provision Trash directory for existing " -+ "snapshottable directories. Exiting Namenode."; -ExitUtil.terminate(1, msg); +LOG.error("Could not provision Trash directory for existing " ++ "snapshottable directory", e); Review comment: Can you add the path as well in the log message. Hope the Admin checks the logs after failover & restarts, and fixes things. 
## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java ## @@ -98,6 +98,7 @@ public void setupCluster() throws Exception { conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE); conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); conf.setInt(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY, 1); +conf.setBoolean("dfs.namenode.snapshot.trashroot.enabled", false); Review comment: Can we move this config into `DFSConfigKeys` and use it from there? As of now I think it is defined in `FSNamesystem`, which doesn't make sense. The value of this is, I guess, also exposed via getServerDefaults, so there is no point keeping it in `FSNamesystem`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587359) Time Spent: 2h 50m (was: 2h 40m) > standby namenode failed to start when ordered snapshot deletion is enabled while > having snapshottable directories > > > Key: HDFS-15961 > URL: https://issues.apache.org/jira/browse/HDFS-15961 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 50m > Remaining Estimate: 0h > > {code:java} > 2021-04-08 12:07:25,398 INFO > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new > storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866 > 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with > status 1: Could not provision Trash directory for existing snapshottable > directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO > org.apache.ranger.audit.provider.AuditProviderFactory: ==> > JVMShutdownHook.run() > 2021-04-08 12:07:55,596 INFO > org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: > Signalling async audit cleanup to start. > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15967) Improve the log for Short Circuit Local Reads
[ https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329200#comment-17329200 ] Takanobu Asanuma commented on HDFS-15967: - [~bpatel] Thanks for updating the patch. +1 on [^HDFS-15967.002.patch], pending Jenkins. > Improve the log for Short Circuit Local Reads > - > > Key: HDFS-15967 > URL: https://issues.apache.org/jira/browse/HDFS-15967 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bhavik Patel >Assignee: Bhavik Patel >Priority: Minor > Attachments: HDFS-15967.001.patch, HDFS-15967.002.patch > > > Improve the log for Short Circuit Local Reads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes
[ https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=587290=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587290 ] ASF GitHub Bot logged work on HDFS-15989: - Author: ASF GitHub Bot Created on: 22/Apr/21 15:19 Start Date: 22/Apr/21 15:19 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2942: URL: https://github.com/apache/hadoop/pull/2942#issuecomment-824934699 @virajjasani Thanks for the PRs. Could you remove unused imports in `TestBalancerLongRunningTasks`? There are the same unused imports in the PRs for branch-3.2 and branch-3.1. The others look good to me. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587290) Time Spent: 6h 10m (was: 6h) > Split TestBalancer into two classes > --- > > Key: HDFS-15989 > URL: https://issues.apache.org/jira/browse/HDFS-15989 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 6h 10m > Remaining Estimate: 0h > > TestBalancer has many tests accumulated, it would be good to split it up into > two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should > also resolve it with this Jira. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15967) Improve the log for Short Circuit Local Reads
[ https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327267#comment-17327267 ] Bhavik Patel commented on HDFS-15967: - [~tasanuma] Thanks for the review. Incorporated review comments and attached the latest patch. > Improve the log for Short Circuit Local Reads > - > > Key: HDFS-15967 > URL: https://issues.apache.org/jira/browse/HDFS-15967 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bhavik Patel >Assignee: Bhavik Patel >Priority: Minor > Attachments: HDFS-15967.001.patch, HDFS-15967.002.patch > > > Improve the log for Short Circuit Local Reads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters
[ https://issues.apache.org/jira/browse/HDFS-15923?focusedWorklogId=587159=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587159 ] ASF GitHub Bot logged work on HDFS-15923: - Author: ASF GitHub Bot Created on: 22/Apr/21 10:42 Start Date: 22/Apr/21 10:42 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2819: URL: https://github.com/apache/hadoop/pull/2819#issuecomment-824732441 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 34s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 18s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 18s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 16 new + 0 unchanged - 0 fixed = 16 total (was 0) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 15s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 18m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. 
| | | | 94m 15s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2819 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle | | uname | Linux 50f94adff27d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 561d305b9953f81ba4261fe78d0d9f4f7d4f1c83 | | Default Java | Private
[jira] [Updated] (HDFS-15967) Improve the log for Short Circuit Local Reads
[ https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bhavik Patel updated HDFS-15967: Attachment: HDFS-15967.002.patch > Improve the log for Short Circuit Local Reads > - > > Key: HDFS-15967 > URL: https://issues.apache.org/jira/browse/HDFS-15967 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Bhavik Patel >Assignee: Bhavik Patel >Priority: Minor > Attachments: HDFS-15967.001.patch, HDFS-15967.002.patch > > > Improve the log for Short Circuit Local Reads -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13831) Make block increment deletion number configurable
[ https://issues.apache.org/jira/browse/HDFS-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327227#comment-17327227 ] Qi Zhu commented on HDFS-13831: --- [~weichiu] [~linyiqun] [~gaofeng6] I created HDFS-15994 to make this more usable in huge clusters with heavy deletion. Thanks. > Make block increment deletion number configurable > - > > Key: HDFS-13831 > URL: https://issues.apache.org/jira/browse/HDFS-13831 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Yiqun Lin >Assignee: Ryan Wu >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2 > > Attachments: HDFS-13831.001.patch, HDFS-13831.002.patch, > HDFS-13831.003.patch, HDFS-13831.004.patch, HDFS-13831.branch-3.0.001.patch > > > When the NN deletes a large directory, it will hold the write lock for a long time. To improve this, we remove the blocks in batches, so that other waiters have a chance to get the lock. But right now, the batch number is a > hard-coded value. > {code:java} > static int BLOCK_DELETION_INCREMENT = 1000; > {code} > We can make this value configurable, so that we can control how frequently other waiters get a chance at the lock. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
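The batched-deletion idea the issue describes can be sketched as follows. This is a minimal, self-contained illustration of releasing the write lock between fixed-size batches — all names are invented, and a plain `ReentrantReadWriteLock` stands in for the NameNode lock; it is not the FSNamesystem implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BatchedDeletionSketch {
    // Illustrative stand-ins for the NameNode write lock and the block queue.
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Queue<Long> pendingBlocks = new ArrayDeque<>();
    // Mirrors the formerly hard-coded BLOCK_DELETION_INCREMENT = 1000,
    // here taken as a constructor argument to show it being configurable.
    private final int deletionIncrement;

    public BatchedDeletionSketch(int deletionIncrement) {
        this.deletionIncrement = deletionIncrement;
    }

    public void enqueue(long blockId) {
        pendingBlocks.add(blockId);
    }

    /** Deletes all pending blocks, re-acquiring the lock per batch so waiters can interleave. */
    public int drain() {
        int deleted = 0;
        while (!pendingBlocks.isEmpty()) {
            lock.writeLock().lock();
            try {
                for (int i = 0; i < deletionIncrement && !pendingBlocks.isEmpty(); i++) {
                    pendingBlocks.poll(); // the actual block removal would happen here
                    deleted++;
                }
            } finally {
                lock.writeLock().unlock(); // other waiters get a chance between batches
            }
        }
        return deleted;
    }
}
```

A smaller increment yields the lock more often (better fairness) at the cost of more lock churn, which is why making the batch size configurable is useful.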
[jira] [Comment Edited] (HDFS-15994) Deletion should sleep some time, when there are too many pending deletion blocks.
[ https://issues.apache.org/jira/browse/HDFS-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327218#comment-17327218 ] Qi Zhu edited comment on HDFS-15994 at 4/22/21, 9:50 AM: - cc [~weichiu] [~hexiaoqiao] [~sodonnell] [~ayushtkn] [~linyiqun] [~jianliang.wu] I submitted a patch for review; I think we should improve block deletions in huge clusters. What are your opinions about this? Thanks. was (Author: zhuqi): cc [~weichiu] [~hexiaoqiao] [~sodonnell] [~ayushtkn] [~linyiqun] [~jianliang.wu] I submitted a patch for review, i think we should improve the block deletions in huge clusters.-- What's your opinions about this? Thanks. > Deletion should sleep some time, when there are too many pending deletion > blocks. > - > > Key: HDFS-15994 > URL: https://issues.apache.org/jira/browse/HDFS-15994 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: HDFS-15994.001.patch > > > HDFS-13831 realized that we can control how frequently other waiters get a chance at the lock. > But in our big cluster with heavy deletion, the problem still happens: the pending deletion blocks can sometimes exceed ten million, and regularly exceed one million in huge clusters. > So I think we should sleep for some time when too many deletion blocks are pending. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15994) Deletion should sleep some time, when there are too many pending deletion blocks.
[ https://issues.apache.org/jira/browse/HDFS-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327218#comment-17327218 ] Qi Zhu commented on HDFS-15994: --- cc [~weichiu] [~hexiaoqiao] [~sodonnell] [~ayushtkn] [~linyiqun] [~jianliang.wu] I submitted a patch for review; I think we should improve block deletions in huge clusters. What are your opinions about this? Thanks. > Deletion should sleep some time, when there are too many pending deletion > blocks. > - > > Key: HDFS-15994 > URL: https://issues.apache.org/jira/browse/HDFS-15994 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: HDFS-15994.001.patch > > > HDFS-13831 realized that we can control how frequently other waiters get a chance at the lock. > But in our big cluster with heavy deletion, the problem still happens: the pending deletion blocks can sometimes exceed ten million, and regularly exceed one million in huge clusters. > So I think we should sleep for some time when too many deletion blocks are pending. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15994) Deletion should sleep some time, when there are too many pending deletion blocks.
[ https://issues.apache.org/jira/browse/HDFS-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated HDFS-15994: -- Attachment: HDFS-15994.001.patch Status: Patch Available (was: Open) > Deletion should sleep some time, when there are too many pending deletion > blocks. > - > > Key: HDFS-15994 > URL: https://issues.apache.org/jira/browse/HDFS-15994 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > Attachments: HDFS-15994.001.patch > > > HDFS-13831 realized that we can control how frequently other waiters get a chance at the lock. > But in our big cluster with heavy deletion, the problem still happens: the pending deletion blocks can sometimes exceed ten million, and regularly exceed one million in huge clusters. > So I think we should sleep for some time when too many deletion blocks are pending. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15994) Deletion should sleep some time, when there are too many pending deletion blocks.
[ https://issues.apache.org/jira/browse/HDFS-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated HDFS-15994: -- Description: HDFS-13831 realized that we can control how frequently other waiters get a chance at the lock. But in our big cluster with heavy deletion, the problem still happens: the pending deletion blocks can sometimes exceed ten million, and regularly exceed one million in huge clusters. So I think we should sleep for some time when too many deletion blocks are pending. > Deletion should sleep some time, when there are too many pending deletion > blocks. > - > > Key: HDFS-15994 > URL: https://issues.apache.org/jira/browse/HDFS-15994 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Major > > HDFS-13831 realized that we can control how frequently other waiters get a chance at the lock. > But in our big cluster with heavy deletion, the problem still happens: the pending deletion blocks can sometimes exceed ten million, and regularly exceed one million in huge clusters. > So I think we should sleep for some time when too many deletion blocks are pending. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
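The sleep the reporter proposes amounts to backpressure on the deletion path: stop scheduling new block deletions while the backlog is too large. A hedged sketch of that decision logic follows — the threshold and sleep interval are assumptions for illustration, and the class and method names are invented, not taken from the HDFS-15994 patch:

```java
import java.util.concurrent.atomic.AtomicLong;

public class DeletionBackpressureSketch {
    // Assumed values for illustration; a real change would make these configurable.
    static final long MAX_PENDING = 1_000_000L;
    static final long SLEEP_MS = 100L;

    // True when the backlog of blocks awaiting deletion is too large.
    static boolean shouldThrottle(long pendingDeletionBlocks) {
        return pendingDeletionBlocks > MAX_PENDING;
    }

    // Blocks the caller until the backlog drains below the threshold,
    // then records the newly scheduled deletions.
    static void scheduleDeletion(AtomicLong pending, long blocks) throws InterruptedException {
        while (shouldThrottle(pending.get())) {
            Thread.sleep(SLEEP_MS); // let DataNodes work off the backlog
        }
        pending.addAndGet(blocks);
    }
}
```

The point of sleeping outside any lock is that the throttled deleter never holds the write lock while waiting, so readers and other writers are unaffected by the pause.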
[jira] [Created] (HDFS-15994) Deletion should sleep some time, when there are too many pending deletion blocks.
Qi Zhu created HDFS-15994: - Summary: Deletion should sleep some time, when there are too many pending deletion blocks. Key: HDFS-15994 URL: https://issues.apache.org/jira/browse/HDFS-15994 Project: Hadoop HDFS Issue Type: Improvement Reporter: Qi Zhu Assignee: Qi Zhu -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15961) standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories
[ https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=587126=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587126 ] ASF GitHub Bot logged work on HDFS-15961: - Author: ASF GitHub Bot Created on: 22/Apr/21 08:30 Start Date: 22/Apr/21 08:30 Worklog Time Spent: 10m Work Description: bshashikant commented on pull request #2881: URL: https://github.com/apache/hadoop/pull/2881#issuecomment-824649349 @ayushtkn /@smengcl any further thoughts/reviews? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587126) Time Spent: 2h 40m (was: 2.5h) > standby namenode failed to start ordered snapshot deletion is enabled while > having snapshottable directories > > > Key: HDFS-15961 > URL: https://issues.apache.org/jira/browse/HDFS-15961 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots >Affects Versions: 3.4.0 >Reporter: Nilotpal Nandi >Assignee: Shashikant Banerjee >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 2h 40m > Remaining Estimate: 0h > > {code:java} > 2021-04-08 12:07:25,398 INFO > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new > storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866 > 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with > status 1: Could not provision Trash directory for existing snapshottable > directories. Exiting Namenode. > 2021-04-08 12:07:55,596 INFO > org.apache.ranger.audit.provider.AuditProviderFactory: ==> > JVMShutdownHook.run() > 2021-04-08 12:07:55,596 INFO > org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: > Signalling async audit cleanup to start. 
> {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15982) Deleted data on the Web UI must be saved to the trash
[ https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=587095=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587095 ] ASF GitHub Bot logged work on HDFS-15982: - Author: ASF GitHub Bot Created on: 22/Apr/21 07:19 Start Date: 22/Apr/21 07:19 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2927: URL: https://github.com/apache/hadoop/pull/2927#issuecomment-824604340 @jojochuang if you have some cycles to review this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 587095) Time Spent: 1h 50m (was: 1h 40m) > Deleted data on the Web UI must be saved to the trash > -- > > Key: HDFS-15982 > URL: https://issues.apache.org/jira/browse/HDFS-15982 > Project: Hadoop HDFS > Issue Type: New Feature > Components: hdfs >Reporter: Bhavik Patel >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > If we delete the data from the Web UI then it should be first moved to > configured/default Trash directory and after the trash interval time, it > should be removed. currently, data directly removed from the system[This > behavior should be the same as CLI cmd] > > This can be helpful when the user accidentally deletes data from the Web UI. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type
[ https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327104#comment-17327104 ] Qi Zhu edited comment on HDFS-14525 at 4/22/21, 6:52 AM: - [~prabhujoseph] [~eyang] [~daryn] I also think this is needed. We could add an option to make these two settings independent: hadoop.security.authentication is specific to RPC authentication, whereas hadoop.http.authentication.type is specific to HTTP authentication. We want HTTP to be unauthenticated, but RPC to be authenticated. How do we handle this case: 1. The HTTP authentication is simple, and we don't want browser access to require a keytab. 2. The service RPC is Kerberos based. 3. We also want webhdfs to use Kerberos. But with HADOOP-16354, JspHelper#getUGI: {code:java} if (UserGroupInformation.isSecurityEnabled()) { remoteUser = request.getRemoteUser(); final String tokenString = request.getParameter(DELEGATION_PARAMETER_NAME); if (tokenString != null) { // user.name, doas param is ignored in the token-based auth ugi = getTokenUGI(context, request, tokenString, conf); } else if (remoteUser == null) { throw new IOException( "Security enabled but user not authenticated by filter"); } } {code} will get a null remoteUser here, because no principal is obtained in the simple case. The command hadoop fs -ls webhdfs://host:port/ will throw "Security enabled but user not authenticated by filter". What are your opinions, and how should we solve it? Thanks. was (Author: zhuqi): [~prabhujoseph] I also think this is needed, if we can add an option to support: We can add an option to allow this two independent, hadoop.security.authentication is specific to RPC Authentication whereas hadoop.http.authentication.type is specific to HTTP Authentication. We want to make HTTP not authentication, but RPC Authentication. 
> JspHelper ignores hadoop.http.authentication.type > - > > Key: HDFS-14525 > URL: https://issues.apache.org/jira/browse/HDFS-14525 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.2.0 >Reporter: Prabhu Joseph >Priority: Major > > On Secure Cluster With hadoop.http.authentication.type simple and > hadoop.http.authentication.anonymous.allowed is true, WebHdfs Rest Api fails > when user.name is not set. It runs fine if user.name=ambari-qa is set.. > {code} > [knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H > 'Content-Length: 0' --negotiate -u : > 'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS' > {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed > to obtain user group information: java.io.IOException: Security enabled but > user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ > {code} > JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of > conf.get(hadoop.http.authentication.type).equals("kerberos") to check if Http > is Secure causing the issue. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
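One way to realize the split the reporter asks for is to key the HTTP-side check on hadoop.http.authentication.type rather than on UserGroupInformation.isSecurityEnabled(). The sketch below shows only that decision logic, with a plain Map standing in for Hadoop's Configuration object; it is not the actual JspHelper code, and the method names are invented:

```java
import java.util.Map;

public class HttpAuthCheckSketch {
    // Stand-in for conf.get("hadoop.http.authentication.type"), defaulting to "simple".
    static String httpAuthType(Map<String, String> conf) {
        return conf.getOrDefault("hadoop.http.authentication.type", "simple");
    }

    // Require a filter-authenticated remote user only when the HTTP layer itself
    // is Kerberos-secured, independent of the RPC-side setting
    // hadoop.security.authentication.
    static boolean requireAuthenticatedRemoteUser(Map<String, String> conf) {
        return "kerberos".equals(httpAuthType(conf));
    }
}
```

With a check like this, a cluster running Kerberos RPC but simple HTTP authentication would no longer fail with "Security enabled but user not authenticated by filter" for requests that carry no user.name.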