[jira] [Work logged] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup
[ https://issues.apache.org/jira/browse/HADOOP-15129?focusedWorklogId=646531&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646531 ] ASF GitHub Bot logged work on HADOOP-15129: --- Author: ASF GitHub Bot Created on: 03/Sep/21 23:56 Start Date: 03/Sep/21 23:56 Worklog Time Spent: 10m Work Description: cnauroth closed pull request #3348: URL: https://github.com/apache/hadoop/pull/3348 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646531) Time Spent: 0.5h (was: 20m) > Datanode caches namenode DNS lookup failure and cannot startup > -- > > Key: HADOOP-15129 > URL: https://issues.apache.org/jira/browse/HADOOP-15129 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.8.2 > Environment: Google Compute Engine. > I'm using Java 8, Debian 8, Hadoop 2.8.2. >Reporter: Karthik Palaniappan >Assignee: Chris Nauroth >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > On startup, the Datanode creates an InetSocketAddress to register with each > namenode. Though there are retries on connection failure throughout the > stack, the same InetSocketAddress is reused. > InetSocketAddress is an interesting class, because it resolves DNS names to > IP addresses on construction, and it is never refreshed. Hadoop re-creates an > InetSocketAddress in some cases just in case the remote IP has changed for a > particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472. > Anyway, on startup, you can see the Datanode log: "Namenode...remains > unresolved" -- referring to the fact that DNS lookup failed. > {code:java} > 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Refresh request received for nameservices: null > 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode > for null remains unresolved for ID null. Check your hdfs-site.xml file to > ensure namenodes are configured properly. > 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Starting BPOfferServices for nameservices: > 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Block pool (Datanode Uuid unassigned) service to > cluster-32f5-m:8020 starting to offer service > {code} > The Datanode then proceeds to use this unresolved address, as it may work if > the DN is configured to use a proxy. 
Since I'm not using a proxy, it forever > prints out this message: > {code:java} > 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > {code} > Unfortunately, the log doesn't contain the exception that triggered it, but > the culprit is actually in IPC Client: > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444. > This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 > to give a clear error message when somebody misspells an address. > However, the fix in HADOOP-7472 doesn't apply here, because that code happens > in Client#getConnection after the Connection is constructed. > My proposed fix (will attach a patch) is to move this exception out of the > constructor and into a place that will trigger HADOOP-7472's logic to > re-resolve addresses. If the DNS failure was temporary, this will allow the > connection to succeed. If not, the connection will fail after ipc client > retries (default 10 seconds worth of retries). > I want to fix this in ipc client rather than just in Datanode startup, as > this fixes temporary DNS issues for all of Hadoop. -- This message was sent by Atlassian Jira (v8.3.4#803005)
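The core of the report above is that java.net.InetSocketAddress performs its DNS lookup exactly once, in the constructor, and never again. The following standalone JDK-only sketch illustrates that behavior; the hostname cluster-32f5-m is simply the example from the logs above and is assumed to be unresolvable at first, and the sketch is illustrative only, not part of the attached patch.

{code:java}
import java.net.InetSocketAddress;

public class StaleResolutionDemo {
  public static void main(String[] args) throws InterruptedException {
    String host = "cluster-32f5-m";  // example hostname from the report; assumed unresolvable at first
    int port = 8020;

    // The DNS lookup happens here, inside the constructor. If it fails,
    // this instance stays unresolved forever -- it is never retried.
    InetSocketAddress addr = new InetSocketAddress(host, port);
    System.out.println("unresolved after construction? " + addr.isUnresolved());

    // The only way to pick up a later, successful DNS answer is to build a
    // brand-new InetSocketAddress, which is what HADOOP-7472 does on reconnect.
    Thread.sleep(5000);  // e.g. wait for DNS to become available again
    InetSocketAddress refreshed = new InetSocketAddress(host, port);
    System.out.println("unresolved after re-creation? " + refreshed.isUnresolved());
  }
}
{code}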
[GitHub] [hadoop] cnauroth closed pull request #3348: HADOOP-15129. Datanode caches namenode DNS lookup failure and cannot …
cnauroth closed pull request #3348: URL: https://github.com/apache/hadoop/pull/3348 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646516=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646516 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 23:15 Start Date: 03/Sep/21 23:15 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#issuecomment-912858578 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 39s | | trunk passed | | +1 :green_heart: | compile | 22m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 19s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 21m 58s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 21m 58s | | the patch passed | | +1 :green_heart: | compile | 19m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 0s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 1 new + 61 unchanged - 2 fixed = 62 total (was 63) | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 37s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 45s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 6s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. 
| | | | 189m 16s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3377 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux de8f4df59f6d 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3442e671682c2717cd7297b4ad009420adde8876 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/testReport/ | | Max. process+thread count | 2993 (vs. ulimit of 5500) | | modules | C:
[GitHub] [hadoop] hadoop-yetus commented on pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
hadoop-yetus commented on pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#issuecomment-912858578 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 39s | | trunk passed | | +1 :green_heart: | compile | 22m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 19s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 21m 58s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 21m 58s | | the patch passed | | +1 :green_heart: | compile | 19m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 0s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 1 new + 61 unchanged - 2 fixed = 62 total (was 63) | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 37s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 45s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 6s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. 
| | | | 189m 16s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3377 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux de8f4df59f6d 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3442e671682c2717cd7297b4ad009420adde8876 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/testReport/ | | Max. process+thread count | 2993 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646475&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646475 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 20:57 Start Date: 03/Sep/21 20:57 Worklog Time Spent: 10m Work Description: viirya commented on pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#issuecomment-912806208 I did a microbenchmark by running 10 rounds of compressing/decompressing random data. The average time: After: 12.93s Before: 13.52s It is pretty close. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646475) Time Spent: 3h 10m (was: 3h) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] viirya commented on pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
viirya commented on pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#issuecomment-912806208 I did a microbenchmark by running 10 rounds of compressing/decompressing random data. The average time: After: 12.93s Before: 13.52s It is pretty close. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
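As a rough illustration of the kind of measurement described above, a compress/decompress round-trip benchmark over random data could look like the following JDK-only sketch. The data size, seed, and use of java.util.zip streams are assumptions made for demonstration; the timings quoted in the comment came from the Hadoop codec paths under review, not from this sketch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Random;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTripBench {
  public static void main(String[] args) throws Exception {
    byte[] data = new byte[16 * 1024 * 1024];  // assumed input size: 16 MiB of random bytes
    new Random(42).nextBytes(data);

    long start = System.nanoTime();
    for (int round = 0; round < 10; round++) {  // 10 rounds, as in the comment
      // Compress the random payload.
      ByteArrayOutputStream compressed = new ByteArrayOutputStream();
      try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
        gz.write(data);
      }
      // Decompress it again and discard the output.
      ByteArrayOutputStream restored = new ByteArrayOutputStream();
      try (GZIPInputStream gz =
               new GZIPInputStream(new ByteArrayInputStream(compressed.toByteArray()))) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = gz.read(buf)) != -1) {
          restored.write(buf, 0, n);
        }
      }
    }
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("average per round: " + (elapsedMs / 10) + " ms");
  }
}
```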
[jira] [Resolved] (HADOOP-17773) Avoid using zookeeper deprecated API and classes
[ https://issues.apache.org/jira/browse/HADOOP-17773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Renukaprasad C resolved HADOOP-17773. - Resolution: Duplicate > Avoid using zookeeper deprecated API and classes > > > Key: HADOOP-17773 > URL: https://issues.apache.org/jira/browse/HADOOP-17773 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > In latest version of zookeeper some internal classes are removed which is > used in hadoop test code, for example ServerCnxnFactoryAccessor. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646453=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646453 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 20:05 Start Date: 03/Sep/21 20:05 Worklog Time Spent: 10m Work Description: viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702136248 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java ## @@ -1051,4 +1052,45 @@ public void testCodecPoolAndGzipDecompressor() { } } } + + @Test(timeout=2) + public void testGzipCompressorWithEmptyInput() throws IOException { Review comment: In current trunk, this test will cause: ``` org.junit.runners.model.TestTimedOutException: test timed out after 2 milliseconds at org.apache.hadoop.io.compress.TestCodec.testGzipCompressorWithEmptyInput(TestCodec.java:1076) ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646453) Time Spent: 3h (was: 2h 50m) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 3h > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] viirya commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702136248 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodec.java ## @@ -1051,4 +1052,45 @@ public void testCodecPoolAndGzipDecompressor() { } } } + + @Test(timeout=2) + public void testGzipCompressorWithEmptyInput() throws IOException { Review comment: In current trunk, this test will cause: ``` org.junit.runners.model.TestTimedOutException: test timed out after 2 milliseconds at org.apache.hadoop.io.compress.TestCodec.testGzipCompressorWithEmptyInput(TestCodec.java:1076) ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
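The discussion above concerns compressing empty input: with the wrapper removed, the built-in gzip compressor must still emit a complete gzip stream (header plus trailer) for zero bytes of input, and the quoted comment notes that such a test times out on unpatched trunk. Below is a hedged, standalone sketch of that empty-input round trip through Hadoop's GzipCodec; it is illustrative only and is not the exact test added in PR #3377.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.GzipCodec;

public class EmptyGzipInputCheck {
  public static void main(String[] args) throws IOException {
    GzipCodec codec = new GzipCodec();
    codec.setConf(new Configuration());

    // Compress zero bytes: closing the stream must still produce a complete,
    // decodable gzip stream rather than hanging or emitting nothing. Per the
    // comment above, this step would time out on trunk without the fix.
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    CompressionOutputStream out = codec.createOutputStream(compressed);
    out.close();

    // Decode with the JDK's GZIPInputStream and confirm the payload is empty.
    try (GZIPInputStream in =
             new GZIPInputStream(new ByteArrayInputStream(compressed.toByteArray()))) {
      if (in.read() != -1) {
        throw new AssertionError("expected an empty payload");
      }
    }
    System.out.println("empty input produced a valid, empty gzip stream");
  }
}
```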
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646452=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646452 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 20:04 Start Date: 03/Sep/21 20:04 Worklog Time Spent: 10m Work Description: viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702135937 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: Added one test for empty input case. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646452) Time Spent: 2h 50m (was: 2h 40m) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] viirya commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702135937 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: Added one test for empty input case. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646447 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 19:51 Start Date: 03/Sep/21 19:51 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#issuecomment-912773782 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 28s | | trunk passed | | +1 :green_heart: | compile | 22m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 20s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 23s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 22m 10s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 22m 10s | | the patch passed | | +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 58s | | hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8) | | +1 :green_heart: | mvnsite | 1m 28s | | the patch passed | | +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 40s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 53s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 59s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 189m 49s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3377 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 3b7034dde0b3 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 728a659c3964d81612c917542426a9786984b1c9 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/2/testReport/ | | Max. process+thread count | 1852 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console
[GitHub] [hadoop] hadoop-yetus commented on pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
hadoop-yetus commented on pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#issuecomment-912773782 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 28s | | trunk passed | | +1 :green_heart: | compile | 22m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 20s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 0s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 23s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 22m 10s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 22m 10s | | the patch passed | | +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 58s | | hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 2 fixed = 6 total (was 8) | | +1 :green_heart: | mvnsite | 1m 28s | | the patch passed | | +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 40s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 53s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 59s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 189m 49s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3377 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 3b7034dde0b3 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 728a659c3964d81612c917542426a9786984b1c9 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/2/testReport/ | | Max. process+thread count | 1852 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3377/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the
[jira] [Updated] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup
[ https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-15129: --- Fix Version/s: 3.2.4 3.3.2 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) I have committed this to trunk, branch-3.3 and branch-3.2. I didn't end up merging down to the 2.x line like I said I would, because I retested on 2.x, and the bug isn't present there. [~Karthik Palaniappan], thank you for providing the original patch. Thank you to all of the reviewers and [~ywskycn] for the final review. > Datanode caches namenode DNS lookup failure and cannot startup > -- > > Key: HADOOP-15129 > URL: https://issues.apache.org/jira/browse/HADOOP-15129 > Project: Hadoop Common > Issue Type: Bug > Components: ipc >Affects Versions: 2.8.2 > Environment: Google Compute Engine. > I'm using Java 8, Debian 8, Hadoop 2.8.2. >Reporter: Karthik Palaniappan >Assignee: Chris Nauroth >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch > > Time Spent: 20m > Remaining Estimate: 0h > > On startup, the Datanode creates an InetSocketAddress to register with each > namenode. Though there are retries on connection failure throughout the > stack, the same InetSocketAddress is reused. > InetSocketAddress is an interesting class, because it resolves DNS names to > IP addresses on construction, and it is never refreshed. Hadoop re-creates an > InetSocketAddress in some cases just in case the remote IP has changed for a > particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472. > Anyway, on startup, you can see the Datanode log: "Namenode...remains > unresolved" -- referring to the fact that DNS lookup failed. > {code:java} > 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Refresh request received for nameservices: null > 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode > for null remains unresolved for ID null. Check your hdfs-site.xml file to > ensure namenodes are configured properly. > 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Starting BPOfferServices for nameservices: > 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: > Block pool (Datanode Uuid unassigned) service to > cluster-32f5-m:8020 starting to offer service > {code} > The Datanode then proceeds to use this unresolved address, as it may work if > the DN is configured to use a proxy. 
Since I'm not using a proxy, it forever > prints out this message: > {code:java} > 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: > Problem connecting to server: cluster-32f5-m:8020 > {code} > Unfortunately, the log doesn't contain the exception that triggered it, but > the culprit is actually in IPC Client: > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444. > This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 > to give a clear error message when somebody misspells an address. > However, the fix in HADOOP-7472 doesn't apply here, because that code happens > in Client#getConnection after the Connection is constructed. > My proposed fix (will attach a patch) is to move this exception out of the > constructor and into a place that will trigger HADOOP-7472's logic to > re-resolve addresses. If the DNS failure was temporary, this will allow the > connection to succeed. If not, the connection will fail after ipc client > retries (default 10 seconds worth of retries). > I want to fix this in ipc client rather than just in Datanode startup, as > this fixes temporary DNS issues for all of Hadoop. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
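The retry idea described in the quoted report -- let resolution fail inside the retry loop rather than in the constructor, so each attempt re-resolves the address -- can be sketched as below. This is a hedged sketch of the concept only; the method names, back-off interval, and retry count are assumptions for illustration and this is not the code actually committed for HADOOP-15129.

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;

public class ReResolvingConnector {
  // Hypothetical helper: build a fresh InetSocketAddress so every call
  // performs a new DNS lookup instead of reusing a cached failure.
  static InetSocketAddress resolveFresh(String host, int port) {
    return new InetSocketAddress(host, port);
  }

  static void connectWithRetries(String host, int port, int maxRetries)
      throws IOException, InterruptedException {
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      InetSocketAddress addr = resolveFresh(host, port);
      if (!addr.isUnresolved()) {
        System.out.println("resolved " + addr + " on attempt " + attempt);
        return;  // the real client would open its connection here
      }
      Thread.sleep(1000);  // assumed 1s back-off, roughly the "10 seconds worth of retries"
    }
    throw new IOException(host + " could not be resolved after " + maxRetries + " attempts");
  }

  public static void main(String[] args) throws Exception {
    connectWithRetries("cluster-32f5-m", 8020, 10);
  }
}
{code}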
[GitHub] [hadoop] prasad-acit commented on pull request #3334: HDFS-16186 Datanode kicks out hard disk logic optimization
prasad-acit commented on pull request #3334: URL: https://github.com/apache/hadoop/pull/3334#issuecomment-912758502 > @jianghuazhu Hello, I’m a novice, I’m not sure if the patch failure is related to my code, can you help me? Checkstyle issues are because of the new code. Take a look at the report - https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3334/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt You can run the failed tests locally with and without the patch changes; they look like they are impacted by the patch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3362: HDFS-16199. Resolve log placeholders in NamenodeBeanMetrics
hadoop-yetus commented on pull request #3362: URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912755390 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 11s | | trunk passed | | +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 17s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 41s | | the patch passed | | +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 4s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 34m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 123m 18s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3362 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 580b0cefb209 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 23b02a9d59d494705916338722ae437b4d8e3687 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/3/testReport/ | | Max. process+thread count | 2298 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] prasad-acit commented on pull request #3351: HDFS-16191. [FGL] Fix FSImage loading issues on dynamic partitions
prasad-acit commented on pull request #3351: URL: https://github.com/apache/hadoop/pull/3351#issuecomment-912736057 @shvachko In org.apache.hadoop.hdfs.server.namenode.INode#indexOf(), the index is calculated based on the static partition count. Will it have any impact on dynamic partitions? I couldn't get this part, please correct me if I am wrong. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] younes-b opened a new pull request #3383: LAKE-14976
younes-b opened a new pull request #3383: URL: https://github.com/apache/hadoop/pull/3383 ### Description of PR Backport of : For libhdfs -> https://github.com/criteo-forks/hadoop-common/pull/95/files For OOPS -> https://github.com/criteo-forks/hadoop-common/pull/107/files ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3346: HDFS-16188. RBF: Router to support resolving monitored namenodes with DNS
hadoop-yetus commented on pull request #3346: URL: https://github.com/apache/hadoop/pull/3346#issuecomment-912694724 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 28s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 10s | | trunk passed | | +1 :green_heart: | compile | 23m 7s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 55s | | trunk passed | | +1 :green_heart: | javadoc | 3m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 54s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 9m 55s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 26s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 31s | | the patch passed | | +1 :green_heart: | compile | 22m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 22m 22s | | the patch passed | | +1 :green_heart: | compile | 19m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 22s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 52s | | root: The patch generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51) | | +1 :green_heart: | mvnsite | 5m 0s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 3m 50s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 5m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 11m 14s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 13s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 37s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 345m 45s | | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 37m 25s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. 
| | | | 624m 43s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3346 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux 296d15a8f07c 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / fc1ebc30c7f0349a3fe9e449e700d9ca01fd80d3 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3382: YARN-10919. Remove LeafQueue#scheduler field
hadoop-yetus commented on pull request #3382: URL: https://github.com/apache/hadoop/pull/3382#issuecomment-912693983 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 54s | | trunk passed | | +1 :green_heart: | compile | 1m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 52s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 47s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 58s | | trunk passed | | +1 :green_heart: | javadoc | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 53s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 44s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 52s | | the patch passed | | +1 :green_heart: | compile | 0m 56s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 0m 47s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 47s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 38s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 52s | | the patch passed | | +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 34s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 99m 17s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 183m 29s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3382/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3382 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 851f1299c5b9 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 93c23e6ccfcaafcad72b01e1e4849838926e1154 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3382/1/testReport/ | | Max. process+thread count | 900 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3382/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646381=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646381 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 16:58 Start Date: 03/Sep/21 16:58 Worklog Time Spent: 10m Work Description: sunchao commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702043684 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646381) Time Spent: 2.5h (was: 2h 20m) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
sunchao commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702043684 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646380=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646380 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 16:56 Start Date: 03/Sep/21 16:56 Worklog Time Spent: 10m Work Description: viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702042355 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: i see. let me add one then. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646380) Time Spent: 2h 20m (was: 2h 10m) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] viirya commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702042355 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: i see. let me add one then. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646379=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646379 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 16:54 Start Date: 03/Sep/21 16:54 Worklog Time Spent: 10m Work Description: sunchao commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702041559 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: I think it's still good to have a dedicated test for this edge case. We can use `@Test(timeout=)` to check the timeout. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646379) Time Spent: 2h 10m (was: 2h) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
sunchao commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702041559 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: I think it's still good to have a dedicated test for this edge case. We can use `@Test(timeout=)` to check the timeout. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
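[Editorial sketch] A minimal example of the kind of dedicated empty-input test being suggested here, using JUnit 4's timeout attribute and the stock org.apache.hadoop.io.compress codec API. The class and method names are illustrative only; this is not the test that was actually added in the PR.

```java
import static org.junit.Assert.assertEquals;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;
import org.junit.Test;

public class TestEmptyGzipInput {

  // Before the fix, closing the stream without writing any data could hang
  // because compress() kept returning 0; the timeout catches such a hang.
  @Test(timeout = 20000)
  public void testEmptyGzipStream() throws Exception {
    Configuration conf = new Configuration();
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (CompressionOutputStream out = codec.createOutputStream(baos)) {
      // Write nothing: an empty input must still produce a valid gzip stream.
      out.finish();
    }

    // Round-trip: decompressing the empty stream should immediately hit EOF.
    try (CompressionInputStream in =
        codec.createInputStream(new ByteArrayInputStream(baos.toByteArray()))) {
      assertEquals(-1, in.read());
    }
  }
}
```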
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646373=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646373 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 16:36 Start Date: 03/Sep/21 16:36 Worklog Time Spent: 10m Work Description: viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702031490 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: `testGzipCodec` will cause timeout after removing `GzipOutputStream`. Because of this line: ```java codecTest(conf, seed, 0, "org.apache.hadoop.io.compress.GzipCodec"); ``` It writes an empty input to the compress stream. Due to this `currentBufLen` check, `compress` will return 0 endlessly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646373) Time Spent: 2h (was: 1h 50m) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 2h > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] viirya commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
viirya commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702031490 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: `testGzipCodec` will cause timeout after removing `GzipOutputStream`. Because of this line: ```java codecTest(conf, seed, 0, "org.apache.hadoop.io.compress.GzipCodec"); ``` It writes an empty input to the compress stream. Due to this `currentBufLen` check, `compress` will return 0 endlessly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
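[Editorial sketch] To make the failure mode concrete, here is an illustrative version of the drain loop that a compressor output stream runs when it finishes. It is written against the generic org.apache.hadoop.io.compress.Compressor contract and is not the literal Hadoop CompressorStream source; with an empty input, the removed early-return meant compress() always returned 0 while finished() stayed false, so a loop like this never exited.

```java
// Sketch of the usual finish/drain pattern on a compressor output stream.
static void drain(org.apache.hadoop.io.compress.Compressor compressor,
                  java.io.OutputStream out) throws java.io.IOException {
  byte[] buffer = new byte[64 * 1024];
  compressor.finish();
  while (!compressor.finished()) {
    // On an empty input this kept returning 0 without ever emitting the
    // gzip header/trailer, so finished() never became true.
    int len = compressor.compress(buffer, 0, buffer.length);
    if (len > 0) {
      out.write(buffer, 0, len);
    }
  }
}
```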
[jira] [Work logged] (HADOOP-17887) Remove GzipOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-17887?focusedWorklogId=646363=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646363 ] ASF GitHub Bot logged work on HADOOP-17887: --- Author: ASF GitHub Bot Created on: 03/Sep/21 16:10 Start Date: 03/Sep/21 16:10 Worklog Time Spent: 10m Work Description: sunchao commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702016242 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: Which test is failing before removing the condition? so it passes with the wrapper class but fails after? Also, we can remove the `currentBufLen` variable now since it is no longer used anywhere else. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646363) Time Spent: 1h 40m (was: 1.5h) > Remove GzipOutputStream > --- > > Key: HADOOP-17887 > URL: https://issues.apache.org/jira/browse/HADOOP-17887 > Project: Hadoop Common > Issue Type: Improvement >Reporter: L. C. Hsieh >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > As we provide built-in gzip compressor, we can use it in compressor stream. > The wrapper GzipOutputStream can be removed now. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao commented on a change in pull request #3377: HADOOP-17887. Remove the wrapper class GzipOutputStream
sunchao commented on a change in pull request #3377: URL: https://github.com/apache/hadoop/pull/3377#discussion_r702016242 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zlib/BuiltInGzipCompressor.java ## @@ -86,10 +86,6 @@ public int compress(byte[] b, int off, int len) throws IOException { int compressedBytesWritten = 0; -if (currentBufLen <= 0) { - return compressedBytesWritten; -} - Review comment: Which test is failing before removing the condition? so it passes with the wrapper class but fails after? Also, we can remove the `currentBufLen` variable now since it is no longer used anywhere else. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3362: HDFS-16199. Resolve log placeholders in NamenodeBeanMetrics
hadoop-yetus commented on pull request #3362: URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912633897 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 20m 14s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 37m 56s | | trunk passed | | +1 :green_heart: | compile | 0m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 25s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 41s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 36s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 40s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 35m 2s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 146m 1s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3362 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1b40c8ec6e4a 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 23b02a9d59d494705916338722ae437b4d8e3687 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/2/testReport/ | | Max. process+thread count | 2315 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646353=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646353 ] ASF GitHub Bot logged work on HADOOP-17864: --- Author: ASF GitHub Bot Created on: 03/Sep/21 15:19 Start Date: 03/Sep/21 15:19 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3335: URL: https://github.com/apache/hadoop/pull/3335#issuecomment-912616274 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 15s | | trunk passed | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 8s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 4s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 5s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 76m 33s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3335 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1ab14ef8e3f4 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / cedc9f6f145234a0563233b09c6b653dfed3ed08 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/testReport/ | | Max. process+thread count | 696 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3335: HADOOP-17864. ABFS: Make provision for adding additional connections type
hadoop-yetus commented on pull request #3335: URL: https://github.com/apache/hadoop/pull/3335#issuecomment-912616274 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 5s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 15s | | trunk passed | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 8s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 4s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 5s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 76m 33s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3335 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1ab14ef8e3f4 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / cedc9f6f145234a0563233b09c6b653dfed3ed08 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/testReport/ | | Max. process+thread count | 696 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3335/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to
[jira] [Updated] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17890: Labels: pull-request-available (was: ) > ABFS: Refactor HTTP request handling code > - > > Key: HADOOP-17890 > URL: https://issues.apache.org/jira/browse/HADOOP-17890 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Aims at Http request handling code refactoring. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17890) ABFS: Refactor HTTP request handling code
[ https://issues.apache.org/jira/browse/HADOOP-17890?focusedWorklogId=646350=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646350 ] ASF GitHub Bot logged work on HADOOP-17890: --- Author: ASF GitHub Bot Created on: 03/Sep/21 15:13 Start Date: 03/Sep/21 15:13 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-912612360 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 7s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 5s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 45s | | trunk passed | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 4s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 10s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | | the patch passed | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 0s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 77m 23s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3381 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ee6ce4f77d9f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / aff57f5b308c41e7a3e3b878b573e4ec222998a6 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/testReport/ | | Max. process+thread count | 548 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/console | | versions | git=2.25.1
[GitHub] [hadoop] hadoop-yetus commented on pull request #3381: HADOOP-17890. ABFS: Http request handling code refactoring
hadoop-yetus commented on pull request #3381: URL: https://github.com/apache/hadoop/pull/3381#issuecomment-912612360 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 7s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 5s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 45s | | trunk passed | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 4s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 10s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 30s | | the patch passed | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 5s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 0s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 77m 23s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3381 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux ee6ce4f77d9f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / aff57f5b308c41e7a3e3b878b573e4ec222998a6 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/testReport/ | | Max. process+thread count | 548 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3381/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service,
[GitHub] [hadoop] hadoop-yetus commented on pull request #3244: HDFS-16138. BlockReportProcessingThread exit doesnt print the acutal stack
hadoop-yetus commented on pull request #3244: URL: https://github.com/apache/hadoop/pull/3244#issuecomment-912590710 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 0s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 18s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 343m 39s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3244/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 436m 19s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3244/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3244 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 048a4f8fdb03 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9eccce744e14d29ac566447e4451d5c58e3afe15 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3244/3/testReport/ | | Max. process+thread count | 2553 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3244/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3379: HDFS-16210. Add the option of refreshCallQueue to RouterAdmin
hadoop-yetus commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-912586249 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 49s | | trunk passed | | +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 41s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 18s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 17s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 20m 4s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 93m 27s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3379 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux daed326725d4 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / de4f45fa8fc6d9b4486654cb2bba0a2afb0624a1 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/2/testReport/ | | Max. process+thread count | 2634 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] JackWangCS opened a new pull request #3382: YARN-10919. Remove LeafQueue#scheduler field
JackWangCS opened a new pull request #3382: URL: https://github.com/apache/hadoop/pull/3382 ### Description of PR Remove LeafQueue#scheduler field, as it is the same object as AbstractCSQueue#csContext (from parent class). ### How was this patch tested? Only removes a duplicated field; no tests added. ### For code changes: - [Yes] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [No] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [No] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [No] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
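[Editorial sketch] The refactor above follows a common pattern: a subclass field that merely aliases a reference already held by the parent class is removed, and callers read the inherited reference instead. The names below (SchedulerContext, BaseQueue, LeafQueueSketch) are invented stand-ins, not the real CapacityScheduler classes.

```java
// Hypothetical stand-ins; only the pattern matters.
class SchedulerContext { }

class BaseQueue {
  protected final SchedulerContext csContext;          // kept in the parent
  BaseQueue(SchedulerContext csContext) { this.csContext = csContext; }
}

class LeafQueueSketch extends BaseQueue {
  // private final SchedulerContext scheduler;         // removed duplicate alias

  LeafQueueSketch(SchedulerContext ctx) {
    super(ctx);
    // this.scheduler = ctx;                           // no longer needed
  }

  SchedulerContext context() {
    return csContext;                                  // use the inherited field
  }
}
```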
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646321=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646321 ] ASF GitHub Bot logged work on HADOOP-17864: --- Author: ASF GitHub Bot Created on: 03/Sep/21 14:11 Start Date: 03/Sep/21 14:11 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #3335: URL: https://github.com/apache/hadoop/pull/3335#discussion_r701923387 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java ## @@ -0,0 +1,367 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.DataInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.HttpURLConnection; +import java.net.URL; +import java.util.List; +import java.util.Map; + +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLSocketFactory; + +import org.codehaus.jackson.JsonFactory; +import org.codehaus.jackson.JsonParser; +import org.codehaus.jackson.JsonToken; +import org.codehaus.jackson.map.ObjectMapper; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory; +import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants; +import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; +import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema; + +public class AbfsHttpConnection extends AbfsHttpOperation { + private static final Logger LOG = LoggerFactory.getLogger(AbfsHttpOperation.class); + private HttpURLConnection connection; + private ListResultSchema listResultSchema = null; + + public AbfsHttpConnection(final URL url, + final String method, + List requestHeaders) throws IOException { +super(url, method); +init(method, requestHeaders); + } + + /** + * Initializes a new HTTP request and opens the connection. + * + * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE). + * @param requestHeaders The HTTP request headers.READ_TIMEOUT + * + * @throws IOException if an error occurs. 
+ */ + public void init(final String method, List requestHeaders) + throws IOException { +this.connection = openConnection(); +if (this.connection instanceof HttpsURLConnection) { + HttpsURLConnection secureConn = (HttpsURLConnection) this.connection; + SSLSocketFactory sslSocketFactory = DelegatingSSLSocketFactory.getDefaultFactory(); + if (sslSocketFactory != null) { +secureConn.setSSLSocketFactory(sslSocketFactory); + } +} + +this.connection.setConnectTimeout(getConnectTimeout()); +this.connection.setReadTimeout(getReadTimeout()); + +this.connection.setRequestMethod(method); + +for (AbfsHttpHeader header : requestHeaders) { + this.connection.setRequestProperty(header.getName(), header.getValue()); +} + } + + public HttpURLConnection getConnection() { +return connection; + } + + public ListResultSchema getListResultSchema() { +return listResultSchema; + } + + public String getResponseHeader(String httpHeader) { +return connection.getHeaderField(httpHeader); + } + + public void setHeader(String header, String value) { +this.getConnection().setRequestProperty(header, value); + } + + public Map> getRequestHeaders() { +return getConnection().getRequestProperties(); + } + + public String getRequestHeader(String header) { +return getConnection().getRequestProperty(header); + } + + public String getClientRequestId() { +return this.connection +.getRequestProperty(HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID); + } + /** + * Sends the HTTP request. Note that HttpUrlConnection requires that an + * empty buffer be sent in order to set the "Content-Length: 0" header, which + * is required by our endpoint. + * + * @param buffer the request entity body. + * @param offset an offset into the buffer where the data
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646320=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646320 ] ASF GitHub Bot logged work on HADOOP-17864: --- Author: ASF GitHub Bot Created on: 03/Sep/21 14:08 Start Date: 03/Sep/21 14:08 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #3335: URL: https://github.com/apache/hadoop/pull/3335#discussion_r701921089 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java ## @@ -0,0 +1,367 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.DataInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.HttpURLConnection; +import java.net.URL; +import java.util.List; +import java.util.Map; + +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLSocketFactory; + +import org.codehaus.jackson.JsonFactory; +import org.codehaus.jackson.JsonParser; +import org.codehaus.jackson.JsonToken; +import org.codehaus.jackson.map.ObjectMapper; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory; +import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants; +import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; +import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema; + +public class AbfsHttpConnection extends AbfsHttpOperation { + private static final Logger LOG = LoggerFactory.getLogger(AbfsHttpOperation.class); + private HttpURLConnection connection; + private ListResultSchema listResultSchema = null; + + public AbfsHttpConnection(final URL url, + final String method, + List requestHeaders) throws IOException { +super(url, method); +init(method, requestHeaders); + } + + /** + * Initializes a new HTTP request and opens the connection. + * + * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE). + * @param requestHeaders The HTTP request headers.READ_TIMEOUT + * + * @throws IOException if an error occurs. 
+ */ + public void init(final String method, List requestHeaders) + throws IOException { +this.connection = openConnection(); +if (this.connection instanceof HttpsURLConnection) { + HttpsURLConnection secureConn = (HttpsURLConnection) this.connection; + SSLSocketFactory sslSocketFactory = DelegatingSSLSocketFactory.getDefaultFactory(); + if (sslSocketFactory != null) { +secureConn.setSSLSocketFactory(sslSocketFactory); + } +} + +this.connection.setConnectTimeout(getConnectTimeout()); +this.connection.setReadTimeout(getReadTimeout()); + +this.connection.setRequestMethod(method); + +for (AbfsHttpHeader header : requestHeaders) { + this.connection.setRequestProperty(header.getName(), header.getValue()); +} + } + + public HttpURLConnection getConnection() { +return connection; + } + + public ListResultSchema getListResultSchema() { +return listResultSchema; + } + + public String getResponseHeader(String httpHeader) { +return connection.getHeaderField(httpHeader); + } + + public void setHeader(String header, String value) { +this.getConnection().setRequestProperty(header, value); + } + + public Map> getRequestHeaders() { +return getConnection().getRequestProperties(); + } + + public String getRequestHeader(String header) { +return getConnection().getRequestProperty(header); + } + + public String getClientRequestId() { +return this.connection +.getRequestProperty(HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID); + } + /** + * Sends the HTTP request. Note that HttpUrlConnection requires that an + * empty buffer be sent in order to set the "Content-Length: 0" header, which + * is required by our endpoint. + * + * @param buffer the request entity body. + * @param offset an offset into the buffer where the data
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646318=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646318 ] ASF GitHub Bot logged work on HADOOP-17864: --- Author: ASF GitHub Bot Created on: 03/Sep/21 14:07 Start Date: 03/Sep/21 14:07 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #3335: URL: https://github.com/apache/hadoop/pull/3335#discussion_r701920531 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java ## @@ -0,0 +1,367 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.DataInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.HttpURLConnection; +import java.net.URL; +import java.util.List; +import java.util.Map; + +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLSocketFactory; + +import org.codehaus.jackson.JsonFactory; +import org.codehaus.jackson.JsonParser; +import org.codehaus.jackson.JsonToken; +import org.codehaus.jackson.map.ObjectMapper; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory; +import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants; +import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; +import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema; + +public class AbfsHttpConnection extends AbfsHttpOperation { + private static final Logger LOG = LoggerFactory.getLogger(AbfsHttpOperation.class); + private HttpURLConnection connection; + private ListResultSchema listResultSchema = null; + + public AbfsHttpConnection(final URL url, + final String method, + List requestHeaders) throws IOException { +super(url, method); +init(method, requestHeaders); + } + + /** + * Initializes a new HTTP request and opens the connection. + * + * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE). + * @param requestHeaders The HTTP request headers.READ_TIMEOUT + * + * @throws IOException if an error occurs. 
+ */ + public void init(final String method, List requestHeaders) + throws IOException { +this.connection = openConnection(); +if (this.connection instanceof HttpsURLConnection) { + HttpsURLConnection secureConn = (HttpsURLConnection) this.connection; + SSLSocketFactory sslSocketFactory = DelegatingSSLSocketFactory.getDefaultFactory(); + if (sslSocketFactory != null) { +secureConn.setSSLSocketFactory(sslSocketFactory); + } +} + +this.connection.setConnectTimeout(getConnectTimeout()); +this.connection.setReadTimeout(getReadTimeout()); + +this.connection.setRequestMethod(method); + +for (AbfsHttpHeader header : requestHeaders) { + this.connection.setRequestProperty(header.getName(), header.getValue()); +} + } + + public HttpURLConnection getConnection() { +return connection; + } + + public ListResultSchema getListResultSchema() { +return listResultSchema; + } + + public String getResponseHeader(String httpHeader) { +return connection.getHeaderField(httpHeader); + } + + public void setHeader(String header, String value) { +this.getConnection().setRequestProperty(header, value); + } + + public Map> getRequestHeaders() { +return getConnection().getRequestProperties(); + } + + public String getRequestHeader(String header) { +return getConnection().getRequestProperty(header); + } + + public String getClientRequestId() { +return this.connection +.getRequestProperty(HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID); + } + /** + * Sends the HTTP request. Note that HttpUrlConnection requires that an + * empty buffer be sent in order to set the "Content-Length: 0" header, which + * is required by our endpoint. + * + * @param buffer the request entity body. + * @param offset an offset into the buffer where the data
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646315=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646315 ] ASF GitHub Bot logged work on HADOOP-17864: --- Author: ASF GitHub Bot Created on: 03/Sep/21 14:03 Start Date: 03/Sep/21 14:03 Worklog Time Spent: 10m Work Description: snvijaya commented on a change in pull request #3335: URL: https://github.com/apache/hadoop/pull/3335#discussion_r701917138 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java ## @@ -0,0 +1,367 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.DataInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.net.HttpURLConnection; +import java.net.URL; +import java.util.List; +import java.util.Map; + +import javax.net.ssl.HttpsURLConnection; +import javax.net.ssl.SSLSocketFactory; + +import org.codehaus.jackson.JsonFactory; +import org.codehaus.jackson.JsonParser; +import org.codehaus.jackson.JsonToken; +import org.codehaus.jackson.map.ObjectMapper; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory; +import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants; +import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations; +import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema; + +public class AbfsHttpConnection extends AbfsHttpOperation { + private static final Logger LOG = LoggerFactory.getLogger(AbfsHttpOperation.class); + private HttpURLConnection connection; + private ListResultSchema listResultSchema = null; + + public AbfsHttpConnection(final URL url, + final String method, + List requestHeaders) throws IOException { +super(url, method); +init(method, requestHeaders); + } + + /** + * Initializes a new HTTP request and opens the connection. + * + * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE). + * @param requestHeaders The HTTP request headers.READ_TIMEOUT + * + * @throws IOException if an error occurs. 
+ */ + public void init(final String method, List requestHeaders) + throws IOException { +this.connection = openConnection(); +if (this.connection instanceof HttpsURLConnection) { + HttpsURLConnection secureConn = (HttpsURLConnection) this.connection; + SSLSocketFactory sslSocketFactory = DelegatingSSLSocketFactory.getDefaultFactory(); + if (sslSocketFactory != null) { +secureConn.setSSLSocketFactory(sslSocketFactory); + } +} + +this.connection.setConnectTimeout(getConnectTimeout()); +this.connection.setReadTimeout(getReadTimeout()); + +this.connection.setRequestMethod(method); + +for (AbfsHttpHeader header : requestHeaders) { + this.connection.setRequestProperty(header.getName(), header.getValue()); +} + } + + public HttpURLConnection getConnection() { +return connection; + } + + public ListResultSchema getListResultSchema() { +return listResultSchema; + } + + public String getResponseHeader(String httpHeader) { +return connection.getHeaderField(httpHeader); + } + + public void setHeader(String header, String value) { +this.getConnection().setRequestProperty(header, value); + } + + public Map> getRequestHeaders() { +return getConnection().getRequestProperties(); + } + + public String getRequestHeader(String header) { +return getConnection().getRequestProperty(header); + } + + public String getClientRequestId() { +return this.connection +.getRequestProperty(HttpHeaderConfigurations.X_MS_CLIENT_REQUEST_ID); + } + /** + * Sends the HTTP request. Note that HttpUrlConnection requires that an + * empty buffer be sent in order to set the "Content-Length: 0" header, which + * is required by our endpoint. + * + * @param buffer the request entity body. + * @param offset an offset into the buffer where the data
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646314=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646314 ]

ASF GitHub Bot logged work on HADOOP-17864:
---
Author: ASF GitHub Bot
Created on: 03/Sep/21 14:01
Start Date: 03/Sep/21 14:01
Worklog Time Spent: 10m
Work Description: snvijaya commented on a change in pull request #3335:
URL: https://github.com/apache/hadoop/pull/3335#discussion_r701916259

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java
##
@@ -0,0 +1,367 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import javax.net.ssl.HttpsURLConnection;
+import javax.net.ssl.SSLSocketFactory;
+
+import org.codehaus.jackson.JsonFactory;
+import org.codehaus.jackson.JsonParser;
+import org.codehaus.jackson.JsonToken;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.security.ssl.DelegatingSSLSocketFactory;
+import org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants;
+import org.apache.hadoop.fs.azurebfs.constants.HttpHeaderConfigurations;
+import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
+
+public class AbfsHttpConnection extends AbfsHttpOperation {
+  private static final Logger LOG = LoggerFactory.getLogger(AbfsHttpOperation.class);
+  private HttpURLConnection connection;
+  private ListResultSchema listResultSchema = null;
+
+  public AbfsHttpConnection(final URL url,
+      final String method,
+      List<AbfsHttpHeader> requestHeaders) throws IOException {
+    super(url, method);
+    init(method, requestHeaders);
+  }
+
+  /**
+   * Initializes a new HTTP request and opens the connection.
+   *
+   * @param method The HTTP method (PUT, PATCH, POST, GET, HEAD, or DELETE).
+   * @param requestHeaders The HTTP request headers.READ_TIMEOUT
+   *
+   * @throws IOException if an error occurs.
+   */
+  public void init(final String method, List<AbfsHttpHeader> requestHeaders)
+      throws IOException {
+    this.connection = openConnection();

Review comment:
       Done

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 646314)
Time Spent: 1h (was: 50m)

> ABFS: Fork AbfsHttpOperation to add alternate connection
> 
> 
> Key: HADOOP-17864
> URL: https://issues.apache.org/jira/browse/HADOOP-17864
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
> 
> This Jira is to facilitate upcoming work as part of adding an alternate
> connection :
> [HADOOP-17853] ABFS: Enable optional store connectivity over azure specific
> protocol for data egress - ASF JIRA (apache.org)
> The scope of the change is to make AbfsHttpOperation as abstract class and
> create a child class AbfsHttpConnection. Future connection types will be
> added as child of AbfsHttpOperation.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
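For readers following this thread, a minimal sketch of the class split the issue above describes. This is an illustrative assumption of the shape of the refactor, not the committed patch; member names and method signatures below are made up for the sketch.

{code:java}
// Hypothetical sketch: AbfsHttpOperation becomes an abstract parent holding what
// is common to every transport, and AbfsHttpConnection is the
// java.net.HttpURLConnection-backed child. All names here are illustrative.
import java.io.IOException;
import java.net.URL;

abstract class AbfsHttpOperation {
  private final URL url;
  private final String method;

  protected AbfsHttpOperation(final URL url, final String method) {
    this.url = url;
    this.method = method;
  }

  URL getUrl() {
    return url;
  }

  String getMethod() {
    return method;
  }

  // Each connection type supplies its own transport-specific send path.
  abstract void sendRequest(byte[] buffer, int offset, int length) throws IOException;
}

// The HttpURLConnection-backed child from the quoted diff; a future alternate
// transport would simply be another subclass of AbfsHttpOperation.
class AbfsHttpConnection extends AbfsHttpOperation {
  AbfsHttpConnection(final URL url, final String method) {
    super(url, method);
  }

  @Override
  void sendRequest(byte[] buffer, int offset, int length) throws IOException {
    // Open a java.net.HttpURLConnection and write the buffer here; omitted in
    // this sketch.
  }
}
{code}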
[jira] [Work logged] (HADOOP-17864) ABFS: Fork AbfsHttpOperation to add alternate connection
[ https://issues.apache.org/jira/browse/HADOOP-17864?focusedWorklogId=646312=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646312 ]

ASF GitHub Bot logged work on HADOOP-17864:
---
Author: ASF GitHub Bot
Created on: 03/Sep/21 13:57
Start Date: 03/Sep/21 13:57
Worklog Time Spent: 10m
Work Description: snvijaya commented on a change in pull request #3335:
URL: https://github.com/apache/hadoop/pull/3335#discussion_r701913006

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpConnection.java
##
@@ -0,0 +1,367 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.DataInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.List;
+import java.util.Map;
+
+import javax.net.ssl.HttpsURLConnection;
+import javax.net.ssl.SSLSocketFactory;
+
+import org.codehaus.jackson.JsonFactory;

Review comment:
       ABFS Driver dependency on com.fasterxml.jackson.core was replaced with org.codehaus.jackson in [HADOOP-15659](https://issues.apache.org/jira/browse/HADOOP-15659). @DadanielZ , Could you please help with the reason for this ?

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 646312)
Time Spent: 50m (was: 40m)

> ABFS: Fork AbfsHttpOperation to add alternate connection
> 
> 
> Key: HADOOP-17864
> URL: https://issues.apache.org/jira/browse/HADOOP-17864
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
> 
> This Jira is to facilitate upcoming work as part of adding an alternate
> connection :
> [HADOOP-17853] ABFS: Enable optional store connectivity over azure specific
> protocol for data egress - ASF JIRA (apache.org)
> The scope of the change is to make AbfsHttpOperation as abstract class and
> create a child class AbfsHttpConnection. Future connection types will be
> added as child of AbfsHttpOperation.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
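For context on the dependency question above: the two artifacts are Jackson 1.x (org.codehaus.jackson) and Jackson 2.x (com.fasterxml.jackson), which expose very similar APIs. A minimal, hedged illustration of the 1.x usage pattern implied by the quoted imports follows; the payload, field name, and helper class are made up for the sketch and are not the actual ABFS parsing code.

{code:java}
// Jackson 1.x (org.codehaus.jackson), as imported in the quoted diff. The 2.x
// equivalent lives under com.fasterxml.jackson.*; only the package coordinates
// differ for simple parsing like this.
import java.io.IOException;
import java.util.Map;

import org.codehaus.jackson.map.ObjectMapper;

class Jackson1Sketch {
  // Parses a JSON string into a Map and returns one field; field name and
  // class name are illustrative only.
  static Object fieldFromJson(String json, String field) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    Map<?, ?> parsed = mapper.readValue(json, Map.class);
    return parsed.get(field);
  }
}
{code}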
[GitHub] [hadoop] hadoop-yetus commented on pull request #3374: HDFS-16091. WebHDFS should support getSnapshotDiffReportListing.
hadoop-yetus commented on pull request #3374: URL: https://github.com/apache/hadoop/pull/3374#issuecomment-912558267 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 17m 39s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 4s | | trunk passed | | +1 :green_heart: | compile | 5m 20s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 52s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 15s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 53s | | trunk passed | | +1 :green_heart: | javadoc | 2m 12s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 54s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 7m 3s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 53s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 37s | | the patch passed | | +1 :green_heart: | compile | 5m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 5m 15s | [/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 652 unchanged - 1 fixed = 655 total (was 653) | | +1 :green_heart: | compile | 4m 45s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 4m 45s | [/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 631 unchanged - 1 fixed = 634 total (was 632) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 10s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 2 new + 258 unchanged - 0 fixed = 260 total (was 258) | | +1 :green_heart: | mvnsite | 2m 38s | | the patch passed | | +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 35s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 7m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 15s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 371m 53s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 32m 4s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. | | | | 539m 19s | | | | Reason | Tests | |---:|:--| | Failed junit tests
[jira] [Work logged] (HADOOP-17872) ABFS: Refactor read flow to include ReadRequestParameter
[ https://issues.apache.org/jira/browse/HADOOP-17872?focusedWorklogId=646310=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646310 ]

ASF GitHub Bot logged work on HADOOP-17872:
---
Author: ASF GitHub Bot
Created on: 03/Sep/21 13:54
Start Date: 03/Sep/21 13:54
Worklog Time Spent: 10m
Work Description: snvijaya opened a new pull request #3381:
URL: https://github.com/apache/hadoop/pull/3381

This commit aims to refactor the Http request handling code.

ABFS driver tests were run with HNS and non-HNS storage accounts over combinations of authentication types - OAuth and SharedKey. Test results will be updated in the conversation tab with each PR iteration.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 646310)
Time Spent: 50m (was: 40m)

> ABFS: Refactor read flow to include ReadRequestParameter
> 
> 
> Key: HADOOP-17872
> URL: https://issues.apache.org/jira/browse/HADOOP-17872
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Sneha Vijayarajan
> Assignee: Sneha Vijayarajan
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
> 
> This Jira is to facilitate upcoming work as part of adding an alternate
> connection :
> HADOOP-17853 ABFS: Enable optional store connectivity over azure specific
> protocol for data egress - ASF JIRA (apache.org)
> The scope of the change is to introduce a ReadRequestParameter that will
> include the various inputs needed for the read request to AbfsClient class.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
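A minimal sketch of the parameter-object approach the Jira above describes: bundling the inputs of a read call into one object so the client-side read method takes a single argument instead of a long positional list. The class and field names below are assumptions for illustration, not the HADOOP-17872 patch.

{code:java}
// Hypothetical parameter object for a read request; all names are illustrative.
class ReadRequestParameters {
  private final long position;      // offset in the remote file
  private final byte[] buffer;      // destination buffer
  private final int bufferOffset;   // where to start writing in the buffer
  private final int length;         // number of bytes requested
  private final String eTag;        // for conditional reads

  ReadRequestParameters(long position, byte[] buffer, int bufferOffset,
      int length, String eTag) {
    this.position = position;
    this.buffer = buffer;
    this.bufferOffset = bufferOffset;
    this.length = length;
    this.eTag = eTag;
  }

  long getPosition() {
    return position;
  }

  byte[] getBuffer() {
    return buffer;
  }

  int getBufferOffset() {
    return bufferOffset;
  }

  int getLength() {
    return length;
  }

  String getETag() {
    return eTag;
  }
}
{code}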
[jira] [Created] (HADOOP-17890) ABFS: Refactor HTTP request handling code
Sneha Vijayarajan created HADOOP-17890: -- Summary: ABFS: Refactor HTTP request handling code Key: HADOOP-17890 URL: https://issues.apache.org/jira/browse/HADOOP-17890 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Affects Versions: 3.4.0 Reporter: Sneha Vijayarajan Assignee: Sneha Vijayarajan Aims at Http request handling code refactoring. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #3362: HDFS-16199. Resolve log placeholders in NamenodeBeanMetrics
virajjasani commented on pull request #3362: URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912527131 > It is debug log, Can we not just pass e as whole, rather than just printing the message, The trace might be more helpful while debugging? Nothing wrong with that, the only reason why I kept it `e.getMessage()` is because it was already in place, but yes since the error message is not getting printed anyways, let's keep entire stacktrace, sounds good. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
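The distinction being discussed above, logging only the message versus passing the whole throwable, in a minimal SLF4J sketch. The logger name, message text, and method are illustrative, not the NamenodeBeanMetrics code.

{code:java}
// SLF4J: passing the exception as the last argument preserves the stack trace,
// while formatting only getMessage() drops it.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(LoggingSketch.class);

  void report(Exception e) {
    // Message only: no stack trace appears in the log.
    LOG.debug("Failed to fetch metrics: {}", e.getMessage());

    // Whole throwable: SLF4J appends the full stack trace at debug level.
    LOG.debug("Failed to fetch metrics", e);
  }
}
{code}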
[GitHub] [hadoop] hadoop-yetus commented on pull request #3380: HDFS-16211.Complete some descriptions related to AuthToken.
hadoop-yetus commented on pull request #3380: URL: https://github.com/apache/hadoop/pull/3380#issuecomment-912521720 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 53s | | trunk passed | | +1 :green_heart: | compile | 24m 53s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 23m 21s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 44s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 56s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 25m 33s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 25m 33s | | the patch passed | | +1 :green_heart: | compile | 22m 47s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 22m 47s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 35s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 41s | | the patch passed | | +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 8s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 57s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 44s | | hadoop-auth in the patch passed. | | +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. 
| | | | 186m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3380/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3380 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 4a5df42b4704 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e464f7136dbfa6bc1e3c49ff053b1493e07c3cb8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3380/1/testReport/ | | Max. process+thread count | 518 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3380/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries
[GitHub] [hadoop] caneGuy closed pull request #936: YARN-9605.Add ZkConfiguredFailoverProxyProvider for RM HA
caneGuy closed pull request #936: URL: https://github.com/apache/hadoop/pull/936 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] caneGuy closed pull request #1713: YARN-9973: Make history file cleaner robust
caneGuy closed pull request #1713: URL: https://github.com/apache/hadoop/pull/1713 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] caneGuy closed pull request #1719: YARN-9978: Support show submit acl and admin acl for leaf queue
caneGuy closed pull request #1719: URL: https://github.com/apache/hadoop/pull/1719 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] caneGuy closed pull request #1720: YARN-9709: Support show applications when we submit to capacity scheduler with full queue path
caneGuy closed pull request #1720: URL: https://github.com/apache/hadoop/pull/1720 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3379: HDFS-16210. Add the option of refreshCallQueue to RouterAdmin
hadoop-yetus commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-912460294 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | -1 :x: | mvninstall | 37m 42s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | +1 :green_heart: | compile | 0m 51s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 54s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 49s | | the patch passed | | +1 :green_heart: | compile | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 47s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 40s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 24s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) | | +1 :green_heart: | mvnsite | 0m 44s | | the patch passed | | +1 :green_heart: | javadoc | 0m 39s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 47s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 58s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 23m 46s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 116m 7s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3379 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 0f01098e0c26 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 75b2b043a1c9b67ef2e9466a5646252a9d6b6dcb | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/testReport/ | | Max. process+thread count | 2631 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Work logged] (HADOOP-17888) The error of Constant annotation in AzureNativeFileSystemStore.java
[ https://issues.apache.org/jira/browse/HADOOP-17888?focusedWorklogId=646257=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646257 ] ASF GitHub Bot logged work on HADOOP-17888: --- Author: ASF GitHub Bot Created on: 03/Sep/21 11:11 Start Date: 03/Sep/21 11:11 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #3372: URL: https://github.com/apache/hadoop/pull/3372#issuecomment-912458317 aah, this one is a comment change only; if @guoxin12 doesn't do a run over the w/e then I'll do the merge. (This is why I like {@value} in javadocs... it removes the need to maintain the comments.) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646257) Time Spent: 40m (was: 0.5h) > The error of Constant annotation in AzureNativeFileSystemStore.java > > > Key: HADOOP-17888 > URL: https://issues.apache.org/jira/browse/HADOOP-17888 > Project: Hadoop Common > Issue Type: Improvement >Reporter: guoxin >Priority: Minor > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #3372: HADOOP-17888. The error of Constant annotation in AzureNativeFileSystem…
steveloughran commented on pull request #3372: URL: https://github.com/apache/hadoop/pull/3372#issuecomment-912458317 aah, this one is a comment change only; if @guoxin12 doesn't do a run over the w/e then I'll do the merge. (This is why I like {@value} in javadocs... it removes the need to maintain the comments.) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
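For context, a small sketch of the `{@value}` pattern being referenced, with an illustrative constant rather than the actual AzureNativeFileSystemStore field: the generated javadoc substitutes the constant's current value automatically, so there is no hand-written comment that can drift out of sync with the code.

```java
public final class ExampleConfigKeys {
  /**
   * Buffer size used when copying blocks, in bytes.
   * Current value: {@value}.
   */
  public static final int COPY_BUFFER_SIZE = 4 * 1024 * 1024;

  private ExampleConfigKeys() {
    // constants only
  }
}
```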
[GitHub] [hadoop] hadoop-yetus commented on pull request #3374: HDFS-16091. WebHDFS should support getSnapshotDiffReportListing.
hadoop-yetus commented on pull request #3374: URL: https://github.com/apache/hadoop/pull/3374#issuecomment-912458277 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 45s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 1s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 39s | | trunk passed | | +1 :green_heart: | compile | 5m 41s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 5m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 19s | | trunk passed | | +1 :green_heart: | javadoc | 2m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 14s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 7m 40s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 9s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 51s | | the patch passed | | +1 :green_heart: | compile | 5m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 5m 45s | [/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 652 unchanged - 1 fixed = 655 total (was 653) | | +1 :green_heart: | compile | 4m 59s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | javac | 4m 59s | [/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 631 unchanged - 1 fixed = 634 total (was 632) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 9s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 2 new + 230 unchanged - 0 fixed = 232 total (was 230) | | +1 :green_heart: | mvnsite | 2m 59s | | the patch passed | | +1 :green_heart: | javadoc | 2m 6s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 54s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 7m 48s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 15s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 30s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 243m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 23m 43s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. | | | | 399m 59s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base:
[GitHub] [hadoop] tomscut commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
tomscut commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912456375 > As @ayushtkn said, we faced the same problem; [HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820) added the ability to disable the feature, so you can also set it to false. > If you change the default value, it's an incompatible change, especially for upgrades that use this feature. That doesn't seem so good. Thanks @ferhui for your comments. Maybe we can add a release note for this change. New users may not know this feature (Centralized Cache Management) exists, yet it already runs quietly in the background; I don't think that is a very elegant default. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ferhui commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
ferhui commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912438441 As @ayushtkn said, we faced the same problem; HDFS-13820 added the ability to disable the feature, so you can also set it to false. If you change the default value, it's an incompatible change, especially for upgrades that use this feature. That doesn't seem so good. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
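For operators who want to opt out today without changing the default, a minimal sketch of disabling the feature explicitly; the property name `dfs.namenode.caching.enabled` comes from HDFS-13820, and the surrounding code is illustrative only (normally this would simply be set in hdfs-site.xml on the NameNode).

```java
import org.apache.hadoop.conf.Configuration;

public class DisableNamenodeCaching {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Same effect as setting the property to false in hdfs-site.xml on the NameNode,
    // which turns off Centralized Cache Management.
    conf.setBoolean("dfs.namenode.caching.enabled", false);
    System.out.println("caching enabled: "
        + conf.getBoolean("dfs.namenode.caching.enabled", true));
  }
}
```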
[GitHub] [hadoop] hadoop-yetus commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
hadoop-yetus commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912427687 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 59s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 27s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 31s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 9s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 249m 5s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3378/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 345m 57s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.TestEnhancedByteBufferAccess | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestPmemCacheRecovery | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestCacheByPmemMappableBlockLoader | | | hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3378/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3378 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint | | uname | Linux 8f8fcb9e3d81 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9a6e363f49809c52e30667ddf6a97069c8c59cf2 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test
[GitHub] [hadoop] tasanuma commented on pull request #3354: HDFS-16194. Simplify the code with DatanodeID#getXferAddrWithHostname
tasanuma commented on pull request #3354: URL: https://github.com/apache/hadoop/pull/3354#issuecomment-912420849 @Hexiaoqiao Could you review it again, and merge it? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu opened a new pull request #3380: HDFS-16211.Complete some descriptions related to AuthToken.
jianghuazhu opened a new pull request #3380: URL: https://github.com/apache/hadoop/pull/3380 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu commented on pull request #3368: HDFS-16204.Improve FSDirEncryptionZoneOp related parameter comments.
jianghuazhu commented on pull request #3368: URL: https://github.com/apache/hadoop/pull/3368#issuecomment-912411935 Thanks @ayushtkn for the comment. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #3365: YARN-10931.Remove some invalid characters in NMClientAsyncImpl#ContainerState.
jianghuazhu commented on a change in pull request #3365: URL: https://github.com/apache/hadoop/pull/3365#discussion_r701757347 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java ## @@ -433,7 +433,7 @@ public void getContainerStatusAsync(ContainerId containerId, NodeId nodeId) { } protected enum ContainerState { -PREP, FAILED, RUNNING, DONE, +PREP, FAILED, RUNNING, DONE Review comment: Thanks @ayushtkn for the comment. I found that in most places in the Hadoop project, when an enum is defined, the last constant usually does not have a trailing comma, so I think the same style should be maintained here. That is my reasoning; happy to discuss further. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut commented on pull request #3354: HDFS-16194. Simplify the code with DatanodeID#getXferAddrWithHostname
tomscut commented on pull request #3354: URL: https://github.com/apache/hadoop/pull/3354#issuecomment-912408663 Those failed unit tests are unrelated to the change. @tasanuma Please take a look. Thank you. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3366: HDFS-16203. Discover datanodes with unbalanced block pool usage by th…
hadoop-yetus commented on pull request #3366: URL: https://github.com/apache/hadoop/pull/3366#issuecomment-912402783 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | jshint | 0m 0s | | jshint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 57s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 27s | | trunk passed | | +1 :green_heart: | compile | 4m 54s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 34s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 21s | | trunk passed | | +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 35s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 17s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 3s | | the patch passed | | +1 :green_heart: | compile | 4m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 4m 46s | | the patch passed | | +1 :green_heart: | compile | 4m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 5s | | hadoop-hdfs-project: The patch generated 0 new + 113 unchanged - 9 fixed = 113 total (was 122) | | +1 :green_heart: | mvnsite | 2m 3s | | the patch passed | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 55s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 20s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 231m 52s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 346m 8s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3366 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell jshint | | uname | Linux b2667f9ca18e 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 58f0a2e2f44bacf4b98d236228b185df34501ae4 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/2/testReport/ | | Max. process+thread count | 3200 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3354: HDFS-16194. Simplify the code with DatanodeID#getXferAddrWithHostname
hadoop-yetus commented on pull request #3354: URL: https://github.com/apache/hadoop/pull/3354#issuecomment-912393183 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 35s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 0s | | trunk passed | | +1 :green_heart: | compile | 5m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 12s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 54s | | trunk passed | | +1 :green_heart: | javadoc | 2m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 56s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 7m 6s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 53s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 35s | | the patch passed | | +1 :green_heart: | compile | 5m 9s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 5m 9s | | the patch passed | | +1 :green_heart: | compile | 4m 47s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 47s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 7s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 38s | | the patch passed | | +1 :green_heart: | javadoc | 1m 53s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 36s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 7m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 15s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 333m 37s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3354/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 31m 33s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3354/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 55s | | The patch does not generate ASF License warnings. 
| | | | 495m 30s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.mover.TestMover | | | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3354/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3354 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 81a761efa091 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 90e060c2d467bc9f500c6f0dc5ff17e90e16edc2 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
[GitHub] [hadoop] symious commented on pull request #3379: HDFS-16210. Add the option of refreshCallQueue to RouterAdmin
symious commented on pull request #3379: URL: https://github.com/apache/hadoop/pull/3379#issuecomment-912390205 @goiri Could you help to review this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] symious opened a new pull request #3379: HDFS-16210. Add the option of refreshCallQueue to RouterAdmin
symious opened a new pull request #3379: URL: https://github.com/apache/hadoop/pull/3379 ### Description of PR We enabled FairCallQueue on the RouterRpcServer, but the Router cannot refresh its call queue the way the NameNode does. This ticket enables refreshCallQueue for the Router so that we don't have to restart Routers when updating FairCallQueue configurations. The option does not propagate refreshCallQueue to the NameNodes; it only refreshes the call queue of the Router itself. ### How was this patch tested? Unit test ### For code changes: - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
tomscut commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912349488 > [HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820) added this configuration to disable the feature, but it was still set to true by default, presumably for compatibility reasons. > Folks using the cache feature would be impacted by this change, right? They would now have to enable it explicitly. There was a proposal on [HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820) > > ``` > Please implement a way to disable the CacheReplicationMonitor class if there are no paths specified. Adding the first cached path to the NameNode should kick off the CacheReplicationMonitor and when the last one is deleted, the CacheReplicationMonitor should be disabled again. > ``` > > Is something like this possible? Thanks @ayushtkn for your comments. I have also seen [HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820), but that behavior (automatically enabling or disabling) is not currently implemented. New users may not know this feature (Centralized Cache Management) exists, yet it already runs quietly in the background and incurs performance overhead. IMO, if we need to use this feature, it makes sense to turn it on explicitly and specify the paths. What do you think? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on a change in pull request #3365: YARN-10931.Remove some invalid characters in NMClientAsyncImpl#ContainerState.
ayushtkn commented on a change in pull request #3365: URL: https://github.com/apache/hadoop/pull/3365#discussion_r701687323 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java ## @@ -433,7 +433,7 @@ public void getContainerStatusAsync(ContainerId containerId, NodeId nodeId) { } protected enum ContainerState { -PREP, FAILED, RUNNING, DONE, +PREP, FAILED, RUNNING, DONE Review comment: out of curiosity, what harm is this comma causing? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
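For reference on the question above: a trailing comma after the last enum constant is legal Java and compiles the same way with or without it, so the change is purely stylistic. A minimal illustration with a hypothetical demo class, not the NMClientAsyncImpl code:

```java
public class EnumTrailingCommaDemo {
  // A trailing comma after the last constant is allowed and has no effect.
  enum WithTrailingComma { PREP, FAILED, RUNNING, DONE, }
  enum WithoutTrailingComma { PREP, FAILED, RUNNING, DONE }

  public static void main(String[] args) {
    System.out.println(java.util.Arrays.toString(WithTrailingComma.values()));
    System.out.println(java.util.Arrays.toString(WithoutTrailingComma.values()));
  }
}
```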
[GitHub] [hadoop] ayushtkn commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
ayushtkn commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912338368 HDFS-13820 added this configuration to disable the feature, but it was still set to true by default, presumably for compatibility reasons. Folks using the cache feature would be impacted by this change, right? They would now have to enable it explicitly. There was a proposal on HDFS-13820 ``` Please implement a way to disable the CacheReplicationMonitor class if there are no paths specified. Adding the first cached path to the NameNode should kick off the CacheReplicationMonitor and when the last one is deleted, the CacheReplicationMonitor should be disabled again. ``` Is something like this possible? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #3362: HDFS-16199. Resolve log placeholders in NamenodeBeanMetrics
virajjasani commented on pull request #3362: URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912323906 FYI @aajisaka, Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17874) ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner
[ https://issues.apache.org/jira/browse/HADOOP-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409336#comment-17409336 ] Viraj Jasani commented on HADOOP-17874: --- Thank you [~aajisaka]!! > ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner > -- > > Key: HADOOP-17874 > URL: https://issues.apache.org/jira/browse/HADOOP-17874 > Project: Hadoop Common > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2, 3.2.4 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Even though we have explicit comments stating that we have thread-safe > replacement of terseExceptions and suppressedExceptions, in reality we don't > have it. As we can't guarantee only non-concurrent addition of Exceptions at > a time from any Server implementation, we should make this thread-safe. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
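As a rough sketch of the kind of change described above (not necessarily the approach taken in the actual patch): one way to make the additions thread-safe is copy-on-write, publishing a new immutable set through a volatile field so concurrent readers never observe a partially updated collection. The class and method names below are illustrative.

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative only: copy-on-write registration of terse exception names.
public class TerseExceptionsSketch {
  private volatile Set<String> terseExceptions = Collections.emptySet();

  // Writers build a new set and publish it atomically through the volatile field.
  public synchronized void addTerseExceptions(Class<?>... exceptionClasses) {
    Set<String> updated = new HashSet<>(terseExceptions);
    for (Class<?> clazz : exceptionClasses) {
      updated.add(clazz.toString());
    }
    terseExceptions = Collections.unmodifiableSet(updated);
  }

  // Readers see either the old set or the new one, never a partial update.
  public boolean isTerseLog(Class<?> exceptionClass) {
    return terseExceptions.contains(exceptionClass.toString());
  }
}
{code}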
[GitHub] [hadoop] hadoop-yetus commented on pull request #3375: HDFS-16207. Remove NN logs stack trace for non-existent xattr query
hadoop-yetus commented on pull request #3375: URL: https://github.com/apache/hadoop/pull/3375#issuecomment-912311956 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 45s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 54s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 10s | | trunk passed | | +1 :green_heart: | compile | 5m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 54s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 13s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 6m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 56s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 10s | | the patch passed | | +1 :green_heart: | compile | 5m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 5m 12s | | the patch passed | | +1 :green_heart: | compile | 4m 40s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 4m 40s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 7s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 14s | | the patch passed | | +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 55s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 6m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 24s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 483m 8s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3375/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 605m 49s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.TestLeaseRecovery | | | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract | | | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes | | | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3375/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3375 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 0d9ae6a717bb 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 092602a7d46cbfae1e017f5023d3b807c4c109cc | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions |
[GitHub] [hadoop] tomscut commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
tomscut commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912309108 Hi @ayushtkn, could you please also take a look? Thank you. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17889) OBSFileSystem should support Snapshot operations
[ https://issues.apache.org/jira/browse/HADOOP-17889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409313#comment-17409313 ] Ayush Saxena commented on HADOOP-17889: --- Moved to Hadoop-Common. Out of curiosity, how do you plan to do this at the client level? Or does the Object Store itself provide some functionality or support for snapshots? Otherwise it would be tough to implement such a thing at the FS layer. > OBSFileSystem should support Snapshot operations > > > Key: HADOOP-17889 > URL: https://issues.apache.org/jira/browse/HADOOP-17889 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Bhavik Patel >Priority: Major > > OBSFileSystem should support Snapshot operation like other files system. > CC: [~zhongjun] [~iwasakims] [~pbacsko] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-17889) OBSFileSystem should support Snapshot operations
[ https://issues.apache.org/jira/browse/HADOOP-17889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena moved HDFS-16030 to HADOOP-17889: -- Key: HADOOP-17889 (was: HDFS-16030) Project: Hadoop Common (was: Hadoop HDFS) > OBSFileSystem should support Snapshot operations > > > Key: HADOOP-17889 > URL: https://issues.apache.org/jira/browse/HADOOP-17889 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Bhavik Patel >Priority: Major > > OBSFileSystem should support Snapshot operation like other files system. > CC: [~zhongjun] [~iwasakims] [~pbacsko] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn merged pull request #3367: HDFS-16202. Use constants HdfsClientConfigKeys.Failover.PREFIX instead of "dfs.client.failover."
ayushtkn merged pull request #3367: URL: https://github.com/apache/hadoop/pull/3367 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17888) The error of Constant annotation in AzureNativeFileSystemStore.java
[ https://issues.apache.org/jira/browse/HADOOP-17888?focusedWorklogId=646188=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646188 ] ASF GitHub Bot logged work on HADOOP-17888: --- Author: ASF GitHub Bot Created on: 03/Sep/21 06:29 Start Date: 03/Sep/21 06:29 Worklog Time Spent: 10m Work Description: ayushtkn commented on pull request #3372: URL: https://github.com/apache/hadoop/pull/3372#issuecomment-912291913 Have moved this from HDFS to Hadoop-Common. Though the changes look quite trivial, as per the process you need to manually run the Azure tests and confirm the result & endpoint. https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-SubmittingpatchesagainstobjectstoressuchasAmazonS3,OpenStackSwiftandMicrosoftAzure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 646188) Time Spent: 0.5h (was: 20m) > The error of Constant annotation in AzureNativeFileSystemStore.java > > > Key: HADOOP-17888 > URL: https://issues.apache.org/jira/browse/HADOOP-17888 > Project: Hadoop Common > Issue Type: Improvement >Reporter: guoxin >Priority: Minor > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on pull request #3372: HADOOP-17888. The error of Constant annotation in AzureNativeFileSystem…
ayushtkn commented on pull request #3372: URL: https://github.com/apache/hadoop/pull/3372#issuecomment-912291913 Have moved this from HDFS to Hadoop-Common. Though the changes look quite trivial, as per the process you need to manually run the Azure tests and confirm the result & endpoint. https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-SubmittingpatchesagainstobjectstoressuchasAmazonS3,OpenStackSwiftandMicrosoftAzure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17888) The error of Constant annotation in AzureNativeFileSystemStore.java
[ https://issues.apache.org/jira/browse/HADOOP-17888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409296#comment-17409296 ] Ayush Saxena commented on HADOOP-17888: --- The change isn't in HDFS, moved to Hadoop-Common > The error of Constant annotation in AzureNativeFileSystemStore.java > > > Key: HADOOP-17888 > URL: https://issues.apache.org/jira/browse/HADOOP-17888 > Project: Hadoop Common > Issue Type: Improvement >Reporter: guoxin >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-17888) The error of Constant annotation in AzureNativeFileSystemStore.java
[ https://issues.apache.org/jira/browse/HADOOP-17888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena moved HDFS-16206 to HADOOP-17888: -- Key: HADOOP-17888 (was: HDFS-16206) Project: Hadoop Common (was: Hadoop HDFS) > The error of Constant annotation in AzureNativeFileSystemStore.java > > > Key: HADOOP-17888 > URL: https://issues.apache.org/jira/browse/HADOOP-17888 > Project: Hadoop Common > Issue Type: Improvement >Reporter: guoxin >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut commented on pull request #3378: HDFS-16209. Set dfs.namenode.caching.enabled to false as default
tomscut commented on pull request #3378: URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912284182 @tasanuma @jojochuang @Hexiaoqiao @ferhui Please help review the change. Thanks a lot. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org