[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=543407&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543407 ]

ASF GitHub Bot logged work on HADOOP-17475:
---
Author: ASF GitHub Bot
Created on: 28/Jan/21 06:52
Start Date: 28/Jan/21 06:52
Worklog Time Spent: 10m

Work Description: bilaharith commented on pull request #2548:
URL: https://github.com/apache/hadoop/pull/2548#issuecomment-768841035

Hi @steveloughran I have created a separate JIRA for IOStatistics collection and linked it to HADOOP-17475. We will pick it up afterwards.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 543407)
Time Spent: 4h 40m (was: 4.5h)

> Implement listStatusIterator
>
> Key: HADOOP-17475
> URL: https://issues.apache.org/jira/browse/HADOOP-17475
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Bilahari T H
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h 40m
> Remaining Estimate: 0h

--
This message was sent by Atlassian Jira (v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17502) IOStatistics collection for listStatusIterator()
Bilahari T H created HADOOP-17502:
-
Summary: IOStatistics collection for listStatusIterator()
Key: HADOOP-17502
URL: https://issues.apache.org/jira/browse/HADOOP-17502
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/azure
Affects Versions: 3.4.0
Reporter: Bilahari T H

Add IOStatistics collection for listStatusIterator
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=543405&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543405 ]

ASF GitHub Bot logged work on HADOOP-17475:
---
Author: ASF GitHub Bot
Created on: 28/Jan/21 06:46
Start Date: 28/Jan/21 06:46
Worklog Time Spent: 10m

Work Description: bilaharith removed a comment on pull request #2548:
URL: https://github.com/apache/hadoop/pull/2548#issuecomment-760367373

> rs propagate the IOStatisticsSource interface, so when the innermost iterator collects cost/count of list calls, the stats will be visible to and collectable

Done

Issue Time Tracking
---
Worklog Id: (was: 543405)
Time Spent: 4.5h (was: 4h 20m)

> Implement listStatusIterator
>
> Key: HADOOP-17475
> URL: https://issues.apache.org/jira/browse/HADOOP-17475
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Bilahari T H
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4.5h
> Remaining Estimate: 0h
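The pattern quoted in the review comment above — wrapper iterators keeping the innermost listing iterator's cost/count of list calls visible and collectable — can be sketched self-containedly. This is an illustrative stand-in, not the real Hadoop API: in Hadoop proper the mechanism is the IOStatisticsSource interface, and the class and method names below are hypothetical.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

/**
 * Sketch: a paged listing iterator that counts the (simulated) remote LIST
 * calls it issues, so callers can collect the statistic afterwards.
 * Illustrative only; not the Hadoop IOStatisticsSource API.
 */
class PagedListIterator implements Iterator<String> {
    private final Iterator<List<String>> pages;
    private Iterator<String> current;
    private long listCalls;          // statistic: remote LIST operations issued

    PagedListIterator(List<List<String>> pageData) {
        this.pages = pageData.iterator();
        this.current = java.util.Collections.emptyIterator();
    }

    @Override public boolean hasNext() {
        // Fetch the next page lazily; each fetch counts as one LIST call.
        while (!current.hasNext() && pages.hasNext()) {
            current = pages.next().iterator();
            listCalls++;
        }
        return current.hasNext();
    }

    @Override public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return current.next();
    }

    /** Expose the collected statistic, as an IOStatisticsSource would. */
    long listCallCount() { return listCalls; }
}
```

A caller that drains the iterator can then read the list-call count back, which is the "stats visible to and collectable" idea from the comment.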
[GitHub] [hadoop] crossfire commented on a change in pull request #2657: HDFS-15795. Fix returning wrong checksum when reconstruction was fail…
crossfire commented on a change in pull request #2657:
URL: https://github.com/apache/hadoop/pull/2657#discussion_r565850014

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockChecksumHelper.java

@@ -503,6 +503,7 @@ void compute() throws IOException { } } catch (IOException e) {

Review comment: It may be okay to just remove this here instead of rethrowing the exception, because it is also handled below: https://github.com/apache/hadoop/blob/f8769e0f4b917d9fda8ff7a9fddb4d755d246a1e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java#L324
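The rethrow-vs-swallow question in the review above can be illustrated with a self-contained sketch (hypothetical names; this is not the actual BlockChecksumHelper or DataXceiver code): if the caller already handles IOException, an inner catch block that rethrows still lets the failure reach the caller's handler, whereas swallowing it would let the computation fall through with a wrong result.

```java
import java.io.IOException;

/** Illustrative sketch of rethrowing so an outer handler sees the failure. */
final class ChecksumDemo {
    static String compute(boolean fail) throws IOException {
        try {
            if (fail) {
                throw new IOException("reconstruction failed");
            }
            return "checksum-ok";
        } catch (IOException e) {
            // Swallowing here would let compute() return a wrong/partial
            // checksum; rethrowing defers the failure to the caller.
            throw e;
        }
    }

    static String call(boolean fail) {
        try {
            return compute(fail);
        } catch (IOException e) {
            // Outer handler, playing the role the linked DataXceiver code plays.
            return "error:" + e.getMessage();
        }
    }
}
```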
[jira] [Work logged] (HADOOP-15710) ABFS checkException to map 403 to AccessDeniedException
[ https://issues.apache.org/jira/browse/HADOOP-15710?focusedWorklogId=543400&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543400 ]

ASF GitHub Bot logged work on HADOOP-15710:
---
Author: ASF GitHub Bot
Created on: 28/Jan/21 06:29
Start Date: 28/Jan/21 06:29
Worklog Time Spent: 10m

Work Description: mehakmeet commented on a change in pull request #2648:
URL: https://github.com/apache/hadoop/pull/2648#discussion_r565828388

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java

@@ -114,4 +118,19 @@ public void testWithDifferentCustomTokenFetchRetry(int numOfRetries) throws Exce
         + ") done, does not match with fs.azure.custom.token.fetch.retry.count configured ("
         + numOfRetries + ")", RetryTestTokenProvider.reTryCount == numOfRetries);
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testPermissionDenied() throws Throwable {
+    final AzureBlobFileSystem fs = getFileSystem();
+    assumeTrue(fs.getIsNamespaceEnabled());
+    Path dir = new Path("testPermissionDenied");
+    Path path = new Path(dir, "file");
+    ContractTestUtils.writeTextFile(fs, path, "some text", true);
+    // no permissions
+    fs.setPermission(path, new FsPermission((short) 0));
+    fs.setPermission(dir, new FsPermission((short) 0));
+    intercept(AccessDeniedException.class, () ->
+        fs.delete(path, false));
+    intercept(AccessDeniedException.class, () ->
+        ContractTestUtils.readUTF8(fs, path, -1));
+  }
+}

Review comment: Add a new line at the end of the file.

## File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java

@@ -114,4 +118,19 @@ public void testWithDifferentCustomTokenFetchRetry(int numOfRetries) throws Exce
         + ") done, does not match with fs.azure.custom.token.fetch.retry.count configured ("
         + numOfRetries + ")", RetryTestTokenProvider.reTryCount == numOfRetries);
   }
-}
\ No newline at end of file
+
+  @Test

Review comment: JavaDocs or describe() for a better understanding of the test.

Issue Time Tracking
---
Worklog Id: (was: 543400)
Time Spent: 50m (was: 40m)

> ABFS checkException to map 403 to AccessDeniedException
>
> Key: HADOOP-15710
> URL: https://issues.apache.org/jira/browse/HADOOP-15710
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: HADOOP-15407
> Reporter: Steve Loughran
> Assignee: Bilahari T H
> Priority: Blocker
> Labels: abfsactive, pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> When you can't auth to ABFS, you get a 403 exception back. This should be
> translated into an access denied exception for better clarity/handling.
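The test above leans on an intercept() helper to assert that a call fails with a specific exception type. A minimal self-contained stand-in for that pattern (in Hadoop it lives in LambdaTestUtils; the implementation below is illustrative, not the real one) looks like:

```java
import java.util.concurrent.Callable;

/** Minimal sketch of the intercept() assertion-helper pattern. */
final class Intercept {
    static <T, E extends Exception> E intercept(Class<E> clazz, Callable<T> call)
            throws Exception {
        try {
            call.call();
        } catch (Exception e) {
            if (clazz.isInstance(e)) {
                return clazz.cast(e);   // expected failure: return it for inspection
            }
            throw e;                    // unexpected exception type: propagate
        }
        throw new AssertionError("Expected " + clazz.getName() + " but none was thrown");
    }
}
```

Returning the caught exception lets the test make further assertions on its message or cause, which is why the pattern is preferred over a bare try/catch in each test.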
[GitHub] [hadoop] hadoop-yetus commented on pull request #2591: YARN-10561. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in YARN application catalog webapp
hadoop-yetus commented on pull request #2591:
URL: https://github.com/apache/hadoop/pull/2591#issuecomment-768821108

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 25m 58s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |

_ trunk Compile Tests _

| +0 :ok: | mvndep | 13m 46s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 24s | | trunk passed |
| +1 :green_heart: | compile | 22m 13s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 18m 48s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +1 :green_heart: | mvnsite | 1m 43s | | trunk passed |
| +1 :green_heart: | shadedclient | 96m 12s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |

_ Patch Compile Tests _

| +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 4s | | the patch passed |
| +1 :green_heart: | compile | 21m 32s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 21m 32s | | the patch passed |
| +1 :green_heart: | compile | 18m 47s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +1 :green_heart: | javac | 18m 47s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 42s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 15m 54s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |

_ Other Tests _

| +1 :green_heart: | unit | 0m 16s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 0m 51s | | hadoop-yarn-applications-catalog-webapp in the patch passed. |
| +1 :green_heart: | unit | 3m 32s | | hadoop-yarn-ui in the patch passed. |
| +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. |
| | | 193m 46s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2591/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2591 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux fbdbf72fd214 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f8769e0f4b9 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2591/2/testReport/ |
| Max. process+thread count | 626 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2591/2/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] crossfire opened a new pull request #2657: HDFS-15795. Fix returning wrong checksum when reconstruction was fail…
crossfire opened a new pull request #2657:
URL: https://github.com/apache/hadoop/pull/2657

…ed by exception.

## NOTICE

Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] GauthamBanasandra commented on pull request #2567: HDFS-15740. Add x-platform utilities
GauthamBanasandra commented on pull request #2567:
URL: https://github.com/apache/hadoop/pull/2567#issuecomment-768764746

@aajisaka Could you please review my PR?
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=543332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543332 ]

ASF GitHub Bot logged work on HADOOP-13327:
---
Author: ASF GitHub Bot
Created on: 28/Jan/21 02:43
Start Date: 28/Jan/21 02:43
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #2587:
URL: https://github.com/apache/hadoop/pull/2587#issuecomment-768757128

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 11 new or modified test files. |

_ trunk Compile Tests _

| +0 :ok: | mvndep | 13m 57s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 26s | | trunk passed |
| +1 :green_heart: | compile | 27m 5s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 23m 20s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +1 :green_heart: | checkstyle | 4m 59s | | trunk passed |
| +1 :green_heart: | mvnsite | 6m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 34s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 4m 18s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 5m 16s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +0 :ok: | spotbugs | 0m 53s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 10m 16s | | trunk passed |

_ Patch Compile Tests _

| +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 39s | | the patch passed |
| +1 :green_heart: | compile | 19m 51s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| -1 :x: | javac | 19m 52s | [/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/3/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 11 new + 2037 unchanged - 0 fixed = 2048 total (was 2037) |
| +1 :green_heart: | compile | 17m 51s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| -1 :x: | javac | 17m 51s | [/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/3/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 11 new + 1930 unchanged - 0 fixed = 1941 total (was 1930) |
| -0 :warning: | checkstyle | 3m 48s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 2 new + 182 unchanged - 7 fixed = 184 total (was 189) |
| +1 :green_heart: | mvnsite | 5m 45s | | the patch passed |
| -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/3/artifact/out/whitespace-eol.txt) | The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | xml | 0m 7s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 13m 50s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 4m 28s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 5m 29s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +1 :green_heart: | findbugs | 10m 49s | | the patch passed |

_ Other Tests _

| +1 :green_heart: | unit | 17m 14s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 38s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 190m 39s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 2m 39s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | unit | 1m 24s | | hadoop-azure-datalake in the patch passed. |
| +1 :green_heart: | asflicense | 1m 6s | | The patch does not genera
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=543269&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543269 ]

ASF GitHub Bot logged work on HADOOP-13327:
---
Author: ASF GitHub Bot
Created on: 28/Jan/21 01:38
Start Date: 28/Jan/21 01:38
Worklog Time Spent: 10m

Work Description: joshelser commented on pull request #2587:
URL: https://github.com/apache/hadoop/pull/2587#issuecomment-768714175

> you are still going to need some robust storage, RAID-1+ or similar, but we are getting sync all the way through, and you can query the streams to make sure they say they support it -including local fs.

Yep, for sure. This is a nice improvement.

Issue Time Tracking
---
Worklog Id: (was: 543269)
Time Spent: 7h 40m (was: 7.5h)

> Add OutputStream + Syncable to the Filesystem Specification
>
> Key: HADOOP-13327
> URL: https://issues.apache.org/jira/browse/HADOOP-13327
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, HADOOP-13327-branch-2-001.patch
> Time Spent: 7h 40m
> Remaining Estimate: 0h
>
> Write down what a Filesystem output stream should do. While the core API is
> defined in Java, that doesn't say what's expected about visibility,
> durability, etc. — and Hadoop's Syncable interface is entirely ours to define.
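The "query the streams to make sure they say they support it" idea from the exchange above can be sketched as a capability probe. In Hadoop proper this is the StreamCapabilities interface (probed with keys like "hflush" and "hsync" on an FSDataOutputStream); the self-contained stand-in below is illustrative only, and its names are not the real API.

```java
import java.util.Locale;
import java.util.Set;

/** Sketch of a capability-probe interface, in the spirit of StreamCapabilities. */
interface StreamCapabilitiesSketch {
    boolean hasCapability(String capability);
}

/** A demo "output stream" that declares durable-sync support. */
class SyncableDemoStream implements StreamCapabilitiesSketch {
    private static final Set<String> SUPPORTED = Set.of("hflush", "hsync");

    @Override
    public boolean hasCapability(String capability) {
        // Capability keys are matched case-insensitively here.
        return SUPPORTED.contains(capability.toLowerCase(Locale.ROOT));
    }
}
```

A caller that needs durability (a write-ahead log, say) probes before trusting the stream, rather than discovering at recovery time that sync was a no-op.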
[jira] [Commented] (HADOOP-17501) Fix logging typo in ShutdownHookManager
[ https://issues.apache.org/jira/browse/HADOOP-17501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17273219#comment-17273219 ]

Fengnan Li commented on HADOOP-17501:
-
Thanks [~shv] for reporting this. I will fix it.

> Fix logging typo in ShutdownHookManager
>
> Key: HADOOP-17501
> URL: https://issues.apache.org/jira/browse/HADOOP-17501
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Reporter: Konstantin Shvachko
> Assignee: Fengnan Li
> Priority: Major
> Labels: newbie
>
> Three log messages in {{ShutdownHookManager}} have a typo, saying
> "ShutdownHookManger". Should be "ShutdownHookManager"
[jira] [Assigned] (HADOOP-17501) Fix logging typo in ShutdownHookManager
[ https://issues.apache.org/jira/browse/HADOOP-17501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fengnan Li reassigned HADOOP-17501:
---
Assignee: Fengnan Li

> Fix logging typo in ShutdownHookManager
>
> Key: HADOOP-17501
> URL: https://issues.apache.org/jira/browse/HADOOP-17501
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Reporter: Konstantin Shvachko
> Assignee: Fengnan Li
> Priority: Major
> Labels: newbie
>
> Three log messages in {{ShutdownHookManager}} have a typo, saying
> "ShutdownHookManger". Should be "ShutdownHookManager"
[jira] [Updated] (HADOOP-17501) Fix logging typo in ShutdownHookManager
[ https://issues.apache.org/jira/browse/HADOOP-17501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HADOOP-17501: - Description: Three log messages in {{ShutdownHookManager}} have a typo, saying "ShutdownHookManger". Should be "ShutdownHookManager" (was: Three log messages in {{ShutdownHookManager}} have a typo, saying "ShutdownHookManger". SHould be "ShutdownHookManager") > Fix logging typo in ShutdownHookManager > --- > > Key: HADOOP-17501 > URL: https://issues.apache.org/jira/browse/HADOOP-17501 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Konstantin Shvachko >Priority: Major > Labels: newbie > > Three log messages in {{ShutdownHookManager}} have a typo, saying > "ShutdownHookManger". Should be "ShutdownHookManager"
[jira] [Created] (HADOOP-17501) Fix logging typo in ShutdownHookManager
Konstantin Shvachko created HADOOP-17501: Summary: Fix logging typo in ShutdownHookManager Key: HADOOP-17501 URL: https://issues.apache.org/jira/browse/HADOOP-17501 Project: Hadoop Common Issue Type: Improvement Components: common Reporter: Konstantin Shvachko Three log messages in {{ShutdownHookManager}} have a typo, saying "ShutdownHookManger". Should be "ShutdownHookManager"
[GitHub] [hadoop] hadoop-yetus commented on pull request #2650: HDFS-15790: Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2…
hadoop-yetus commented on pull request #2650: URL: https://github.com/apache/hadoop/pull/2650#issuecomment-768635544 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | buf | 0m 0s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 5 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 29s | | trunk passed | | +1 :green_heart: | compile | 20m 33s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 17m 56s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 4m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 6m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 31s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 4m 48s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 1m 26s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 11m 34s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 11s | | the patch passed | | +1 :green_heart: | compile | 20m 3s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 20m 3s | [/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2650/3/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 42 new + 370 unchanged - 42 fixed = 412 total (was 412) | | -1 :x: | javac | 20m 3s | [/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2650/3/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 1 new + 2035 unchanged - 0 fixed = 2036 total (was 2035) | | +1 :green_heart: | compile | 18m 1s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | -1 :x: | cc | 18m 1s | [/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2650/3/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 37 new + 375 unchanged - 37 fixed = 412 total (was 412) | | -1 :x: | javac | 18m 1s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2650/3/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 1 new + 1930 unchanged - 0 fixed = 1931 total (was 1930) | | -0 :warning: | checkstyle | 3m 57s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2650/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 4 new + 557 unchanged - 3 fixed = 561 total (was 560) | | +1 :green_heart: | mvnsite | 6m 11s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 23s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 4m 47s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 27s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | findbugs |
[jira] [Work logged] (HADOOP-17424) Replace HTrace with No-Op tracer
[ https://issues.apache.org/jira/browse/HADOOP-17424?focusedWorklogId=543157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543157 ] ASF GitHub Bot logged work on HADOOP-17424: --- Author: ASF GitHub Bot Created on: 27/Jan/21 22:44 Start Date: 27/Jan/21 22:44 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2645: URL: https://github.com/apache/hadoop/pull/2645#issuecomment-768627815 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 28m 11s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 8 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 29s | | trunk passed | | +1 :green_heart: | compile | 22m 23s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 9s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 4m 21s | | trunk passed | | +1 :green_heart: | mvnsite | 6m 52s | | trunk passed | | +1 :green_heart: | shadedclient | 27m 38s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 5m 30s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 0m 28s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +0 :ok: | findbugs | 0m 30s | | branch/hadoop-project no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-api no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-runtime no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client-check-invariants no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-minicluster no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-check-test-invariants no findbugs output file (findbugsXml.xml) | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 15m 6s | | the patch passed | | +1 :green_heart: | compile | 21m 42s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 21m 42s | [/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2645/3/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 36 new + 376 unchanged - 36 fixed = 412 total (was 412) | | +1 :green_heart: | javac | 21m 42s | | the patch passed | | +1 :green_heart: | compile | 19m 4s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | -1 :x: | cc | 19m 4s | [/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2645/3/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 31 new + 381 unchanged - 31 fixed = 412 total (was 412) | | +1 :green_heart: | javac | 19m 4s | | the patch passed | | -0 :warning: | checkstyle | 4m 19s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2645/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 8 new + 1316 unchanged - 41 fixed = 1324 total (was 1357) | | +1 :green_heart: | mvnsite | 6m 51s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace is
[GitHub] [hadoop] hadoop-yetus commented on pull request #2645: HADOOP-17424. Replace HTrace with No-Op tracer
hadoop-yetus commented on pull request #2645: URL: https://github.com/apache/hadoop/pull/2645#issuecomment-768627815 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 28m 11s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 8 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 29s | | trunk passed | | +1 :green_heart: | compile | 22m 23s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 9s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 4m 21s | | trunk passed | | +1 :green_heart: | mvnsite | 6m 52s | | trunk passed | | +1 :green_heart: | shadedclient | 27m 38s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 5m 30s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 0m 28s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +0 :ok: | findbugs | 0m 30s | | branch/hadoop-project no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-api no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-runtime no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client-check-invariants no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-minicluster no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 28s | | branch/hadoop-client-modules/hadoop-client-check-test-invariants no findbugs output file (findbugsXml.xml) | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 15m 6s | | the patch passed | | +1 :green_heart: | compile | 21m 42s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 21m 42s | [/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2645/3/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 36 new + 376 unchanged - 36 fixed = 412 total (was 412) | | +1 :green_heart: | javac | 21m 42s | | the patch passed | | +1 :green_heart: | compile | 19m 4s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | -1 :x: | cc | 19m 4s | [/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2645/3/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 31 new + 381 unchanged - 31 fixed = 412 total (was 412) | | +1 :green_heart: | javac | 19m 4s | | the patch passed | | -0 :warning: | checkstyle | 4m 19s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2645/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 8 new + 1316 unchanged - 41 fixed = 1324 total (was 1357) | | +1 :green_heart: | mvnsite | 6m 51s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 9s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 39s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 5m 27s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 23s | | the patch passed with JDK Private
[jira] [Created] (HADOOP-17500) S3A doesn't calculate Content-MD5 on uploads
Pedro Tôrres created HADOOP-17500: - Summary: S3A doesn't calculate Content-MD5 on uploads Key: HADOOP-17500 URL: https://issues.apache.org/jira/browse/HADOOP-17500 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Reporter: Pedro Tôrres Hadoop doesn't specify the Content-MD5 of an object when uploading it to an S3 Bucket. This prevents uploads to buckets with Object Lock, that require the Content-MD5 to be specified. {code:java} com.amazonaws.services.s3.model.AmazonS3Exception: Content-MD5 HTTP header is required for Put Part requests with Object Lock parameters (Service: Amazon S3; Status Code: 400; Error Code: InvalidRequest; Request ID: ; S3 Extended Request ID: ; Proxy: null), S3 Extended Request ID: at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1372) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5248) at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5195) at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3768) at 
com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3753) at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:2230) at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$uploadPart$8(WriteOperationHelper.java:558) at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110) ... 15 more{code} Similar to https://issues.apache.org/jira/browse/JCLOUDS-1549 Related to https://issues.apache.org/jira/browse/HADOOP-13076
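The header the Object Lock bucket demands is computable per request: S3 expects Content-MD5 to be the Base64 encoding (not the hex form) of the raw MD5 digest of the payload. A minimal JDK-only sketch of that computation; `computeContentMD5` is an illustrative helper name, not part of Hadoop or the AWS SDK:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class ContentMD5 {
    // Compute the value S3 expects in the Content-MD5 header:
    // the Base64 encoding of the raw 128-bit MD5 digest of the part body.
    static String computeContentMD5(byte[] body) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            return Base64.getEncoder().encodeToString(md5.digest(body));
        } catch (NoSuchAlgorithmException e) {
            // MD5 is a mandatory JDK algorithm, so this cannot happen.
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
        System.out.println(computeContentMD5(body));  // XUFAKrxLKna5cZ2REBfFkg==
    }
}
```

In an actual fix this value would presumably be attached to each Put Part request through the SDK's request metadata; without it, S3 rejects the request with the 400 InvalidRequest shown in the stack trace above.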
[jira] [Commented] (HADOOP-17482) Remove Commons Logger from FileSystem Class
[ https://issues.apache.org/jira/browse/HADOOP-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17273156#comment-17273156 ] Íñigo Goiri commented on HADOOP-17482: -- Yes, those tests are clearly flaky; I'm fine with merging it. Approved there. > Remove Commons Logger from FileSystem Class > --- > > Key: HADOOP-17482 > URL: https://issues.apache.org/jira/browse/HADOOP-17482 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > Remove reference to Commons Logger in FileSystem, it already has SLF4J, so > it's a bit weird to be mixing and matching and interweaving loggers in this > way. Also, my hope is to eventually migrate everything to SLF4J to simplify > things for downstream consumers of the common library.
[jira] [Work logged] (HADOOP-17482) Remove Commons Logger from FileSystem Class
[ https://issues.apache.org/jira/browse/HADOOP-17482?focusedWorklogId=543103&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543103 ] ASF GitHub Bot logged work on HADOOP-17482: --- Author: ASF GitHub Bot Created on: 27/Jan/21 21:36 Start Date: 27/Jan/21 21:36 Worklog Time Spent: 10m Work Description: goiri commented on a change in pull request #2633: URL: https://github.com/apache/hadoop/pull/2633#discussion_r565650951 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java ## @@ -3391,15 +3391,7 @@ private static void loadFileSystems() { LOGGER.info("Full exception loading: {}", fs, e); } } catch (ServiceConfigurationError ee) { -LOG.warn("Cannot load filesystem: " + ee); -Throwable cause = ee.getCause(); -// print all the nested exception messages -while (cause != null) { - LOG.warn(cause.toString()); - cause = cause.getCause(); -} -// and at debug: the full stack -LOG.debug("Stack Trace", ee); +LOGGER.warn("Cannot load filesystem", ee); Review comment: I think relying on slf4j and just dumping the whole exception is good. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 543103) Time Spent: 3h 40m (was: 3.5h) > Remove Commons Logger from FileSystem Class > --- > > Key: HADOOP-17482 > URL: https://issues.apache.org/jira/browse/HADOOP-17482 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > Remove reference to Commons Logger in FileSystem, it already has SLF4J, so > it's a bit weird to be mixing and matching and interweaving loggers in this > way. 
Also, my hope is to eventually migrate everything to SLF4J to simplify > things for downstream consumers of the common library.
[GitHub] [hadoop] goiri commented on a change in pull request #2633: HADOOP-17482: Remove Commons Logger from FileSystem Class
goiri commented on a change in pull request #2633: URL: https://github.com/apache/hadoop/pull/2633#discussion_r565650951 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java ## @@ -3391,15 +3391,7 @@ private static void loadFileSystems() { LOGGER.info("Full exception loading: {}", fs, e); } } catch (ServiceConfigurationError ee) { -LOG.warn("Cannot load filesystem: " + ee); -Throwable cause = ee.getCause(); -// print all the nested exception messages -while (cause != null) { - LOG.warn(cause.toString()); - cause = cause.getCause(); -} -// and at debug: the full stack -LOG.debug("Stack Trace", ee); +LOGGER.warn("Cannot load filesystem", ee); Review comment: I think relying on slf4j and just dumping the whole exception is good.
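The loop being deleted in the diff above walked the exception's cause chain by hand and logged each message at WARN; SLF4J makes that redundant, because passing the throwable as the last argument (`LOGGER.warn("Cannot load filesystem", ee)`) emits the full stack trace, causes included. A standalone sketch of the pattern the removed loop implemented; `causeMessages` is a hypothetical helper, shown only to make the old behavior concrete:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceConfigurationError;

public class CauseChain {
    // Collect every nested cause's toString(): the same strings the
    // removed while-loop printed one at a time at WARN level.
    static List<String> causeMessages(Throwable t) {
        List<String> messages = new ArrayList<>();
        for (Throwable cause = t.getCause(); cause != null; cause = cause.getCause()) {
            messages.add(cause.toString());
        }
        return messages;
    }

    public static void main(String[] args) {
        Throwable root = new IllegalStateException("bad service descriptor");
        Throwable ee = new ServiceConfigurationError("Cannot load filesystem", root);
        // Only the causes are listed; the top-level error itself is not.
        System.out.println(causeMessages(ee));
    }
}
```

The deleted code also logged the full stack only at DEBUG, so at default log levels the trace was lost entirely; the one-line SLF4J replacement keeps it at WARN.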
[jira] [Commented] (HADOOP-17482) Remove Commons Logger from FileSystem Class
[ https://issues.apache.org/jira/browse/HADOOP-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17273142#comment-17273142 ] David Mollitor commented on HADOOP-17482: - [~elgoiri] Do you have any cycles to take a look at this request? The PR is fighting many flaky tests. Not sure how many times you want me to run this to get them to pass. > Remove Commons Logger from FileSystem Class > --- > > Key: HADOOP-17482 > URL: https://issues.apache.org/jira/browse/HADOOP-17482 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Labels: pull-request-available > Time Spent: 3.5h > Remaining Estimate: 0h > > Remove reference to Commons Logger in FileSystem, it already has SLF4J, so > it's a bit weird to be mixing and matching and interweaving loggers in this > way. Also, my hope is to eventually migrate everything to SLF4J to simplify > things for downstream consumers of the common library.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2567: HDFS-15740. Add x-platform utilities
hadoop-yetus commented on pull request #2567: URL: https://github.com/apache/hadoop/pull/2567#issuecomment-768532520 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 9s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 38s | | trunk passed | | +1 :green_heart: | compile | 22m 41s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | mvnsite | 26m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 118m 5s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 118m 26s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 21m 9s | | the patch passed | | +1 :green_heart: | compile | 23m 44s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 23m 44s | [/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/21/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 31 new + 381 unchanged - 31 fixed = 412 total (was 412) | | +1 :green_heart: | golang | 23m 44s | | the patch passed | | +1 :green_heart: | javac | 23m 44s | | the patch passed | | +1 :green_heart: | compile | 19m 41s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | -1 :x: | cc | 19m 41s | [/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/21/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 33 new + 379 unchanged - 33 fixed = 412 total (was 412) | | +1 :green_heart: | golang | 19m 41s | | the patch passed | | +1 :green_heart: | javac | 19m 41s | | the patch passed | | +1 :green_heart: | mvnsite | 21m 13s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 692m 58s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/21/artifact/out/patch-unit-root.txt) | root in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 33s | | The patch does not generate ASF License warnings. | | | | 916m 0s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.tools.dynamometer.TestDynamometerInfra | | | hadoop.yarn.service.TestYarnNativeServices | | | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.client.api.impl.TestAMRMClient | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/21/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2567 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit markdownlint golang | | uname | Linux 6e78f4f5775d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk /
[GitHub] [hadoop] hadoop-yetus commented on pull request #2567: HDFS-15740. Add x-platform utilities
hadoop-yetus commented on pull request #2567: URL: https://github.com/apache/hadoop/pull/2567#issuecomment-768528104 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 16s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 42s | | trunk passed | | +1 :green_heart: | compile | 22m 3s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 32s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | mvnsite | 26m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 118m 18s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 118m 39s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 21m 42s | | the patch passed | | +1 :green_heart: | compile | 23m 28s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 23m 28s | [/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/20/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 51 new + 361 unchanged - 51 fixed = 412 total (was 412) | | +1 :green_heart: | golang | 23m 28s | | the patch passed | | +1 :green_heart: | javac | 23m 28s | | the patch passed | | +1 :green_heart: | compile | 20m 2s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | -1 :x: | cc | 20m 2s | [/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/20/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 42 new + 370 unchanged - 42 fixed = 412 total (was 412) | | +1 :green_heart: | golang | 20m 2s | | the patch passed | | +1 :green_heart: | javac | 20m 2s | | the patch passed | | +1 :green_heart: | mvnsite | 21m 42s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 692m 47s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/20/artifact/out/patch-unit-root.txt) | root in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 33s | | The patch does not generate ASF License warnings. | | | | 916m 49s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized | | | hadoop.tools.dynamometer.TestDynamometerInfra | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2567/20/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2567 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit markdownlint golang | | uname | Linux ffc376c3b72d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=543022&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543022 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 19:23 Start Date: 27/Jan/21 19:23 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2656: URL: https://github.com/apache/hadoop/pull/2656#issuecomment-768518659 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 34m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 8 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 8s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 31s | | trunk passed | | +1 :green_heart: | compile | 20m 31s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 17m 44s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 3m 49s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 26s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 11s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 1m 17s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 34s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 26s | | the patch passed | | +1 :green_heart: | compile | 19m 47s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 19m 47s | | the patch passed | | +1 :green_heart: | compile | 17m 54s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | javac | 17m 54s | | the patch passed | | -0 :warning: | checkstyle | 3m 48s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2656/1/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 2 new + 28 unchanged - 0 fixed = 30 total (was 28) | | +1 :green_heart: | mvnsite | 2m 26s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 33s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 40s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | findbugs | 3m 48s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 17m 48s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 3s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. 
| | | | 228m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2656/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2656 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux c4bfbc05866f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7c4ef428379 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | Multi-JDK versions | /usr/
[GitHub] [hadoop] hadoop-yetus commented on pull request #2656: HADOOP-17483. Magic committer is enabled by default.
hadoop-yetus commented on pull request #2656: URL: https://github.com/apache/hadoop/pull/2656#issuecomment-768518659 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 34m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 8 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 8s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 31s | | trunk passed | | +1 :green_heart: | compile | 20m 31s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 17m 44s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 3m 49s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 26s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 11s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 1m 17s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 34s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 26s | | the patch passed | | +1 :green_heart: | compile | 19m 47s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 19m 47s | | the patch passed | | +1 :green_heart: | compile | 17m 54s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | javac | 17m 54s | | the patch passed | | -0 :warning: | checkstyle | 3m 48s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2656/1/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 2 new + 28 unchanged - 0 fixed = 30 total (was 28) | | +1 :green_heart: | mvnsite | 2m 26s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 33s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 40s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | findbugs | 3m 48s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 17m 48s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 3s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. 
| | | | 228m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2656/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2656 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint | | uname | Linux c4bfbc05866f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7c4ef428379 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2656/1/testReport/ | | Max. process+thread count | 2891 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output |
[GitHub] [hadoop] HeartSaVioR commented on pull request #2624: MAPREDUCE-7317. Add latency information in FileOutputCommitter.mergePaths.
HeartSaVioR commented on pull request #2624: URL: https://github.com/apache/hadoop/pull/2624#issuecomment-768515963 Thanks for reviewing and merging! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran merged pull request #2624: MAPREDUCE-7317. Add latency information in FileOutputCommitter.mergePaths.
steveloughran merged pull request #2624: URL: https://github.com/apache/hadoop/pull/2624 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17483. - Fix Version/s: 3.3.1 Assignee: Steve Loughran Resolution: Fixed > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 3h 20m > Remaining Estimate: 0h > > now that S3 is consistent, there is no need to disable the magic committer > for safety. > remove option to enable magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
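The option named in the resolution above, fs.s3a.committer.magic.enabled, is an ordinary Hadoop configuration property. As a sketch (a core-site.xml fragment, not taken from the patch itself): with the committer now on by default, setting the property would only be needed to opt back out.

```xml
<!-- core-site.xml fragment; fs.s3a.committer.magic.enabled is the option
     named in HADOOP-17483. After this change it defaults to true, so an
     explicit entry is only needed to disable the magic committer. -->
<property>
  <name>fs.s3a.committer.magic.enabled</name>
  <value>false</value>
</property>
```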
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=543011&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543011 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 19:06 Start Date: 27/Jan/21 19:06 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2656: URL: https://github.com/apache/hadoop/pull/2656#issuecomment-768506035 thanks. merged to 3.3+ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 543011) Time Spent: 3h 20m (was: 3h 10m) > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 3h 20m > Remaining Estimate: 0h > > now that S3 is consistent, there is no need to disable the magic committer > for safety. > remove option to enable magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=543007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-543007 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 19:04 Start Date: 27/Jan/21 19:04 Worklog Time Spent: 10m Work Description: steveloughran merged pull request #2656: URL: https://github.com/apache/hadoop/pull/2656 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 543007) Time Spent: 3h 10m (was: 3h) > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > > now that S3 is consistent, there is no need to disable the magic committer > for safety. > remove option to enable magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] Nargeshdb commented on a change in pull request #2652: HDFS-15791. Possible Resource Leak in FSImageFormatProtobuf
Nargeshdb commented on a change in pull request #2652: URL: https://github.com/apache/hadoop/pull/2652#discussion_r565486885 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java ## @@ -269,14 +269,20 @@ public InputStream getInputStreamForSection(FileSummary.Section section, String compressionCodec) throws IOException { FileInputStream fin = new FileInputStream(filename); - FileChannel channel = fin.getChannel(); - channel.position(section.getOffset()); - InputStream in = new BufferedInputStream(new LimitInputStream(fin, - section.getLength())); + try { Review comment: Thanks a lot for the review. I really appreciate it. I was wondering if I need to do anything else to get the change merged. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
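The diff under review adds a close-on-failure guard around the section stream: if positioning the channel or wrapping the stream throws, the underlying FileInputStream must be closed before the exception propagates, otherwise the file descriptor leaks. A minimal self-contained sketch of that pattern (class and method names here are illustrative, not the actual FSImageFormatProtobuf API):

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class GuardedOpen {
    // Illustrative stand-in for getInputStreamForSection: open a stream,
    // seek to an offset, and return a buffered wrapper. On any failure
    // after the open, close the stream before rethrowing.
    public static InputStream openSection(String filename, long offset)
            throws IOException {
        FileInputStream fin = new FileInputStream(filename);
        try {
            // FileInputStream and its FileChannel share a position, so
            // positioning the channel moves subsequent reads on fin.
            fin.getChannel().position(offset);
            return new BufferedInputStream(fin);
        } catch (IOException e) {
            fin.close(); // release the descriptor on the failure path
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("section", ".bin");
        tmp.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[]{1, 2, 3, 4});
        }
        try (InputStream in = openSection(tmp.getPath(), 2)) {
            System.out.println(in.read()); // byte at offset 2, i.e. 3
        }
    }
}
```

The caller still owns the returned stream and closes it normally; the guard only covers the window between the open and the successful return.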
[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet
[ https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17273000#comment-17273000 ] Ahmed Hussein commented on HADOOP-17079: I suggest we approach this optimization in a different manner. First step: instead of {{Set}}, we use {{Collection}}. Second step: optimize the code on the caller side. In that case, we won't need to re-implement the entire code to fit {{LinkedHashSet}}. > Optimize UGI#getGroups by adding UGI#getGroupsSet > - > > Key: HADOOP-17079 > URL: https://issues.apache.org/jira/browse/HADOOP-17079 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: 3.4.0 > > Attachments: HADOOP-17079.002.patch, HADOOP-17079.003.patch, > HADOOP-17079.004.patch, HADOOP-17079.005.patch, HADOOP-17079.006.patch, > HADOOP-17079.007.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > UGI#getGroups has been optimized with HADOOP-13442 by avoiding the > List->Set->List conversion. However, the returned list is not optimized for > contains() lookups, especially when the user's group membership list is huge > (thousands+). This ticket is opened to add a UGI#getGroupsSet and use > Set#contains() instead of List#contains() to speed up large group lookups > while minimizing List->Set conversions in Groups#getGroups() calls. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
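The trade-off discussed above is the usual one between List#contains() (a linear scan) and Set#contains() (constant time after a one-off conversion). A hedged illustration, with invented names rather than the actual UGI API:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class GroupLookup {
    public static void main(String[] args) {
        // A membership list as a plain List: contains() scans linearly,
        // which is costly when a user belongs to thousands of groups
        // and lookups are frequent.
        List<String> groupList = Arrays.asList("staff", "hdfs", "wheel");

        // One up-front conversion gives O(1) membership checks;
        // LinkedHashSet also preserves the original iteration order.
        Set<String> groupSet = new LinkedHashSet<>(groupList);

        System.out.println(groupSet.contains("hdfs"));   // true
        System.out.println(groupSet.contains("nobody")); // false
    }
}
```

Returning {{Collection}} as suggested would let callers that only iterate keep the list, while callers doing many lookups convert once on their side.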
[jira] [Updated] (HADOOP-17493) renaming S3A Statistic DELEGATION_TOKENS_ISSUED to DELEGATION_TOKEN_ISSUED broke tests downstream
[ https://issues.apache.org/jira/browse/HADOOP-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17493: Affects Version/s: (was: 3.3.0) 3.3.1 > renaming S3A Statistic DELEGATION_TOKENS_ISSUED to DELEGATION_TOKEN_ISSUED > broke tests downstream > - > > Key: HADOOP-17493 > URL: https://issues.apache.org/jira/browse/HADOOP-17493 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 1h > Remaining Estimate: 0h > > HADOOP-16830/HADOOP-17271 renamed DELEGATION_TOKENS_ISSUED to > DELEGATION_TOKEN_ISSUED while trying to unify naming. This breaks downstream > code. > Fix: revert the name change. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17493) renaming S3A Statistic DELEGATION_TOKENS_ISSUED to DELEGATION_TOKEN_ISSUED broke tests downstream
[ https://issues.apache.org/jira/browse/HADOOP-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17493. - Fix Version/s: 3.3.1 Resolution: Fixed > renaming S3A Statistic DELEGATION_TOKENS_ISSUED to DELEGATION_TOKEN_ISSUED > broke tests downstream > - > > Key: HADOOP-17493 > URL: https://issues.apache.org/jira/browse/HADOOP-17493 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Time Spent: 1h > Remaining Estimate: 0h > > HADOOP-16830/HADOOP-17271 renamed DELEGATION_TOKENS_ISSUED to > DELEGATION_TOKEN_ISSUED while trying to unify naming. This breaks downstream > code. > Fix: revert the name change. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17493) renaming S3A Statistic DELEGATION_TOKENS_ISSUED to DELEGATION_TOKEN_ISSUED broke tests downstream
[ https://issues.apache.org/jira/browse/HADOOP-17493?focusedWorklogId=542915&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542915 ] ASF GitHub Bot logged work on HADOOP-17493: --- Author: ASF GitHub Bot Created on: 27/Jan/21 16:39 Start Date: 27/Jan/21 16:39 Worklog Time Spent: 10m Work Description: steveloughran merged pull request #2649: URL: https://github.com/apache/hadoop/pull/2649 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542915) Time Spent: 1h (was: 50m) > renaming S3A Statistic DELEGATION_TOKENS_ISSUED to DELEGATION_TOKEN_ISSUED > broke tests downstream > - > > Key: HADOOP-17493 > URL: https://issues.apache.org/jira/browse/HADOOP-17493 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > HADOOP-16830/HADOOP-17271 renamed DELEGATION_TOKENS_ISSUED to > DELEGATION_TOKEN_ISSUED while trying to unify naming. This breaks downstream > code. > Fix: revert the name change. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=542894&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542894 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 15:35 Start Date: 27/Jan/21 15:35 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2637: URL: https://github.com/apache/hadoop/pull/2637#issuecomment-768368611 Closing, superseded by #2656 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542894) Time Spent: 3h (was: 2h 50m) > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 3h > Remaining Estimate: 0h > > now that S3 is consistent, there is no need to disable the magic committer > for safety. > remove option to enable magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=542892&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542892 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 15:34 Start Date: 27/Jan/21 15:34 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2656: URL: https://github.com/apache/hadoop/pull/2656#issuecomment-768368118 Tested: S3 london `-Dparallel-tests -DtestsThreadCount=6 -Dmarkers=keep` Unbuffer test failure; also triggered a related failure in ITestS3AContractStreamIOStatistics Filed: https://issues.apache.org/jira/browse/HADOOP-17499 These are network buffer related; read() calls returning less than the full buffer. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542892) Time Spent: 2h 40m (was: 2.5h) > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2h 40m > Remaining Estimate: 0h > > now that S3 is consistent, there is no need to disable the magic committer > for safety. > remove option to enable magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=542893&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542893 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 15:34 Start Date: 27/Jan/21 15:34 Worklog Time Spent: 10m Work Description: steveloughran closed pull request #2637: URL: https://github.com/apache/hadoop/pull/2637 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542893) Time Spent: 2h 50m (was: 2h 40m) > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > > now that S3 is consistent, there is no need to disable the magic committer > for safety. > remove option to enable magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17483) magic committer to be enabled for all S3 buckets
[ https://issues.apache.org/jira/browse/HADOOP-17483?focusedWorklogId=542891&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542891 ] ASF GitHub Bot logged work on HADOOP-17483: --- Author: ASF GitHub Bot Created on: 27/Jan/21 15:33 Start Date: 27/Jan/21 15:33 Worklog Time Spent: 10m Work Description: steveloughran opened a new pull request #2656: URL: https://github.com/apache/hadoop/pull/2656 * core-default.xml updated * CommitConstants updated * All tests which previously enabled the magic committer now rely on default settings. This helps make sure it is enabled. * Docs cover the switch, mention it's enabled, and explain why you may want to disable it. Change-Id: I40a24a34d519f412d5669ec7ca1de813ad071625 Issue Time Tracking --- Worklog Id: (was: 542891) Time Spent: 2.5h (was: 2h 20m) > magic committer to be enabled for all S3 buckets > > > Key: HADOOP-17483 > URL: https://issues.apache.org/jira/browse/HADOOP-17483 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > Now that S3 is consistent, there is no need to disable the magic committer > for safety. > Remove the option to enable the magic committer (fs.s3a.committer.magic.enabled) and > the associated checks/probes through the code. > May want to retain the constants and probes just for completeness/API/CLI > consistency. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hadoop] steveloughran opened a new pull request #2656: HADOOP-17483. Magic committer is enabled by default.
steveloughran opened a new pull request #2656: URL: https://github.com/apache/hadoop/pull/2656 * core-default.xml updated * CommitConstants updated * All tests which previously enabled the magic committer now rely on default settings. This helps make sure it is enabled. * Docs cover the switch, mention it's enabled, and explain why you may want to disable it. Change-Id: I40a24a34d519f412d5669ec7ca1de813ad071625
[jira] [Commented] (HADOOP-17499) AbstractContractStreamIOStatisticsTest fails if read buffer not full
[ https://issues.apache.org/jira/browse/HADOOP-17499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272928#comment-17272928 ] Steve Loughran commented on HADOOP-17499: - {code} [ERROR] testInputStreamStatisticRead(org.apache.hadoop.fs.s3a.statistics.ITestS3AContractStreamIOStatistics) Time elapsed: 6.252 s <<< FAILURE! org.junit.ComparisonFailure: [Counter named stream_read_bytes with expected value 129] expected:<[129]L> but was:<[3]L> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticValue(IOStatisticAssertions.java:255) at org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue(IOStatisticAssertions.java:173) at org.apache.hadoop.fs.contract.AbstractContractStreamIOStatisticsTest.verifyBytesRead(AbstractContractStreamIOStatisticsTest.java:283) at org.apache.hadoop.fs.contract.AbstractContractStreamIOStatisticsTest.testInputStreamStatisticRead(AbstractContractStreamIOStatisticsTest.java:226) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) {code} > AbstractContractStreamIOStatisticsTest fails if read buffer not full > > > Key: HADOOP-17499 > URL: https://issues.apache.org/jira/browse/HADOOP-17499 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > {code} > [ERROR] > testInputStreamStatisticRead(org.apache.hadoop.fs.s3a.statistics.ITestS3AContractStreamIOStatistics) > Time elapsed: 6.252 s <<< FAILURE! > org.junit.ComparisonFailure: [Counter named stream_read_bytes with expected > value 129] expected:<[129]L> but was:<[3]L> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > {code} > Test should handle all cases where bytes read > 0
[jira] [Created] (HADOOP-17499) AbstractContractStreamIOStatisticsTest fails if read buffer not full
Steve Loughran created HADOOP-17499: --- Summary: AbstractContractStreamIOStatisticsTest fails if read buffer not full Key: HADOOP-17499 URL: https://issues.apache.org/jira/browse/HADOOP-17499 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.3.1 Reporter: Steve Loughran Assignee: Steve Loughran {code} [ERROR] testInputStreamStatisticRead(org.apache.hadoop.fs.s3a.statistics.ITestS3AContractStreamIOStatistics) Time elapsed: 6.252 s <<< FAILURE! org.junit.ComparisonFailure: [Counter named stream_read_bytes with expected value 129] expected:<[129]L> but was:<[3]L> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) {code} Test should handle all cases where bytes read > 0
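The failure mode described above — `stream_read_bytes` being 3 instead of 129 — comes from `InputStream.read(byte[], int, int)` being allowed to return fewer bytes than requested. A minimal sketch of the robust pattern (loop until the buffer fills or EOF); the class and method names here are illustrative, not the actual test code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Sketch: read() may legally return fewer bytes than requested, so callers
 * (and tests asserting on byte counts) must loop over short reads.
 */
public class PartialReads {

    /** Reads up to buf.length bytes, looping over short reads. */
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) {      // EOF before the buffer filled
                break;
            }
            total += n;       // n may be less than requested; keep going
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // A stream that returns at most 3 bytes per read(), like a
        // network stream draining a small socket buffer.
        InputStream trickle = new ByteArrayInputStream(new byte[129]) {
            @Override
            public int read(byte[] b, int off, int len) {
                return super.read(b, off, Math.min(len, 3));
            }
        };
        byte[] buf = new byte[129];
        int got = readFully(trickle, buf);
        System.out.println(got); // 129 despite 3-byte reads
    }
}
```

A single unlooped `read(buf)` against the same stream would return 3, which is exactly the mismatch the test reported.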
[GitHub] [hadoop] hadoop-yetus commented on pull request #2624: MAPREDUCE-7317. Add latency information in FileOutputCommitter.mergePaths.
hadoop-yetus commented on pull request #2624: URL: https://github.com/apache/hadoop/pull/2624#issuecomment-768295570 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 23s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 16s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 0m 34s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 38s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 1m 17s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 14s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 26s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 12m 56s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | findbugs | 1m 16s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 7m 0s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. 
| | | | 80m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2624/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2624 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dd8707cc185a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 80c7404b519 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2624/4/testReport/ | | Max. process+thread count | 1559 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2624/4/console | | versions | git=2.25.1 maven=3.6.3 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on pull request #2624: MAPREDUCE-7317. Add latency information in FileOutputCommitter.mergePaths.
steveloughran commented on pull request #2624: URL: https://github.com/apache/hadoop/pull/2624#issuecomment-768283949 Yeah, that's good. We don't build the string unless it's being logged (and it's only done at the start, not re-evaluated later), so keeping it efficient is nice. LGTM. +1 pending Yetus being happy (and ignoring its complaints about tests)
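The "don't build the string unless it's being logged" point can be sketched with a `Supplier`-based deferral; the `Logger` stand-in and the counters here are illustrative, not the SLF4J API the MapReduce code actually uses:

```java
import java.util.function.Supplier;

/**
 * Sketch: defer expensive log-message construction until we know the
 * message will actually be emitted.
 */
public class LazyLogging {
    static boolean debugEnabled = false;
    static int buildCount = 0;   // how many times the message was built

    static String expensiveMessage() {
        buildCount++;            // count constructions for the demo
        return "Merging data from " + "srcPath" + " to " + "dstPath";
    }

    /** Only invokes the supplier when the level is enabled. */
    static void debug(Supplier<String> msg) {
        if (debugEnabled) {
            System.out.println(msg.get());
        }
    }

    public static void main(String[] args) {
        debug(LazyLogging::expensiveMessage);  // disabled: never built
        System.out.println(buildCount);        // 0
        debugEnabled = true;
        debug(LazyLogging::expensiveMessage);  // enabled: built exactly once
        System.out.println(buildCount);        // 1
    }
}
```

SLF4J's `{}` placeholders achieve the same effect for simple interpolation; a supplier (or an `isDebugEnabled()` guard) is the usual choice when the message is genuinely costly to compute.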
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542809&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542809 ] ASF GitHub Bot logged work on HADOOP-17475: --- Author: ASF GitHub Bot Created on: 27/Jan/21 12:58 Start Date: 27/Jan/21 12:58 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#issuecomment-768268126 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 42m 17s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 24s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 0m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 21s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 1m 0s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | | trunk passed | | -0 :warning: | patch | 1m 19s | | Used diff version of patch file. Binary files and potentially other changes not applied. 
Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | -0 :warning: | checkstyle | 0m 18s | [/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 10s | | patch has no errors when building and testing our client artifacts. 
| | -1 :x: | javadoc | 0m 27s | [/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 3 new + 15 unchanged - 2 fixed = 18 total (was 17) | | -1 :x: | javadoc | 0m 25s | [/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 3 new + 15 unchanged - 2 fixed = 18 total (was 17) | | -1 :x: | findbugs | 0m 58s | [/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | _ Other Tests _ | | +1 :green_heart: | unit | 1m 55s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | T
[GitHub] [hadoop] hadoop-yetus commented on pull request #2548: HADOOP-17475. ABFS: Implementing ListStatusRemoteIterator
hadoop-yetus commented on pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#issuecomment-768268126 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 42m 17s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 24s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | checkstyle | 0m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 21s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +0 :ok: | spotbugs | 1m 0s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 58s | | trunk passed | | -0 :warning: | patch | 1m 19s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | -0 :warning: | checkstyle | 0m 18s | [/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 10s | | patch has no errors when building and testing our client artifacts. 
| | -1 :x: | javadoc | 0m 27s | [/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 3 new + 15 unchanged - 2 fixed = 18 total (was 17) | | -1 :x: | javadoc | 0m 25s | [/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 generated 3 new + 15 unchanged - 2 fixed = 18 total (was 17) | | -1 :x: | findbugs | 0m 58s | [/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2548/16/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | _ Other Tests _ | | +1 :green_heart: | unit | 1m 55s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. | | | | 115m 11s | | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-azure | | | Inconsistent synchronization of org.apache.hadoop.fs.azurebfs.services.AbfsListStatusRemoteIterator.continuation; locked 50% of time Unsynchronized access at AbfsListStatusRemoteIterator.java:50% of time Unsynchronized access at Abfs
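The FindBugs warning above ("Inconsistent synchronization ... locked 50% of time") means a field is read or written under a lock on some code paths but not others. A minimal sketch of the fix — guard every access with the same monitor; the class and field names below are illustrative, not the actual AbfsListStatusRemoteIterator code:

```java
/**
 * Sketch: fixing an "inconsistent synchronization" FindBugs warning by
 * accessing the shared field under the same lock on every path.
 */
public class ConsistentSync {
    private String continuation;   // guarded by "this" on every access

    synchronized void setContinuation(String token) {
        continuation = token;      // write under the lock
    }

    synchronized boolean hasMorePages() {
        // Read under the same lock; an unsynchronized read here is what
        // triggers the warning (and a real visibility bug under the JMM).
        return continuation != null && !continuation.isEmpty();
    }

    public static void main(String[] args) {
        ConsistentSync it = new ConsistentSync();
        it.setContinuation("next-page-token");
        System.out.println(it.hasMorePages()); // true
    }
}
```

Marking the field `volatile` is an alternative when the accesses are independent single reads/writes, but compound check-then-act sequences still need the lock.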
[jira] [Work logged] (HADOOP-17424) Replace HTrace with No-Op tracer
[ https://issues.apache.org/jira/browse/HADOOP-17424?focusedWorklogId=542792&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542792 ] ASF GitHub Bot logged work on HADOOP-17424: --- Author: ASF GitHub Bot Created on: 27/Jan/21 12:26 Start Date: 27/Jan/21 12:26 Worklog Time Spent: 10m Work Description: smengcl commented on pull request #2645: URL: https://github.com/apache/hadoop/pull/2645#issuecomment-768251870 > @smengcl @iwasakims Thank you for the work here > I have taken your latest patch, compiled it, and am trying to verify the trace feature. > Just wanted to confirm: as per the PR, TraceAdminProtocols have been removed. Will these TraceAdminProtocol be handled in any follow-up JIRAs or will they be removed permanently? Hi @Sushmasree-28, thanks for taking your time to check this PR. The current plan is to ditch the `TraceAdminProtocol` **completely**. As I previously (tentatively) implemented OpenTracing in https://github.com/apache/hadoop/pull/1846 , I don't find the legacy interface useful anymore (unless we want to maintain compatibility with HTrace -- which has become a potential security hazard -- and hence this PR). So I would just remove it here. Let me know if you have more concerns about it. Issue Time Tracking --- Worklog Id: (was: 542792) Time Spent: 6h 10m (was: 6h) > Replace HTrace with No-Op tracer > > > Key: HADOOP-17424 > URL: https://issues.apache.org/jira/browse/HADOOP-17424 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Time Spent: 6h 10m > Remaining Estimate: 0h > > Remove HTrace dependency as it is depending on old jackson jars. Use a no-op > tracer for now to eliminate potential security issues. > The plan is to move part of the code in > [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster > review.
[GitHub] [hadoop] smengcl commented on pull request #2645: HADOOP-17424. Replace HTrace with No-Op tracer
smengcl commented on pull request #2645: URL: https://github.com/apache/hadoop/pull/2645#issuecomment-768251870 > @smengcl @iwasakims Thank you for the work here > I have taken your latest patch, compiled it, and am trying to verify the trace feature. > Just wanted to confirm: as per the PR, TraceAdminProtocols have been removed. Will these TraceAdminProtocol be handled in any follow-up JIRAs or will they be removed permanently? Hi @Sushmasree-28, thanks for taking your time to check this PR. The current plan is to ditch the `TraceAdminProtocol` **completely**. As I previously (tentatively) implemented OpenTracing in https://github.com/apache/hadoop/pull/1846 , I don't find the legacy interface useful anymore (unless we want to maintain compatibility with HTrace -- which has become a potential security hazard -- and hence this PR). So I would just remove it here. Let me know if you have more concerns about it.
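The "no-op tracer" approach discussed here keeps the tracing call sites intact while every operation does nothing, so the HTrace dependency can be dropped without touching callers. A minimal sketch of the pattern; the `Tracer`/`Span` interfaces below are illustrative, not the actual Hadoop tracing API:

```java
/**
 * Sketch of the no-op tracer pattern: shared stateless instances that
 * satisfy the tracing interface but allocate nothing and record nothing.
 */
public class NoOpTracing {

    interface Span extends AutoCloseable {
        @Override
        void close();                  // narrowed: no checked exception
    }

    interface Tracer {
        Span startSpan(String name);
    }

    // Single shared no-op instances: zero allocation per traced call.
    static final Span NO_OP_SPAN = () -> { };
    static final Tracer NO_OP_TRACER = name -> NO_OP_SPAN;

    public static void main(String[] args) {
        // Call sites stay unchanged; the span simply does nothing.
        try (Span span = NO_OP_TRACER.startSpan("listStatus")) {
            System.out.println("work done inside span");
        }
    }
}
```

A real tracer (e.g. an OpenTracing/OpenTelemetry binding, as in PR#1846) can later be swapped in behind the same interface without changing the instrumented code.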
[GitHub] [hadoop] HeartSaVioR commented on pull request #2624: MAPREDUCE-7317. Add latency information in FileOutputCommitter.mergePaths.
HeartSaVioR commented on pull request #2624: URL: https://github.com/apache/hadoop/pull/2624#issuecomment-768248565 Ah OK. Good to know. I didn't realize it prints two times, for entrance and exit. Will fix. Probably I'll have to go back to using `from` instead of `from.getPath()` then.
[jira] [Work logged] (HADOOP-17424) Replace HTrace with No-Op tracer
[ https://issues.apache.org/jira/browse/HADOOP-17424?focusedWorklogId=542775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542775 ] ASF GitHub Bot logged work on HADOOP-17424: --- Author: ASF GitHub Bot Created on: 27/Jan/21 11:37 Start Date: 27/Jan/21 11:37 Worklog Time Spent: 10m Work Description: Sushmasree-28 commented on pull request #2645: URL: https://github.com/apache/hadoop/pull/2645#issuecomment-768226751 @smengcl @iwasakims Thank you for the work here. I have taken your latest patch, compiled it, and am trying to verify the trace feature. Just wanted to confirm: as per the PR, TraceAdminProtocols have been removed. Will these TraceAdminProtocol be handled in any follow-up JIRAs or will they be removed permanently? Issue Time Tracking --- Worklog Id: (was: 542775) Time Spent: 6h (was: 5h 50m) > Replace HTrace with No-Op tracer > > > Key: HADOOP-17424 > URL: https://issues.apache.org/jira/browse/HADOOP-17424 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available > Time Spent: 6h > Remaining Estimate: 0h > > Remove HTrace dependency as it is depending on old jackson jars. Use a no-op > tracer for now to eliminate potential security issues. > The plan is to move part of the code in > [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster > review.
[GitHub] [hadoop] Sushmasree-28 commented on pull request #2645: HADOOP-17424. Replace HTrace with No-Op tracer
Sushmasree-28 commented on pull request #2645: URL: https://github.com/apache/hadoop/pull/2645#issuecomment-768226751 @smengcl @iwasakims Thank you for the work here. I have taken your latest patch, compiled it, and am trying to verify the trace feature. Just wanted to confirm: as per the PR, TraceAdminProtocols have been removed. Will these TraceAdminProtocol be handled in any follow-up JIRAs or will they be removed permanently?
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542771&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542771 ] ASF GitHub Bot logged work on HADOOP-17475: --- Author: ASF GitHub Bot Created on: 27/Jan/21 11:14 Start Date: 27/Jan/21 11:14 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565225564 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java ## @@ -37,6 +37,8 @@ import java.util.concurrent.Executors; import java.util.concurrent.Future; +import org.apache.hadoop.fs.RemoteIterator; Review comment: Done Issue Time Tracking --- Worklog Id: (was: 542771) Time Spent: 4h 10m (was: 4h) > Implement listStatusIterator > > > Key: HADOOP-17475 > URL: https://issues.apache.org/jira/browse/HADOOP-17475 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Priority: Major > Labels: pull-request-available > Time Spent: 4h 10m > Remaining Estimate: 0h >
[GitHub] [hadoop] bilaharith commented on a change in pull request #2548: HADOOP-17475. ABFS: Implementing ListStatusRemoteIterator
bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565225564 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java ## @@ -37,6 +37,8 @@ import java.util.concurrent.Executors; import java.util.concurrent.Future; +import org.apache.hadoop.fs.RemoteIterator; Review comment: Done
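The `RemoteIterator` import added above is the contract the listStatusIterator work builds on: a `hasNext()`/`next()` pair that may throw `IOException`, because advancing can trigger a remote listing call. A minimal sketch of the shape; the interface matches `org.apache.hadoop.fs.RemoteIterator`, while the in-memory backing list is a stand-in for paged ABFS results:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;

/**
 * Sketch of the RemoteIterator pattern: like java.util.Iterator, but each
 * step may perform I/O and so declares IOException.
 */
public class RemoteIteratorSketch {

    interface RemoteIterator<E> {
        boolean hasNext() throws IOException;
        E next() throws IOException;
    }

    /** Wraps a fixed list; a real implementation would fetch pages lazily. */
    static RemoteIterator<String> listing(String... names) {
        Iterator<String> it = Arrays.asList(names).iterator();
        return new RemoteIterator<String>() {
            public boolean hasNext() { return it.hasNext(); } // would page here
            public String next() { return it.next(); }
        };
    }

    public static void main(String[] args) throws IOException {
        RemoteIterator<String> it = listing("a.txt", "b.txt");
        int count = 0;
        while (it.hasNext()) {
            System.out.println(it.next());
            count++;
        }
        System.out.println(count); // 2
    }
}
```

The checked exception is the whole reason this is not a plain `java.util.Iterator`: callers of `listStatusIterator` must handle I/O failures at each step rather than only at the initial call.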
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542769&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542769 ]

ASF GitHub Bot logged work on HADOOP-17475:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 27/Jan/21 11:11
            Start Date: 27/Jan/21 11:11
    Worklog Time Spent: 10m

Work Description: bilaharith commented on a change in pull request #2548:
URL: https://github.com/apache/hadoop/pull/2548#discussion_r565223843

File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsListStatusIterator.java

    @@ -0,0 +1,339 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.hadoop.fs.azurebfs;
    +
    +import java.io.FileNotFoundException;
    +import java.io.IOException;
    +import java.util.ArrayList;
    +import java.util.List;
    +import java.util.NoSuchElementException;
    +import java.util.concurrent.Callable;
    +import java.util.concurrent.ExecutionException;
    +import java.util.concurrent.ExecutorService;
    +import java.util.concurrent.Executors;
    +import java.util.concurrent.Future;
    +
    +import org.assertj.core.api.Assertions;
    +import org.junit.Test;
    +import org.mockito.Mockito;
    +
    +import org.apache.hadoop.fs.FileStatus;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.fs.RemoteIterator;
    +import org.apache.hadoop.fs.azurebfs.services.AbfsListStatusRemoteIterator;
    +import org.apache.hadoop.fs.azurebfs.services.ListingSupport;
    +
    +import static org.mockito.ArgumentMatchers.any;
    +import static org.mockito.ArgumentMatchers.anyBoolean;
    +import static org.mockito.ArgumentMatchers.anyList;
    +import static org.mockito.ArgumentMatchers.nullable;
    +import static org.mockito.Mockito.verify;
    +
    +/**
    + * Test ListStatusRemoteIterator operation.
    + */
    +public class ITestAbfsListStatusIterator extends AbstractAbfsIntegrationTest {
    +
    +  private static final int TEST_FILES_NUMBER = 1000;
    +
    +  public ITestAbfsListStatusIterator() throws Exception {
    +    super();
    +  }
    +
    +  @Test
    +  public void testListStatusRemoteIterator() throws Exception {
    +    Path testDir = createTestDirectory();
    +    setPageSize(10);
    +    final List<String> fileNames = createFilesUnderDirectory(TEST_FILES_NUMBER,
    +        testDir, "testListPath");
    +
    +    ListingSupport listngSupport = Mockito.spy(getFileSystem().getAbfsStore());
    +    RemoteIterator<FileStatus> fsItr = new AbfsListStatusRemoteIterator(
    +        getFileSystem().getFileStatus(testDir), listngSupport);
    +    Assertions.assertThat(fsItr)
    +        .describedAs("RemoteIterator should be instance of "
    +            + "AbfsListStatusRemoteIterator by default")
    +        .isInstanceOf(AbfsListStatusRemoteIterator.class);
    +    int itrCount = 0;
    +    while (fsItr.hasNext()) {
    +      FileStatus fileStatus = fsItr.next();
    +      String pathStr = fileStatus.getPath().toString();
    +      fileNames.remove(pathStr);
    +      itrCount++;
    +    }
    +    Assertions.assertThat(itrCount)
    +        .describedAs("Number of iterations should be equal to the files "
    +            + "created")
    +        .isEqualTo(TEST_FILES_NUMBER);
    +    Assertions.assertThat(fileNames.size())
    +        .describedAs("After removing every item found from the iterator, "
    +            + "there should be no more elements in the fileNames")
    +        .isEqualTo(0);
    +    verify(listngSupport, Mockito.atLeast(100))
    +        .listStatus(any(Path.class), nullable(String.class),
    +            anyList(), anyBoolean(),
    +            nullable(String.class));
    +  }
    +
    +  @Test
    +  public void testListStatusRemoteIteratorWithoutHasNext() throws Exception {
    +    Path testDir = createTestDirectory();
    +    setPageSize(10);
    +    final List<String> fileNames = createFilesUnderDirectory(TEST_FILES_NUMBER,
    +        testDir, "testListPath");
    +
    +    ListingSupport listngSupport = Mockito.spy(getFileSystem().getAbfsStore());
    +    RemoteIterator<FileStatus> fsItr = new AbfsListStatusRemoteIterator(
    +        getFileSystem().getFileStatus(testDir), listngSupport);
    +    Assertions.assertThat(fsItr)
    +        .describedAs("RemoteIterator should be instance of "
    +            + "AbfsListStatusRemoteIterator by default")
    +        .isInstanceOf(AbfsListStatusRemoteIterator.class);
    +    int itrCount = 0;
    +    for (int i = 0; i < TEST_FILES_NUMBER; i++) {
    +      FileStatus fileStatus = fsItr.next();
    +      String pathStr = fileStatus.getPath().toString();
    +      fileNames.remove(pathStr);
    +      itrCount++;
    +    }
    +    Assertions.assertThatThrownBy(() -> fsItr.next())
    +        .describedAs(
    +            "next() should throw NoSuchElementException since next has been "
    +                + "called "
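The tests above drive the iterator in the two supported styles: a hasNext()/next() loop, and bare next() calls until NoSuchElementException. The paging contract they exercise can be sketched without any Hadoop dependency; SimpleRemoteIterator and the Integer page layout below are hypothetical stand-ins (Hadoop's real RemoteIterator also declares throws IOException on both methods, omitted here for brevity):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class PagedIteratorDemo {
  // Hypothetical stand-in for org.apache.hadoop.fs.RemoteIterator; the real
  // interface additionally declares "throws IOException" on both methods.
  interface SimpleRemoteIterator<T> {
    boolean hasNext();
    T next();
  }

  // Serve a listing that arrives in fixed-size pages, advancing to the next
  // page only when the current one is exhausted -- the consumption contract
  // a caller of listStatusIterator() relies on.
  public static SimpleRemoteIterator<Integer> paged(List<List<Integer>> pages) {
    return new SimpleRemoteIterator<Integer>() {
      private final Iterator<List<Integer>> pageItr = pages.iterator();
      private Iterator<Integer> cur = Collections.emptyIterator();

      @Override
      public boolean hasNext() {
        while (!cur.hasNext() && pageItr.hasNext()) {
          cur = pageItr.next().iterator(); // move to the next batch lazily
        }
        return cur.hasNext();
      }

      @Override
      public Integer next() {
        if (!hasNext()) {
          throw new NoSuchElementException();
        }
        return cur.next();
      }
    };
  }

  public static int count(List<List<Integer>> pages) {
    SimpleRemoteIterator<Integer> it = paged(pages);
    int n = 0;
    while (it.hasNext()) {
      it.next();
      n++;
    }
    return n;
  }

  public static void main(String[] args) {
    int n = count(Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(4, 5)));
    if (n != 5) {
      throw new AssertionError("expected 5 entries, got " + n);
    }
    System.out.println("iterated " + n + " entries across 2 pages");
  }
}
```

Note how next() delegates to hasNext() internally, which is what lets the second test call next() alone and still get every entry plus a clean NoSuchElementException at the end.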
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542768&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542768 ]

ASF GitHub Bot logged work on HADOOP-17475:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 27/Jan/21 11:11
            Start Date: 27/Jan/21 11:11
    Worklog Time Spent: 10m

Work Description: bilaharith commented on a change in pull request #2548:
URL: https://github.com/apache/hadoop/pull/2548#discussion_r565223412

File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java

    @@ -0,0 +1,151 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.hadoop.fs.azurebfs.services;
    +
    +import java.io.IOException;
    +import java.util.ArrayList;
    +import java.util.Collections;
    +import java.util.Iterator;
    +import java.util.List;
    +import java.util.NoSuchElementException;
    +import java.util.concurrent.ArrayBlockingQueue;
    +import java.util.concurrent.CompletableFuture;
    +
    +import org.slf4j.Logger;
    +import org.slf4j.LoggerFactory;
    +
    +import org.apache.hadoop.fs.FileStatus;
    +import org.apache.hadoop.fs.RemoteIterator;
    +
    +public class AbfsListStatusRemoteIterator
    +    implements RemoteIterator<FileStatus> {
    +
    +  private static final Logger LOG = LoggerFactory
    +      .getLogger(AbfsListStatusRemoteIterator.class);
    +
    +  private static final boolean FETCH_ALL_FALSE = false;
    +  private static final int MAX_QUEUE_SIZE = 10;
    +
    +  private final FileStatus fileStatus;
    +  private final ListingSupport listingSupport;
    +  private final ArrayBlockingQueue<Iterator<FileStatus>> iteratorsQueue;
    +  private final Object asyncOpLock = new Object();
    +
    +  private volatile boolean isAsyncInProgress = false;
    +  private boolean firstBatch = true;
    +  private String continuation;
    +  private Iterator<FileStatus> currIterator;
    +  private IOException ioException;
    +
    +  public AbfsListStatusRemoteIterator(final FileStatus fileStatus,
    +      final ListingSupport listingSupport) {
    +    this.fileStatus = fileStatus;
    +    this.listingSupport = listingSupport;
    +    iteratorsQueue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE);
    +    currIterator = Collections.emptyIterator();
    +    fetchBatchesAsync();
    +  }
    +
    +  @Override
    +  public boolean hasNext() throws IOException {
    +    if (currIterator.hasNext()) {
    +      return true;
    +    }
    +    updateCurrentIterator();
    +    return currIterator.hasNext();
    +  }
    +
    +  @Override
    +  public FileStatus next() throws IOException {
    +    if (!this.hasNext()) {
    +      throw new NoSuchElementException();
    +    }
    +    return currIterator.next();
    +  }
    +
    +  private void updateCurrentIterator() throws IOException {
    +    fetchBatchesAsync();
    +    synchronized (this) {
    +      if (iteratorsQueue.isEmpty()) {
    +        if (ioException != null) {
    +          throw ioException;
    +        }
    +        if (isListingComplete()) {
    +          return;
    +        }
    +      }
    +    }
    +    try {
    +      currIterator = iteratorsQueue.take();
    +      if (!currIterator.hasNext() && !isListingComplete()) {
    +        updateCurrentIterator();
    +      }
    +    } catch (InterruptedException e) {
    +      Thread.currentThread().interrupt();
    +      LOG.error("Thread got interrupted: {}", e);
    +    }
    +  }
    +
    +  private synchronized boolean isListingComplete() {
    +    return !firstBatch && (continuation == null || continuation.isEmpty());
    +  }
    +
    +  private void fetchBatchesAsync() {
    +    if (isAsyncInProgress) {
    +      return;
    +    }
    +    synchronized (asyncOpLock) {
    +      if (isAsyncInProgress) {
    +        return;
    +      }
    +      isAsyncInProgress = true;
    +    }
    +    CompletableFuture.runAsync(() -> asyncOp());
    +  }
    +
    +  private void asyncOp() {
    +    try {
    +      while (!isListingComplete() && iteratorsQueue.size() <= MAX_QUEUE_SIZE) {
    +        addNextBatchIteratorToQueue();
    +      }
    +    } catch (IOException e) {
    +      ioException = e;

Review comment: Exceptions are also put into the queue. Future is not used.

File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java

    @@ -0,0 +1,151 @@
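The review reply describes the error path: the background thread records the IOException in a field and offers an empty iterator so that a consumer blocked on the queue wakes up and rethrows, with no Future carrying the result. A self-contained sketch of that hand-off, under assumed simplifications (String entries instead of FileStatus, a single batch, invented class name):

```java
import java.io.IOException;
import java.util.Collections;
import java.util.Iterator;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CompletableFuture;

public class QueueErrorDemo {
  private final ArrayBlockingQueue<Iterator<String>> queue =
      new ArrayBlockingQueue<>(10);
  private volatile IOException failure;

  // Producer: on IOException, record it and offer an empty iterator so a
  // consumer blocked in take() wakes up -- no Future needed to carry the error.
  public void produce(boolean fail) {
    CompletableFuture.runAsync(() -> {
      try {
        if (fail) {
          throw new IOException("listing failed");
        }
        queue.offer(Collections.singletonList("entry").iterator());
      } catch (IOException e) {
        failure = e;
        queue.offer(Collections.emptyIterator());
      }
    });
  }

  // Consumer: block for the next batch, then surface any recorded failure.
  public String consume() throws IOException, InterruptedException {
    Iterator<String> it = queue.take();
    if (!it.hasNext() && failure != null) {
      throw failure;
    }
    return it.next();
  }

  public static String run(boolean fail) {
    QueueErrorDemo demo = new QueueErrorDemo();
    demo.produce(fail);
    try {
      return demo.consume();
    } catch (IOException e) {
      return "error:" + e.getMessage();
    } catch (InterruptedException e) {
      return "interrupted";
    }
  }

  public static void main(String[] args) {
    if (!"entry".equals(run(false))) {
      throw new AssertionError("happy path lost the entry");
    }
    if (!"error:listing failed".equals(run(true))) {
      throw new AssertionError("IOException was not rethrown to the caller");
    }
    System.out.println("exception hand-off via queue works");
  }
}
```

The BlockingQueue transfer establishes the happens-before edge between writing `failure` in the producer and reading it in the consumer, which is why the sentinel empty iterator is enough.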
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542767&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542767 ]

ASF GitHub Bot logged work on HADOOP-17475:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 27/Jan/21 11:10
            Start Date: 27/Jan/21 11:10
    Worklog Time Spent: 10m

Work Description: bilaharith commented on a change in pull request #2548:
URL: https://github.com/apache/hadoop/pull/2548#discussion_r565222998

File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsListStatusIterator.java

    +  @Test
    +  public void testListStatusRemoteIterator() throws Exception {
    +    Path testDir = createTestDirectory();
    +    setPageSize(10);
    +    final List<String> fileNames = createFilesUnderDirectory(TEST_FILES_NUMBER,
    +        testDir, "testListPath");
    +
    +    ListingSupport listngSupport = Mockito.spy(getFileSystem().getAbfsStore());

Review comment: Here I want to mock so that I can verify the number of times a few of the internal methods are called.

Issue Time Tracking
-------------------
    Worklog Id: (was: 542767)
    Time Spent: 3h 40m  (was: 3.5h)
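The spy keeps the real listing behaviour while letting the test count how many times the store's listStatus was hit. Since Mockito is a test-only dependency, here is a dependency-free sketch of the same idea, wrapping a real collaborator and counting calls; the Listing interface and class names are invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class SpyCountDemo {
  // Invented minimal collaborator; stands in for the real ListingSupport.
  interface Listing {
    List<String> listPage(String continuation);
  }

  static class RealListing implements Listing {
    @Override
    public List<String> listPage(String continuation) {
      return Arrays.asList("a", "b");
    }
  }

  // Hand-rolled equivalent of Mockito.spy(real): delegate every call to the
  // real object while counting invocations, so a test can assert how many
  // pages were actually fetched (verify(spy, atLeast(n)) in the real test).
  static class CountingListing implements Listing {
    private final Listing delegate;
    int calls;

    CountingListing(Listing delegate) {
      this.delegate = delegate;
    }

    @Override
    public List<String> listPage(String continuation) {
      calls++;
      return delegate.listPage(continuation);
    }
  }

  public static int pagesFetched(int pages) {
    CountingListing spy = new CountingListing(new RealListing());
    for (int i = 0; i < pages; i++) {
      spy.listPage(null);
    }
    return spy.calls;
  }

  public static void main(String[] args) {
    if (pagesFetched(3) != 3) {
      throw new AssertionError("call count not recorded");
    }
    System.out.println("spy recorded " + pagesFetched(3) + " calls");
  }
}
```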
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542765&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542765 ]

ASF GitHub Bot logged work on HADOOP-17475:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 27/Jan/21 11:09
            Start Date: 27/Jan/21 11:09
    Worklog Time Spent: 10m

Work Description: bilaharith commented on a change in pull request #2548:
URL: https://github.com/apache/hadoop/pull/2548#discussion_r565222324

File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java

    +  private void fetchBatchesAsync() {
    +    if (isAsyncInProgress) {
    +      return;
    +    }
    +    synchronized (asyncOpLock) {
    +      if (isAsyncInProgress) {
    +        return;
    +      }
    +      isAsyncInProgress = true;
    +    }
    +    CompletableFuture.runAsync(() -> asyncOp());
    +  }
    +
    +  private void asyncOp() {
    +    try {
    +      while (!isListingComplete() && iteratorsQueue.size() <= MAX_QUEUE_SIZE) {
    +        addNextBatchIteratorToQueue();
    +      }
    +    } catch (IOException e) {
    +      ioException = e;
    +      iteratorsQueue.offer(Collections.emptyIterator());
    +    } catch (InterruptedException e) {
    +      Thread.currentThread().interrupt();
    +      LOG.error("Thread got interrupted: {}", e);
    +    } finally {
    +      synchronized (asyncOpLock) {
    +        isAsyncInProgress = false;
    +      }
    +    }
    +  }
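fetchBatchesAsync guards the background fetch with a cheap volatile read followed by a synchronized re-check, so at most one async fetch is ever in flight. A sketch of that double-checked guard in isolation (class and method names invented; the real code launches asyncOp via CompletableFuture.runAsync where this sketch just counts):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SingleFlightDemo {
  private volatile boolean isAsyncInProgress = false;
  private final Object asyncOpLock = new Object();
  private final AtomicInteger launches = new AtomicInteger();

  // Double-checked guard: the volatile read skips the monitor on the common
  // path; the synchronized re-check guarantees only one background fetch
  // is ever launched at a time.
  public void fetchOnce() {
    if (isAsyncInProgress) {
      return;                     // fast path, no lock taken
    }
    synchronized (asyncOpLock) {
      if (isAsyncInProgress) {
        return;                   // lost the race, someone else launched
      }
      isAsyncInProgress = true;
    }
    launches.incrementAndGet();   // real code: CompletableFuture.runAsync(...)
  }

  public void finished() {
    synchronized (asyncOpLock) {
      isAsyncInProgress = false;  // mirrors the finally block in asyncOp()
    }
  }

  public static int launchCount(int attempts) {
    SingleFlightDemo d = new SingleFlightDemo();
    for (int i = 0; i < attempts; i++) {
      d.fetchOnce();              // every call after the first is a no-op
    }
    return d.launches.get();
  }

  public static void main(String[] args) {
    if (launchCount(5) != 1) {
      throw new AssertionError("more than one fetch launched");
    }
    System.out.println("5 attempts, 1 launch");
  }
}
```

This guard only works because isAsyncInProgress is volatile: the fast-path read outside the lock would otherwise be allowed to see a stale value indefinitely.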
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542764&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542764 ] ASF GitHub Bot logged work on HADOOP-17475: --- Author: ASF GitHub Bot Created on: 27/Jan/21 11:09 Start Date: 27/Jan/21 11:09 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r56529 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java ## @@ -0,0 +1,151 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.CompletableFuture; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.RemoteIterator; + +public class AbfsListStatusRemoteIterator implements RemoteIterator<FileStatus> { + + private static final Logger LOG = LoggerFactory + .getLogger(AbfsListStatusRemoteIterator.class); + + private static final boolean FETCH_ALL_FALSE = false; + private static final int MAX_QUEUE_SIZE = 10; + + private final FileStatus fileStatus; + private final ListingSupport listingSupport; + private final ArrayBlockingQueue<Iterator<FileStatus>> iteratorsQueue; + private final Object asyncOpLock = new Object(); + + private volatile boolean isAsyncInProgress = false; + private boolean firstBatch = true; + private String continuation; + private Iterator<FileStatus> currIterator; + private IOException ioException; + + public AbfsListStatusRemoteIterator(final FileStatus fileStatus, + final ListingSupport listingSupport) { +this.fileStatus = fileStatus; +this.listingSupport = listingSupport; +iteratorsQueue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE); +currIterator = Collections.emptyIterator(); +fetchBatchesAsync(); + } + + @Override + public boolean hasNext() throws IOException { +if (currIterator.hasNext()) { + return true; +} +updateCurrentIterator(); +return currIterator.hasNext(); + } + + @Override + public FileStatus next() throws IOException { +if (!this.hasNext()) { + throw new NoSuchElementException(); +} +return currIterator.next(); + } + + private void updateCurrentIterator() throws IOException { +fetchBatchesAsync(); +synchronized (this) { + if (iteratorsQueue.isEmpty()) { +if (ioException != null) { + throw ioException; +} +if 
(isListingComplete()) { + return; +} + } +} +try { + currIterator = iteratorsQueue.take(); + if (!currIterator.hasNext() && !isListingComplete()) { +updateCurrentIterator(); Review comment: Moved away from recursion This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542764) Time Spent: 3h 20m (was: 3h 10m) > Implement listStatusIterator > > > Key: HADOOP-17475 > URL: https://issues.apache.org/jira/browse/HADOOP-17475 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Priority: Major > Labels: pull-request-available > Time Spent: 3h 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubsc
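The review notes that the advance logic was moved away from recursion: a self-recursive `updateCurrentIterator()` would use one stack frame per empty batch, while a do/while loop (as in the revised patch) keeps stack depth constant no matter how many consecutive batches come back empty. A toy stand-in for that shape — here batches come from a plain `Deque` rather than the real `ArrayBlockingQueue` fed by the listing thread:

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Illustrative skeleton of the "recursion -> loop" change: advance()
// loops over batches until it finds a non-empty one or the listing is
// exhausted, instead of calling itself once per empty batch.
public class BatchAdvance {
  private final Deque<Iterator<String>> batches = new ArrayDeque<>();
  private Iterator<String> curr = Collections.emptyIterator();

  public BatchAdvance(List<List<String>> fakeBatches) {
    fakeBatches.forEach(b -> batches.add(b.iterator()));
  }

  private boolean listingComplete() {
    return batches.isEmpty();
  }

  // Loop-based advance: skips any run of empty batches iteratively.
  private void advance() {
    do {
      curr = batches.isEmpty() ? Collections.<String>emptyIterator()
                               : batches.poll();
    } while (!curr.hasNext() && !listingComplete());
  }

  public boolean hasNext() {
    if (curr.hasNext()) {
      return true;
    }
    advance();
    return curr.hasNext();
  }

  public String next() {
    if (!hasNext()) {
      throw new NoSuchElementException();
    }
    return curr.next();
  }
}
```

With ten thousand empty batches in a row, the loop simply iterates ten thousand times; the recursive form would have risked a `StackOverflowError`.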
[GitHub] [hadoop] bilaharith commented on a change in pull request #2548: HADOOP-17475. ABFS: Implementing ListStatusRemoteIterator
bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565222324 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java ## @@ -0,0 +1,151 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.CompletableFuture; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.RemoteIterator; + +public class AbfsListStatusRemoteIterator implements RemoteIterator<FileStatus> { + + private static final Logger LOG = LoggerFactory + .getLogger(AbfsListStatusRemoteIterator.class); + + private static final boolean FETCH_ALL_FALSE = false; + private static final int MAX_QUEUE_SIZE = 10; + + private final FileStatus fileStatus; + private final ListingSupport listingSupport; + private final ArrayBlockingQueue<Iterator<FileStatus>> iteratorsQueue; + private final Object asyncOpLock = new Object(); + + private volatile boolean isAsyncInProgress = false; + private boolean firstBatch = true; + private String continuation; + private Iterator<FileStatus> currIterator; + private IOException ioException; + + public AbfsListStatusRemoteIterator(final FileStatus fileStatus, + final ListingSupport listingSupport) { +this.fileStatus = fileStatus; +this.listingSupport = listingSupport; +iteratorsQueue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE); +currIterator = Collections.emptyIterator(); +fetchBatchesAsync(); + } + + @Override + public boolean hasNext() throws IOException { +if (currIterator.hasNext()) { + return true; +} +updateCurrentIterator(); +return currIterator.hasNext(); + } + + @Override + public FileStatus next() throws IOException { +if (!this.hasNext()) { + throw new NoSuchElementException(); +} +return currIterator.next(); + } + + private void updateCurrentIterator() throws IOException { +fetchBatchesAsync(); +synchronized (this) { + if (iteratorsQueue.isEmpty()) { +if (ioException != null) { + throw ioException; +} +if 
(isListingComplete()) { + return; +} + } +} +try { + currIterator = iteratorsQueue.take(); + if (!currIterator.hasNext() && !isListingComplete()) { +updateCurrentIterator(); + } +} catch (InterruptedException e) { + Thread.currentThread().interrupt(); + LOG.error("Thread got interrupted: {}", e); +} + } + + private synchronized boolean isListingComplete() { +return !firstBatch && (continuation == null || continuation.isEmpty()); + } + + private void fetchBatchesAsync() { +if (isAsyncInProgress) { + return; +} +synchronized (asyncOpLock) { + if (isAsyncInProgress) { +return; + } + isAsyncInProgress = true; +} +CompletableFuture.runAsync(() -> asyncOp()); + } + + private void asyncOp() { +try { + while (!isListingComplete() && iteratorsQueue.size() <= MAX_QUEUE_SIZE) { +addNextBatchIteratorToQueue(); + } +} catch (IOException e) { + ioException = e; + iteratorsQueue.offer(Collections.emptyIterator()); +} catch (InterruptedException e) { + Thread.currentThread().interrupt(); + LOG.error("Thread got interrupted: {}", e); +} finally { + synchronized (asyncOpLock) { +isAsyncInProgress = false; + } +} + } + + private synchronized void addNextBatchIteratorToQueue() Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org --
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542763&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542763 ] ASF GitHub Bot logged work on HADOOP-17475: --- Author: ASF GitHub Bot Created on: 27/Jan/21 11:08 Start Date: 27/Jan/21 11:08 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565221875 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java ## @@ -0,0 +1,151 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.CompletableFuture; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.RemoteIterator; + +public class AbfsListStatusRemoteIterator implements RemoteIterator<FileStatus> { + + private static final Logger LOG = LoggerFactory + .getLogger(AbfsListStatusRemoteIterator.class); + + private static final boolean FETCH_ALL_FALSE = false; + private static final int MAX_QUEUE_SIZE = 10; + + private final FileStatus fileStatus; + private final ListingSupport listingSupport; + private final ArrayBlockingQueue<Iterator<FileStatus>> iteratorsQueue; + private final Object asyncOpLock = new Object(); + + private volatile boolean isAsyncInProgress = false; + private boolean firstBatch = true; Review comment: Done, introduced a new variable isIterationComplete. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542763) Time Spent: 3h 10m (was: 3h) > Implement listStatusIterator > > > Key: HADOOP-17475 > URL: https://issues.apache.org/jira/browse/HADOOP-17475 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542762&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542762 ] ASF GitHub Bot logged work on HADOOP-17475: --- Author: ASF GitHub Bot Created on: 27/Jan/21 11:08 Start Date: 27/Jan/21 11:08 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565221339 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java ## @@ -0,0 +1,156 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.CompletableFuture; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.RemoteIterator; + +public class AbfsListStatusRemoteIterator implements RemoteIterator<FileStatus> { + + private static final Logger LOG = LoggerFactory + .getLogger(AbfsListStatusRemoteIterator.class); + + private static final boolean FETCH_ALL_FALSE = false; + private static final int MAX_QUEUE_SIZE = 10; + + private final FileStatus fileStatus; + private final ListingSupport listingSupport; + private final ArrayBlockingQueue<Object> iteratorsQueue; + + private volatile boolean isAsyncInProgress = false; + private boolean isIterationComplete = false; + private String continuation; + private Iterator<FileStatus> currIterator; + + public AbfsListStatusRemoteIterator(final FileStatus fileStatus, + final ListingSupport listingSupport) { +this.fileStatus = fileStatus; +this.listingSupport = listingSupport; +iteratorsQueue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE); +currIterator = Collections.emptyIterator(); +fetchBatchesAsync(); + } + + @Override + public boolean hasNext() throws IOException { +if (currIterator.hasNext()) { + return true; +} +updateCurrentIterator(); +return currIterator.hasNext(); + } + + @Override + public FileStatus next() throws IOException { +if (!this.hasNext()) { + throw new NoSuchElementException(); +} +return currIterator.next(); + } + + private void updateCurrentIterator() throws IOException { +do { + currIterator = getNextIterator(); +} while (currIterator != null && !currIterator.hasNext() +&& !isIterationComplete); + } + + private Iterator<FileStatus> getNextIterator() throws IOException { +fetchBatchesAsync(); 
+synchronized (this) { + if (iteratorsQueue.isEmpty() && isIterationComplete) { + return Collections.emptyIterator(); + } +} +try { + Object obj = iteratorsQueue.take(); + if (obj instanceof Iterator) { +return (Iterator<FileStatus>) obj; + } + throw (IOException) obj; +} catch (InterruptedException e) { + Thread.currentThread().interrupt(); + LOG.error("Thread got interrupted: {}", e); + return Collections.emptyIterator(); +} + } + + private void fetchBatchesAsync() { +if (isAsyncInProgress) { + return; +} +synchronized (this) { + if (isAsyncInProgress) { +return; + } + isAsyncInProgress = true; +} +CompletableFuture.runAsync(() -> asyncOp()); + } + + private void asyncOp() { +try { + while (!isIterationComplete && iteratorsQueue.size() <= MAX_QUEUE_SIZE) { +addNextBatchIteratorToQueue(); + } +} catch (IOException e) { + try { +iteratorsQueue.put(e); + } catch (InterruptedException interruptedException) { +Thread.currentThread().interrupt(); +LOG.error("Thread got interrupted: {}", interruptedException); + } +} catch (InterruptedException e) { + Thread.currentThread().interrupt(); + LOG.error("Thread got interrupted: {}", e); +
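The revised version above cannot throw from the producer thread to the consumer directly, so it puts the `IOException` itself onto the shared queue and branches on `instanceof` after `take()`, which is why the queue holds `Object` rather than iterators only. A self-contained sketch of that hand-off (the class name and the `String` payload are illustrative):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// Sketch of the error-propagation idea: the producer puts either a
// batch iterator or the IOException it hit onto one BlockingQueue of
// Object; the consumer pattern-matches on take() and rethrows the
// exception on its own thread.
public class QueueErrorPropagation {
  private final BlockingQueue<Object> queue = new ArrayBlockingQueue<>(10);

  public void startProducer(boolean fail) {
    CompletableFuture.runAsync(() -> {
      try {
        if (fail) {
          throw new IOException("listing failed");
        }
        queue.put(Arrays.asList("a", "b").iterator());
      } catch (IOException e) {
        try {
          queue.put(e);              // hand the failure to the consumer
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        }
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      }
    });
  }

  @SuppressWarnings("unchecked")
  public Iterator<String> nextBatch() throws IOException, InterruptedException {
    Object obj = queue.take();       // blocks until the producer delivers
    if (obj instanceof Iterator) {
      return (Iterator<String>) obj;
    }
    throw (IOException) obj;         // rethrow on the consumer thread
  }
}
```

Compared with stashing the exception in a field, queueing it also unblocks a consumer that is already waiting in `take()`, so a failure never leaves the reader hanging.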
[GitHub] [hadoop] bilaharith commented on a change in pull request #2548: HADOOP-17475. ABFS: Implementing ListStatusRemoteIterator
bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565221339 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsListStatusRemoteIterator.java ## @@ -0,0 +1,156 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.concurrent.ArrayBlockingQueue; +import java.util.concurrent.CompletableFuture; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.RemoteIterator; + +public class AbfsListStatusRemoteIterator implements RemoteIterator { + + private static final Logger LOG = LoggerFactory + .getLogger(AbfsListStatusRemoteIterator.class); + + private static final boolean FETCH_ALL_FALSE = false; + private static final int MAX_QUEUE_SIZE = 10; + + private final FileStatus fileStatus; + private final ListingSupport listingSupport; + private final ArrayBlockingQueue iteratorsQueue; + + private volatile boolean isAsyncInProgress = false; + private boolean isIterationComplete = false; + private String continuation; + private Iterator currIterator; + + public AbfsListStatusRemoteIterator(final FileStatus fileStatus, + final ListingSupport listingSupport) { +this.fileStatus = fileStatus; +this.listingSupport = listingSupport; +iteratorsQueue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE); +currIterator = Collections.emptyIterator(); +fetchBatchesAsync(); + } + + @Override + public boolean hasNext() throws IOException { +if (currIterator.hasNext()) { + return true; +} +updateCurrentIterator(); +return currIterator.hasNext(); + } + + @Override + public FileStatus next() throws IOException { +if (!this.hasNext()) { + throw new NoSuchElementException(); +} +return currIterator.next(); + } + + private void updateCurrentIterator() throws IOException { +do { + currIterator = getNextIterator(); +} while (currIterator != null && !currIterator.hasNext() +&& !isIterationComplete); + } + + private Iterator getNextIterator() throws IOException { +fetchBatchesAsync(); 
+synchronized (this) { + if (iteratorsQueue.isEmpty() && isIterationComplete) { + return Collections.emptyIterator(); + } +} +try { + Object obj = iteratorsQueue.take(); + if(obj instanceof Iterator){ +return (Iterator) obj; + } + throw (IOException) obj; +} catch (InterruptedException e) { + Thread.currentThread().interrupt(); + LOG.error("Thread got interrupted: {}", e); + return Collections.emptyIterator(); +} + } + + private void fetchBatchesAsync() { +if (isAsyncInProgress) { + return; +} +synchronized (this) { + if (isAsyncInProgress) { +return; + } + isAsyncInProgress = true; +} +CompletableFuture.runAsync(() -> asyncOp()); + } + + private void asyncOp() { +try { + while (!isIterationComplete && iteratorsQueue.size() <= MAX_QUEUE_SIZE) { +addNextBatchIteratorToQueue(); + } +} catch (IOException e) { + try { +iteratorsQueue.put(e); + } catch (InterruptedException interruptedException) { +Thread.currentThread().interrupt(); +LOG.error("Thread got interrupted: {}", interruptedException); + } +} catch (InterruptedException e) { + Thread.currentThread().interrupt(); + LOG.error("Thread got interrupted: {}", e); +} finally { + synchronized (this ) { +isAsyncInProgress = false; + } +} + } + + private void addNextBatchIteratorToQueue() + throws IOException, InterruptedException { +List fileStatuses = new ArrayList<>(); +continuation = listingSupport +.listStatus(fileStatus.getPath(), null, fileStatuses, FETCH_ALL_FALSE, +continuation); +iteratorsQueue.put(fileStatuses.iterato
[jira] [Work logged] (HADOOP-17475) Implement listStatusIterator
[ https://issues.apache.org/jira/browse/HADOOP-17475?focusedWorklogId=542761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-542761 ] ASF GitHub Bot logged work on HADOOP-17475: --- Author: ASF GitHub Bot Created on: 27/Jan/21 11:06 Start Date: 27/Jan/21 11:06 Worklog Time Spent: 10m Work Description: bilaharith commented on a change in pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#discussion_r565220614 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java ## @@ -45,6 +45,8 @@ import org.apache.commons.lang3.ArrayUtils; import org.apache.hadoop.fs.azurebfs.services.AbfsClient; import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.azurebfs.services.AbfsListStatusRemoteIterator; Review comment: Done This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 542761) Time Spent: 2h 50m (was: 2h 40m) > Implement listStatusIterator > > > Key: HADOOP-17475 > URL: https://issues.apache.org/jira/browse/HADOOP-17475 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Priority: Major > Labels: pull-request-available > Time Spent: 2h 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2655: HDFS-15714: HDFS Provided Storage Read/Write Mount Support On-the-fly
hadoop-yetus commented on pull request #2655: URL: https://github.com/apache/hadoop/pull/2655#issuecomment-768192307

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 27s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 79 new or modified test files. |
|||| _ HDFS-15714 Compile Tests _ |
| +0 :ok: | mvndep | 13m 53s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 50s | | HDFS-15714 passed |
| +1 :green_heart: | compile | 21m 54s | | HDFS-15714 passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 18m 22s | | HDFS-15714 passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 4m 9s | | HDFS-15714 passed |
| +1 :green_heart: | mvnsite | 6m 3s | | HDFS-15714 passed |
| +1 :green_heart: | shadedclient | 27m 51s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 4m 30s | | HDFS-15714 passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 5m 53s | | HDFS-15714 passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 46s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 11m 26s | | HDFS-15714 passed |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 31s | | the patch passed |
| +1 :green_heart: | compile | 21m 7s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| -1 :x: | cc | 21m 7s | [/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 30 new + 142 unchanged - 30 fixed = 172 total (was 172) |
| -1 :x: | javac | 21m 7s | [/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 67 new + 2006 unchanged - 27 fixed = 2073 total (was 2033) |
| +1 :green_heart: | compile | 22m 21s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| -1 :x: | cc | 22m 21s | [/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 33 new + 139 unchanged - 33 fixed = 172 total (was 172) |
| -1 :x: | javac | 22m 21s | [/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 67 new + 1901 unchanged - 27 fixed = 1968 total (was 1928) |
| -0 :warning: | checkstyle | 4m 43s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 144 new + 4280 unchanged - 35 fixed = 4424 total (was 4315) |
| +1 :green_heart: | mvnsite | 9m 18s | | the patch passed |
| -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2655/1/artifact/out/whitespace-eol.txt) | The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | xml | 0m 7s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 19m 31s | | patch has no errors |
[GitHub] [hadoop] steveloughran commented on a change in pull request #2624: MAPREDUCE-7317. Add latency information in FileOutputCommitter.mergePaths.
steveloughran commented on a change in pull request #2624: URL: https://github.com/apache/hadoop/pull/2624#discussion_r565187998

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java

```diff
@@ -455,53 +455,50 @@ protected void commitJobInternal(JobContext context) throws IOException {
    */
   private void mergePaths(FileSystem fs, final FileStatus from,
       final Path to, JobContext context) throws IOException {
-    long timeStartNs = -1L;
-    if (LOG.isDebugEnabled()) {
-      timeStartNs = System.nanoTime();
-      LOG.debug("Merging data from " + from + " to " + to);
-    }
-    reportProgress(context);
-    FileStatus toStat;
-    try {
-      toStat = fs.getFileStatus(to);
-    } catch (FileNotFoundException fnfe) {
-      toStat = null;
-    }
-
-    if (from.isFile()) {
-      if (toStat != null) {
-        if (!fs.delete(to, true)) {
-          throw new IOException("Failed to delete " + to);
-        }
+    try (DurationInfo d = new DurationInfo(LOG,
+        false,
+        "Merged data from %s to %s", from.getPath(), to)) {
```

Review comment: we actually print this *at start* as well as end; end has timings. So you don't need lines 461 & 462

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java

The same hunk, continuing one line further into the new code:

```diff
+    try (DurationInfo d = new DurationInfo(LOG,
+        false,
+        "Merged data from %s to %s", from.getPath(), to)) {
+      if (LOG.isDebugEnabled()) {
```

Review comment: cut these; duplicate now

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
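The review above replaces hand-rolled `System.nanoTime()` bookkeeping with Hadoop's `DurationInfo`, an `AutoCloseable` that logs the operation text when it is constructed and logs it again, with the elapsed time, when the try-with-resources block closes; that is why the manual `LOG.debug("Merging data from ...")` lines become redundant. The sketch below is a minimal, self-contained stand-in illustrating that pattern only; `DurationSketch` is a hypothetical class, not the real `org.apache.hadoop.util.DurationInfo`, and its log output format is invented for the example.

```java
// Simplified stand-in for the DurationInfo try-with-resources timing pattern.
// It mimics the observable behaviour the review relies on: a message is
// printed at start, and the same message plus the elapsed time at close.
public class DurationSketch implements AutoCloseable {
    private final String text;
    private final long startNs;

    public DurationSketch(String format, Object... args) {
        this.text = String.format(format, args);
        this.startNs = System.nanoTime();
        // Printed on entry, so callers need no separate "starting" debug line.
        System.out.println("Starting: " + text);
    }

    public long elapsedMillis() {
        return (System.nanoTime() - startNs) / 1_000_000L;
    }

    @Override
    public void close() {
        // Printed on exit of the try block, carrying the timing.
        System.out.println(text + ": duration " + elapsedMillis() + " ms");
    }

    public static void main(String[] args) throws Exception {
        try (DurationSketch d = new DurationSketch(
                "Merged data from %s to %s", "/tmp/src", "/tmp/dst")) {
            Thread.sleep(20); // stand-in for the actual merge work
        }
    }
}
```

Because the logging happens in the constructor and in `close()`, the timed region is exactly the lexical extent of the try block, and the end message is emitted even if the body throws, which the earlier manual-timing code did not guarantee.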